cc5276141c4dc2c245d84e97ad2adc90148be137
The Business Model as a Tool of Improving Value Creation in Complex Private Service System - Case: Value Network of Electric Mobility
This paper shows how the business model concept can be used as an analysis tool for understanding and describing the complexity of value creation in a developing industry, namely electric mobility. The business model concept is applied as a theoretical framework in action-research-based, facilitated workshops with nine case companies operating in the field of electric mobility. The concept turned out to be a promising tool for creating a better understanding of value creation in electric mobility, and thus a potential framework for development work by actors in that field. The concept can also help companies improve cooperation with other players by providing a common terminology.
208986a77f330b6bb66f33d1f7589bd7953f0a7a
Exact robot navigation using artificial potential functions
We present a new methodology for exact robot motion planning and control that unifies the purely kinematic path planning problem with the lower level feedback controller design. Complete information about the freespace and goal is encoded in the form of a special artificial potential function, a navigation function, that connects the kinematic planning problem with the dynamic execution problem in a provably correct fashion. The navigation function automatically gives rise to a bounded-torque feedback controller for the robot's actuators that guarantees collision-free motion and convergence to the destination from almost all initial free configurations. Since navigation functions exist for any robot and obstacle course, our methodology is completely general in principle. However, this paper is mainly concerned with certain constructive techniques for a particular class of motion planning problems. Specifically, we present a formula for navigation functions that guide a point-mass robot in a generalized sphere world. The simplest member of this family is a space obtained by puncturing a disc by an arbitrary number of smaller disjoint discs representing obstacles. The other spaces are obtained from this model by a suitable coordinate transformation that we show how to build. Our constructions exploit these coordinate transformations to adapt a navigation function on the model space to its more geometrically complicated (but topologically equivalent) instances. The formula that we present admits sphere-worlds of arbitrary dimension and is directly applicable to configuration spaces whose forbidden regions can be modeled by such generalized discs. We have implemented these navigation functions on planar scenarios, and simulation results are provided throughout the paper. Reprinted from IEEE Transactions on Robotics and Automation, Volume 8, Issue 5, October 1992, pages 501-518.
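A minimal numerical sketch of the navigation-function idea for a point robot in a planar disc world (the model space described above). The obstacle layout, the tuning exponent k, and the normalized gradient-descent "controller" are illustrative assumptions, not the paper's construction or its closed-form gradient.

```python
import numpy as np

# Sphere world: bounding disc of radius R0 centred at the origin, plus smaller
# disjoint disc obstacles given as (centre, radius). The layout is illustrative.
R0 = 2.0
obstacles = [(np.array([0.6, 0.4]), 0.3), (np.array([-0.8, -0.2]), 0.4)]
goal = np.array([1.4, -1.0])
k = 4.0  # tuning exponent of the navigation function

def beta(q):
    """Product of obstacle functions; positive exactly on the free space."""
    b = R0 ** 2 - q @ q                      # inside the bounding disc
    for c, r in obstacles:
        b *= (q - c) @ (q - c) - r ** 2      # outside each obstacle disc
    return b

def phi(q):
    """Navigation-function-style potential: 0 at the goal, 1 on obstacle boundaries."""
    gamma = (q - goal) @ (q - goal)
    return gamma / (gamma ** k + beta(q)) ** (1.0 / k)

def grad(f, q, h=1e-6):
    """Central-difference gradient, standing in for the closed-form expression."""
    g = np.zeros_like(q)
    for i in range(q.size):
        e = np.zeros_like(q)
        e[i] = h
        g[i] = (f(q + e) - f(q - e)) / (2 * h)
    return g

# Descending the potential plays the role of the feedback controller.
q = np.array([-1.2, 1.0])
for _ in range(3000):
    if np.linalg.norm(q - goal) < 0.05:
        break
    g = grad(phi, q)
    q = q - 0.01 * g / (np.linalg.norm(g) + 1e-12)

print("final position:", np.round(q, 3), " distance to goal:", round(float(np.linalg.norm(q - goal)), 3))
```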
0e3a73bc01e5cb377b49c11440ba717f33c443ed
AmbiguityVis: Visualization of Ambiguity in Graph Layouts
Node-link diagrams provide an intuitive way to explore networks and have inspired a large number of automated graph layout strategies that optimize aesthetic criteria. However, any particular drawing approach cannot fully satisfy all these criteria simultaneously, producing drawings with visual ambiguities that can impede the understanding of network structure. To bring attention to these potentially problematic areas present in the drawing, this paper presents a technique that highlights common types of visual ambiguities: ambiguous spatial relationships between nodes and edges, visual overlap between community structures, and ambiguity in edge bundling and metanodes. Metrics, including newly proposed metrics for abnormal edge lengths, visual overlap in community structures and node/edge aggregation, are proposed to quantify areas of ambiguity in the drawing. These metrics and others are then displayed using a heatmap-based visualization that provides visual feedback to developers of graph drawing and visualization approaches, allowing them to quickly identify misleading areas. The novel metrics and the heatmap-based visualization allow a user to explore ambiguities in graph layouts from multiple perspectives in order to make reasonable graph layout choices. The effectiveness of the technique is demonstrated through case studies and expert reviews.
cb6be7b2eb8382a85fdc48f1ca123d59d7b003ce
Definition Extraction with LSTM Recurrent Neural Networks
Definition extraction is the task of identifying definitional sentences automatically in unstructured text. The task can be used for ontology generation, relation extraction and question answering. Previous methods use handcrafted features generated from the dependency structure of a sentence. During this process, only part of the dependency structure is used to extract features, thus causing information loss. We model definition extraction as a supervised sequence classification task and propose a new way to automatically generate sentence features using a Long Short-Term Memory neural network model. Our method directly learns features from raw sentences and the corresponding part-of-speech sequence, which makes full use of the whole sentence. We experiment on the Wikipedia benchmark dataset and obtain 91.2% on F1 score, which outperforms the current state-of-the-art methods by 5.8%. We also show the effectiveness of our method in dealing with other languages by testing on a Chinese dataset and obtaining 85.7% on F1 score.
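A minimal PyTorch sketch of the general idea: encode a sentence's word and POS index sequences with an LSTM and classify it as definitional or not. The vocabulary sizes, embedding dimensions, and the single bidirectional layer are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DefinitionClassifier(nn.Module):
    """Encode a (word, POS) sequence with an LSTM and classify it as definitional or not."""
    def __init__(self, vocab_size, pos_size, emb_dim=100, pos_dim=20, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        self.lstm = nn.LSTM(emb_dim + pos_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)  # definitional vs. non-definitional

    def forward(self, words, tags):
        x = torch.cat([self.word_emb(words), self.pos_emb(tags)], dim=-1)
        _, (h, _) = self.lstm(x)                 # h: (2, batch, hidden) for one bi-layer
        h = torch.cat([h[0], h[1]], dim=-1)      # concatenate both directions
        return self.out(h)

# Toy usage: a batch of 4 sentences of 12 tokens each, with random indices.
model = DefinitionClassifier(vocab_size=5000, pos_size=50)
words = torch.randint(0, 5000, (4, 12))
tags = torch.randint(0, 50, (4, 12))
loss = nn.CrossEntropyLoss()(model(words, tags), torch.tensor([1, 0, 1, 0]))
loss.backward()
print("toy loss:", float(loss))
```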
63aecf799caa0357c8cc0e40a49116c56703895a
Path Planning with Phased Array SLAM and Voronoi Tessellation
Autonomous vehicles must often navigate environments that are at least partially unknown. They are faced with the tasks of creating a coordinate system on which to localize themselves, identifying the positions of obstacles, and charting safe paths through the environment. This process is known as Simultaneous Localization and Mapping, or SLAM. SLAM has traditionally been executed using measurements of distance to features in the environment. We propose an angle-based methodology using a single phased-array antenna and DSP, aimed at reducing this requirement to a single path for each data type. Additionally, our method makes use of rudimentary echo-location to discover reflective obstacles. Finally, our method uses Voronoi tessellation for path planning.
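A minimal sketch of the Voronoi-tessellation idea for path planning: the Voronoi diagram of mapped obstacle points has vertices and edges that stay locally far from obstacles, so they can serve as safe waypoints and a roadmap. The obstacle coordinates and clearance threshold below are invented for illustration; they are not the paper's mapping pipeline.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

# Obstacle positions as they might be reported by a SLAM front end (illustrative values).
obstacles = np.array([
    [1.0, 1.0], [2.0, 4.0], [4.5, 1.5], [5.0, 4.5],
    [7.0, 2.0], [8.0, 5.0], [3.0, 7.0], [6.5, 7.5],
])

vor = Voronoi(obstacles)
tree = cKDTree(obstacles)

# Keep Voronoi vertices with enough clearance from the nearest obstacle;
# these are candidate waypoints for a graph search / path planner.
clearance = 1.0
dists, _ = tree.query(vor.vertices)
waypoints = vor.vertices[dists > clearance]

# Voronoi edges between two kept vertices form a safe roadmap.
kept = set(map(tuple, waypoints.round(6)))
roadmap = []
for a, b in vor.ridge_vertices:
    if a == -1 or b == -1:          # skip ridges that extend to infinity
        continue
    pa, pb = vor.vertices[a], vor.vertices[b]
    if tuple(pa.round(6)) in kept and tuple(pb.round(6)) in kept:
        roadmap.append((pa, pb))

print(f"{len(waypoints)} safe waypoints, {len(roadmap)} roadmap edges")
```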
f3218cff7b9233ebeffa8e912ee0f40cd6da331e
Word-Formation in English
Published by the Press Syndicate of the University of Cambridge. A catalogue record for this book is available from the British Library. Library of Congress Cataloguing in Publication data: Plag, Ingo. Word-formation in English / Ingo Plag. (Cambridge Textbooks in Linguistics). Includes bibliographical references and indexes. Contents: Preface (page xi); Abbreviations and notational conventions (xiii); Introduction (1); 1 Basic concepts (4): 1.1 What is a word? (4); 1.2 Studying word-formation (9); 1.3 Inflection and derivation (14); 1.4 Summary (17); Further reading (18); Exercises (18); 2 Studying complex words (20): 2.1 Identifying morphemes (20); 2.1.1 The morpheme as the minimal linguistic sign (20); 2.1.2 Problems with the morpheme: the mapping of form and meaning (22); 2.2 Allomorphy (27); 2.3 Establishing word-formation rules (30); 2.4 Multiple affixation (38); 2.5 Summary (41); Further reading (41); Exercises (41).
30f0e97d498fc3684761d4eec1ea957887399a9e
Securing SMS Based One Time Password Technique from Man in the Middle Attack
Security of financial transactions in e-commerce is difficult to implement, and there is a risk that a user's confidential data may be accessed by hackers over the internet. Unfortunately, interacting with an online service such as a banking web application often requires a certain degree of technical sophistication that not all Internet users possess. For the last couple of years such naive users have been increasingly targeted by phishing attacks launched by miscreants aiming to make an easy profit by means of illegal financial transactions. In this paper, we propose an idea for securing e-commerce transactions against phishing attacks. An approach already exists in which phishing is prevented using a one-time password (OTP) sent to the user's registered mobile via SMS for authentication. However, this method can be counter-attacked by a man-in-the-middle. In our paper, a new scheme is proposed that is more secure than the existing OTP-based online payment system. In this mechanism the OTP is combined with a secure key and then passed through the RSA algorithm to generate the transaction password. A copy of this password is maintained at the server side and is generated at the user side using a mobile application, so that it is never transferred over the insecure network, where it could lead to a fraudulent transaction. Keywords—Phishing, Replay attack, MITM attack, RSA, Random Generator.
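A toy, self-contained sketch of the stated idea: the SMS OTP is combined with a pre-shared secure key and passed through RSA to derive a transaction password that the server and the mobile application can each compute locally, so the password itself never crosses the network. The tiny key size, the hash-based combination rule, and the truncation to 8 digits are illustrative assumptions only; this is nowhere near a production-grade construction.

```python
import hashlib

# Toy RSA public parameters (far too small for real use; for illustration only).
p, q = 1000003, 1000033
n, e = p * q, 65537

def transaction_password(otp: str, secure_key: str) -> str:
    """Combine the OTP with the shared secure key, run it through RSA,
    and reduce the result to a short numeric transaction password."""
    combined = int.from_bytes(hashlib.sha256((otp + secure_key).encode()).digest(), "big") % n
    cipher = pow(combined, e, n)          # RSA encryption of the combined value
    return str(cipher % 10**8).zfill(8)   # 8-digit password derived from the ciphertext

# Both the server and the mobile app hold the secure key and see the same OTP,
# so each side derives the same transaction password independently.
otp = "493021"           # delivered via SMS
secure_key = "K7f!29zQ"  # pre-shared at registration time (hypothetical value)
print("server side :", transaction_password(otp, secure_key))
print("client side :", transaction_password(otp, secure_key))
```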
e913e0d75d00fb790d2d3e75d2ea6e2645757b2c
Contingencies of self-worth.
Research on self-esteem has focused almost exclusively on level of trait self-esteem to the neglect of other potentially more important aspects such as the contingencies on which self-esteem is based. Over a century ago, W. James (1890) argued that self-esteem rises and falls around its typical level in response to successes and failures in domains on which one has staked self-worth. We present a model of global self-esteem that builds on James' insights and emphasizes contingencies of self-worth. This model can help to (a) point the way to understanding how self-esteem is implicated in affect, cognition, and self-regulation of behavior; (b) suggest how and when self-esteem is implicated in social problems; (c) resolve debates about the nature and functioning of self-esteem; (d) resolve paradoxes in related literatures, such as why people who are stigmatized do not necessarily have low self-esteem and why self-esteem does not decline with age; and (e) suggest how self-esteem is causally related to depression. In addition, this perspective raises questions about how contingencies of self-worth are acquired and how they change, whether they are primarily a resource or a vulnerability, and whether some people have noncontingent self-esteem.
f0d401122c35b43ceec1441c548be75d96783673
A cognitive-affective system theory of personality: reconceptualizing situations, dispositions, dynamics, and invariance in personality structure.
A theory was proposed to reconcile paradoxical findings on the invariance of personality and the variability of behavior across situations. For this purpose, individuals were assumed to differ in (a) the accessibility of cognitive-affective mediating units (such as encodings, expectancies and beliefs, affects, and goals) and (b) the organization of relationships through which these units interact with each other and with psychological features of situations. The theory accounts for individual differences in predictable patterns of variability across situations (e.g., if A then she X, but if B then she Y), as well as for overall average levels of behavior, as essential expressions or behavioral signatures of the same underlying personality system. Situations, personality dispositions, dynamics, and structure were reconceptualized from this perspective.
29f94f5a209294554615d925a53c53b3a9649dd1
AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games
Evaluating agent performance when outcomes are stochastic and agents use randomized strategies can be challenging when there is limited data available. The variance of sampled outcomes may make the simple approach of Monte Carlo sampling inadequate. This is the case for agents playing heads-up no-limit Texas hold'em poker, where man-machine competitions typically involve multiple days of consistent play by multiple players, but still can (and sometimes did) result in statistically insignificant conclusions. In this paper, we introduce AIVAT, a low-variance, provably unbiased value assessment tool that exploits an arbitrary heuristic estimate of state value, as well as the explicit strategy of a subset of the agents. Unlike existing techniques which reduce the variance from chance events, or only consider game-ending actions, AIVAT reduces the variance both from choices by nature and from players with a known strategy. The resulting estimator produces results that significantly outperform previous state-of-the-art techniques. It was able to reduce the standard deviation of a Texas hold'em poker man-machine match by 85% and consequently requires 44 times fewer games to draw the same statistical conclusion. AIVAT enabled the first statistically significant AI victory against professional poker players in no-limit hold'em. Furthermore, the technique was powerful enough to produce statistically significant results versus individual players, not just an aggregate pool of players. We also used AIVAT to analyze a short series of AI vs human poker tournaments, producing statistically significant results with as few as 28 matches.
105e77b8182b6e591247918f45f04c6a67f9a72f
Methods of suicide: international suicide patterns derived from the WHO mortality database.
OBJECTIVE Accurate information about preferred suicide methods is important for devising strategies and programmes for suicide prevention. Our knowledge of the methods used and their variation across countries and world regions is still limited. The aim of this study was to provide the first comprehensive overview of international patterns of suicide methods. METHODS Data encoded according to the International Classification of Diseases (10th revision) were derived from the WHO mortality database. The classification was used to differentiate suicide methods. Correspondence analysis was used to identify typical patterns of suicide methods in different countries by providing a summary of cross-tabulated data. FINDINGS Poisoning by pesticide was common in many Asian countries and in Latin America; poisoning by drugs was common in both Nordic countries and the United Kingdom. Hanging was the preferred method of suicide in eastern Europe, as was firearm suicide in the United States and jumping from a high place in cities and urban societies such as Hong Kong Special Administrative Region, China. Correspondence analysis demonstrated a polarization between pesticide suicide and firearm suicide at the expense of traditional methods, such as hanging and jumping from a high place, which lay in between. CONCLUSION This analysis showed that pesticide suicide and firearm suicide replaced traditional methods in many countries. The observed suicide pattern depended upon the availability of the methods used, in particular the availability of technical means. The present evidence indicates that restricting access to the means of suicide is more urgent and more technically feasible than ever.
65f34550cac522b9cd702d2a1bf81de66406ee24
Active Contour Models for Extracting Ground and Forest Canopy Curves from Discrete Laser Altimeter Data
The importance of finding efficient ways of quantifying terrestrial carbon stocks at a global scale has increased due to the concerns about global climate change. Exchange of carbon between forests and the atmosphere is a vital component of the global carbon cycle (Nelson et al. 2003). Recent advances in remote sensing technology have facilitated rapid and inexpensive measurements of topography over large areas (Zhang et al. 2003).
762f678d97238c407aa5d63ae5aaaa963e1c4c7e
Managing Complexity in Aircraft Design Using Design Structure Matrix
Modern aerospace systems have reached a level of complexity that requires systematic methods for their design. The development of products in the aircraft industry involves numerous engineers from different disciplines working on independent components. Routine activities consume a significant part of aircraft development time. To be competitive, the aircraft industry needs to manage complexity and readjust the ratio of innovative versus routine work. Thus, the main objective is to develop an approach that can manage complexity in engineering design, reduce the design cycle time, and reduce the product development cost. The design structure matrix (DSM) is a simple tool to perform both analysis and management of complex systems. It enables the user to model, visualize, and analyze the dependencies among the functional groups of any system and derive suggestions for the improvement or synthesis of a system. This article illustrates with a case study how DSM can be used to manage complexity in the aircraft design process. The results of implementing the DSM approach for the Light Combat Aircraft (Navy) at Hindustan Aeronautics Limited, India, show primary benefits of 75% reduction in routine activities, 33% reduction in design cycle time, 50% reduction in rework cost, and 30% reduction in product and process development cost of the aircraft.
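A minimal sketch of a DSM as a binary dependency matrix together with a simple level-partitioning (sequencing) pass, which orders tasks so that information producers come before consumers where possible. The task names and dependencies are made up; real DSM workflows also involve clustering and "tearing" of coupled blocks.

```python
import numpy as np

tasks = ["Aero loads", "Structure sizing", "Mass estimate", "Stability check", "Performance"]
# dsm[i, j] = 1 means task i needs an input from task j.
dsm = np.array([
    [0, 0, 0, 0, 0],   # Aero loads
    [1, 0, 0, 0, 0],   # Structure sizing <- Aero loads
    [0, 1, 0, 0, 0],   # Mass estimate    <- Structure sizing
    [1, 0, 1, 0, 0],   # Stability check  <- Aero loads, Mass estimate
    [0, 0, 1, 1, 0],   # Performance      <- Mass estimate, Stability check
])

def level_partition(dsm):
    """Group tasks into levels: a task is scheduled once all its inputs are scheduled.
    Any leftover tasks (circular dependencies) would need tearing/iteration."""
    remaining = set(range(len(dsm)))
    levels = []
    while remaining:
        ready = [i for i in remaining
                 if all(j not in remaining or dsm[i, j] == 0 for j in range(len(dsm)))]
        if not ready:                      # coupled block: no task has all inputs available
            levels.append(sorted(remaining))
            break
        levels.append(ready)
        remaining -= set(ready)
    return levels

for k, level in enumerate(level_partition(dsm), 1):
    print(f"level {k}:", [tasks[i] for i in level])
```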
5260a9aa1ba1e46f1acbd4472a2d3bdb175fc67c
Sub-event detection from tweets
Social media plays an important role in communication between people in recent times. This includes information about news and events that are currently happening. Most of the research on event detection concentrates on identifying events from social media information. These models assume an event to be a single entity and treat it as such during the detection process. This assumption ignores that the composition of an event changes as new information is made available on social media. To capture the change in information over time, we extend an already existing Event Detection at Onset algorithm to study the evolution of an event over time. We introduce the concept of an event life cycle model that tracks various key events in the evolution of an event. The proposed unsupervised sub-event detection method uses a threshold-based approach to identify relationships between sub-events over time. These related events are mapped to an event life cycle to identify sub-events. We evaluate the proposed sub-event detection approach on a large-scale Twitter corpus.
6babe6becc5e13ae72d19dde27dc7f80a9642d59
Asynchronous Complex Analytics in a Distributed Dataflow Architecture
Scalable distributed dataflow systems have recently experienced widespread adoption, with commodity dataflow engines such as Hadoop and Spark, and even commodity SQL engines routinely supporting increasingly sophisticated analytics tasks (e.g., support vector machines, logistic regression, collaborative filtering). However, these systems’ synchronous (often Bulk Synchronous Parallel) dataflow execution model is at odds with an increasingly important trend in the machine learning community: the use of asynchrony via shared, mutable state (i.e., data races) in convex programming tasks, which has—in a single-node context—delivered noteworthy empirical performance gains and inspired new research into asynchronous algorithms. In this work, we attempt to bridge this gap by evaluating the use of lightweight, asynchronous state transfer within a commodity dataflow engine. Specifically, we investigate the use of asynchronous sideways information passing (ASIP) that presents single-stage parallel iterators with a Volcano-like intra-operator iterator that can be used for asynchronous information passing. We port two synchronous convex programming algorithms, stochastic gradient descent and the alternating direction method of multipliers (ADMM), to use ASIPs. We evaluate an implementation of ASIPs within Apache Spark that exhibits considerable speedups as well as a rich set of performance trade-offs in the use of these asynchronous algorithms.
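A minimal single-machine sketch of the asynchronous, shared-mutable-state style of convex optimization the abstract refers to: several threads run SGD on a logistic-regression loss against one shared weight vector with no locking (a Hogwild-style setup). This illustrates the algorithmic pattern ASIPs are meant to enable inside a dataflow engine; it is not the ASIP mechanism itself, and the data is synthetic.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)  # shared, mutable state updated by all workers without locks

def worker(rows, epochs=5, lr=0.05):
    for _ in range(epochs):
        for i in rows:
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w[:] = w - lr * (p - y[i]) * X[i]   # racy in-place update on shared state

threads = [threading.Thread(target=worker, args=(range(k, n, 4),)) for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

acc = np.mean(((X @ w) > 0) == y.astype(bool))
print("training accuracy of the asynchronously trained model:", round(float(acc), 3))
```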
af8a5f3b44be82472ed40f14313e7f7ef9fb148c
Motor-learning-related changes in piano players and non-musicians revealed by functional magnetic-resonance signals
In this study, we investigated blood-flow-related magnetic-resonance (MR) signal changes and the time course underlying short-term motor learning of the dominant right hand in ten piano players (PPs) and 23 non-musicians (NMs), using a complex finger-tapping task. The activation patterns were analyzed for selected regions of interest (ROIs) within the two examined groups and were related to the subjects’ performance. A functional learning profile, based on the regional blood-oxygenation-level-dependent (BOLD) signal changes, was assessed in both groups. All subjects achieved significant increases in tapping frequency during the training session of 35 min in the scanner. PPs, however, performed significantly better than NMs and showed increasing activation in the contralateral primary motor cortex throughout motor learning in the scanner. At the same time, involvement of secondary motor areas, such as bilateral supplementary motor area, premotor, and cerebellar areas, diminished relative to the NMs throughout the training session. Extended activation of primary and secondary motor areas in the initial training stage (7–14 min) and rapid attenuation were the main functional patterns underlying short-term learning in the NM group; attenuation was particularly marked in the primary motor cortices as compared with the PPs. When tapping of the rehearsed sequence was performed with the left hand, transfer effects of motor learning were evident in both groups. Involvement of all relevant motor components was smaller than after initial training with the right hand. Ipsilateral premotor and primary motor contributions, however, showed slight increases of activation, indicating that dominant cortices influence complex sequence learning of the non-dominant hand. In summary, the involvement of primary and secondary motor cortices in motor learning is dependent on experience. Interhemispheric transfer effects are present.
5cfd05faf428bcef6670978adb520564d0d69d32
Analysis of political discourse on twitter in the context of the 2016 US presidential elections
Social media now plays a pivotal role in electoral campaigns. Rapid dissemination of information through platforms such as Twitter has enabled politicians to broadcast their message to a wide audience. In this paper, we investigated the sentiment of tweets by the two main presidential candidates, Hillary Clinton and Donald Trump, along with almost 2.9 million tweets by Twitter users during the 2016 US Presidential Elections. We analyzed these short texts to evaluate how accurately Twitter represented public opinion and significant real-world events related to the elections. We also analyzed the behavior of over a million distinct Twitter users to identify whether the platform was used to share original opinions and to interact with other users, or whether a few opinions were repeated over and over again with little inter-user dialogue. Finally, we wanted to assess the sentiment of tweets by both candidates and their impact on the election-related discourse on Twitter. Among our findings was the discovery that little original content was created by users and that Twitter was primarily used for rebroadcasting already present opinions in the form of retweets, with little communication between users. Also of significance was the finding that sentiment and topics expressed on Twitter can be a good proxy for public opinion and important election-related events. Moreover, we found that Donald Trump offered a more optimistic and positive campaign message than Hillary Clinton and enjoyed better sentiment when mentioned in messages by Twitter users.
b1cfe7f8b8557b03fa38036030f09b448d925041
Unsupervised texture segmentation using Gabor filters
This paper presents a texture segmentation algorithm inspired by the multi-channel filtering theory for visual information processing in the early stages of the human visual system. The channels are characterized by a bank of Gabor filters that nearly uniformly covers the spatial-frequency domain, and a systematic filter selection scheme is proposed, which is based on reconstruction of the input image from the filtered images. Texture features are obtained by subjecting each (selected) filtered image to a nonlinear transformation and computing a measure of "energy" in a window around each pixel. A square-error clustering algorithm is then used to integrate the feature images and produce a segmentation. A simple procedure to incorporate spatial information in the clustering process is proposed. A relative index is used to estimate the "true" number of texture categories. Keywords: Texture segmentation, multi-channel filtering, clustering, clustering index, Gabor filters, wavelet transform. 1. INTRODUCTION Image segmentation is a difficult yet very important task in many image analysis or computer vision applications. Differences in the mean gray level or in color in small neighborhoods alone are not always sufficient for image segmentation. Rather, one has to rely on differences in the spatial arrangement of gray values of neighboring pixels, that is, on differences in texture. The problem of segmenting an image based on textural cues is referred to as the texture segmentation problem. Texture segmentation involves identifying regions with "uniform" textures in a given image. Appropriate measures of texture are needed in order to decide whether a given region has uniform texture. Sklansky has suggested the following definition of texture which is appropriate in the segmentation context: "A region in an image has a constant texture if a set of local statistics or other local properties of the picture are constant, slowly varying, or approximately periodic". Texture, therefore, has both local and global connotations: it is characterized by invariance of certain local measures or properties over an image region. The diversity of natural and artificial textures makes it impossible to give a universal definition of texture. A large number of techniques for analyzing image texture has been proposed in the past two decades.(2,3) In this paper, we focus on a particular approach to texture analysis which is referred to as the multi-channel filtering approach. (This work was supported in part by the National Science Foundation infrastructure grant CDA-8806599, and by a grant from E. I. Du Pont De Nemours & Company Inc.) This approach is inspired by a multi-channel filtering theory for processing visual information in the early stages of the human visual system. First proposed by Campbell and Robson,(4) the theory holds that the visual system decomposes the retinal image into a number of filtered images, each of which contains intensity variations over a narrow range of frequency (size) and orientation. The psychophysical experiments that suggested such a decomposition used various grating patterns as stimuli and were based on adaptation techniques. Subsequent psychophysiological experiments provided additional evidence supporting the theory. De Valois et al.,(5) for example, recorded the response of simple cells in the visual cortex of the Macaque monkey to sinusoidal gratings with different frequencies and orientations.
It was observed that each cell responds to a narrow range of frequency and orientation only. Therefore, it appears that there are mechanisms in the visual cortex of mammals that are tuned to combinations of frequency and orientation in a narrow range. These mechanisms are often referred to as channels, and are appropriately interpreted as band-pass filters. The multi-channel filtering approach to texture analysis is intuitively appealing because it allows us to exploit differences in dominant sizes and orientations of different textures. Today, the need for a multi-resolution approach to texture analysis is well recognized. While other approaches to texture analysis have had to be extended to accommodate this paradigm, the multi-channel filtering approach is inherently multi-resolutional. Another important
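A compact sketch of the multi-channel filtering pipeline described above using off-the-shelf components: a small Gabor filter bank, a nonlinear "energy" measure in a local window, and square-error (k-means) clustering of the per-pixel feature vectors. The filter frequencies, window size, and the synthetic two-texture image are illustrative choices, not the paper's systematic filter-selection scheme.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter
from skimage.filters import gabor_kernel
from sklearn.cluster import KMeans

# Synthetic image: two textures with different dominant orientations.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
left = np.sin(0.5 * xx)            # vertical stripes
right = np.sin(0.5 * yy)           # horizontal stripes
image = np.where(xx < 64, left, right) + 0.1 * rng.normal(size=(128, 128))

# Small Gabor filter bank (a stand-in for the paper's filter selection scheme).
kernels = [np.real(gabor_kernel(frequency=f, theta=t))
           for f in (0.05, 0.08) for t in (0.0, np.pi / 4, np.pi / 2)]

# Feature images: filter, apply a nonlinearity, then average "energy" in a window.
features = []
for k in kernels:
    response = convolve(image, k, mode="wrap")
    energy = uniform_filter(np.abs(np.tanh(2.0 * response)), size=15)
    features.append(energy)
features = np.stack(features, axis=-1).reshape(-1, len(kernels))

# Square-error clustering of the per-pixel feature vectors gives the segmentation.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(128, 128)
print("pixels per segment:", np.bincount(segmentation.ravel()))
```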
6e412334c5cb6e59a1c7dc2e4594b9c0af52bc97
Warpage control of silicon interposer for 2.5D package application
In order to achieve high-speed transmission and large-volume data processing, large silicon interposers have been required. Warpage caused by the CTE mismatch between a large silicon interposer and an organic substrate is the most significant problem. In this study, we investigated several warpage control techniques for the 2.5D package assembly process. The first was the assembly process sequence. One approach is called the "chip first process": chips are mounted on the Si interposer first. The other is called the "chip last process": the silicon interposer is mounted on the organic substrate first and the chips are mounted last. The chip first process was successfully carried out using conventional mass reflow. By using the chip first process, the apparent CTE of a large silicon interposer becomes close to that of an organic substrate. The second was warpage control using underfill resin. We focused on the selection of underfill materials for 0-level assembly. The third was a warpage control technique with Sn-57Bi solder using a conventional reflow process. We observed warpage change during a simulated reflow process using a three-dimensional digital image correlation system (3D-DIC). Sn-57Bi solder joining has been noted as a low-temperature bonding method. It makes it possible to lower the peak temperature during reflow by 45-90 degrees C compared with using Sn3.0wt%Ag0.5wt%Cu (SAC305). By using Sn-57Bi solder, the warpage after reflow was reduced to 75% of that with SAC305. The full assembly was successfully processed using conventional assembly equipment and processes. The full assembly packages were evaluated by several reliability tests. All samples passed each reliability test.
30ee737e5cc4dd048edf48b6f26ceba5d7b9b1cb
KCT: a MATLAB toolbox for motion control of KUKA robot manipulators
The Kuka Control Toolbox (KCT) is a collection of MATLAB functions for motion control of KUKA robot manipulators, developed to offer an intuitive and high-level programming interface to the user. The toolbox, which is compatible with all 6 DOF small and low payload KUKA robots that use the Eth.RSIXML, runs on a remote computer connected with the KUKA controller via TCP/IP. KCT includes more than 30 functions, spanning operations such as forward and inverse kinematics computation, point-to-point joint and Cartesian control, trajectory generation, graphical display and diagnostics. The flexibility, ease of use and reliability of the toolbox is demonstrated through two applicative examples.
2d69249e404fc7aa66eba313e020b721bb4e6c0b
Using Data Mining Techniques for Sentiment Shifter Identification
Sentiment shifters, i.e., words and expressions that can affect text polarity, play an important role in opinion mining. However, the limited ability of current automated opinion mining systems to handle shifters represents a major challenge. The majority of existing approaches rely on a manual list of shifters; few attempts have been made to automatically identify shifters in text, and most of them focus only on negation shifters. This paper presents a novel and efficient semi-automatic method for identifying sentiment shifters in drug reviews, aiming at improving the overall accuracy of opinion mining systems. To this end, we use weighted association rule mining (WARM), a well-known data mining technique, for finding frequent dependency patterns representing sentiment shifters in a domain-specific corpus. These patterns, which include different kinds of shifter words such as shifter verbs and quantifiers, are able to handle both local and long-distance shifters. We also combine these patterns with a lexicon-based approach for the polarity classification task. Experiments on drug reviews demonstrate that the extracted shifters can improve the precision of the lexicon-based approach to polarity classification by 9.25 percent.
5757dd57950f6b3c4d90a342a170061c8c535536
Computing the Stereo Matching Cost with a Convolutional Neural Network (Seminar: Recent Trends in 3D Computer Vision)
This paper presents a novel approach to the problem of computing the matching cost for stereo vision. The approach is based upon a Convolutional Neural Network that is used to compute the similarity of input patches from stereo image pairs. In combination with state-of-the-art stereo pipeline steps, the method achieves top results in major stereo benchmarks. The paper introduces the problem of stereo matching, discusses the proposed method and shows results from recent stereo datasets.
9dcb91c79a70913763091ff1a20c4b7cb46b96fd
Evolution of fashion brands on Twitter and Instagram
Social media has become a popular platform for marketing and brand advertisement especially for fashion brands. To promote their products and gain popularity, different brands post their latest products and updates as photos on different social networks. Little has been explored on how these fashion brands use different social media to reach out to their customers and obtain feedback on different products. Understanding this can help future consumers to focus on their interested brands on specific social media for better information and also can help the marketing staff of different brands to understand how the other brands are utilizing the social media. In this article we focus on the top-20 fashion brands and comparatively analyze how they target their current and potential future customers on Twitter and Instagram. Using both linguistic and deep image features, our work reveals an increasing diversification of trends accompanied by a simultaneous concentration towards a few selected trends. It provides insights about brand marketing strategies and their respective competencies. Our investigations show that the brands are using Twitter and Instagram in a distinctive manner.
ec5c2c30a4bcc66a97e8d9d20ccc4f7616f5505f
Organizational Learning and Communities of Practice ; A social constructivist perspective
In this paper, the relationship between organizational learning (OL) and communities of practice (COPs) is addressed. A social constructivist lens is used to analyze the potential contributions of COPs in supporting learning by organizations. A social constructivist approach sees organizational learning as an institutionalizing process. The attention is on the process through which individual or local knowledge is transformed into collective knowledge as well as the process through which this socially constructed knowledge influences, and is part of, local knowledge. In order to analyse the contribution of COPs to OL, we use the three phases or ‘moments’ described by Berger and Luckmann (1966) that can be discerned in institutionalizing knowledge: ‘externalizing, objectifying and internalizing’. Externalizing knowledge refers to the process through which personal knowledge is exchanged with others. Objectifying knowledge refers to the process through which knowledge becomes an objective reality. Internalizing knowledge refers to the process through which objectified knowledge is used by individuals in the course of their socialization. In relation to OL processes, learning can be analyzed as consisting of these three knowledge-sharing activities: externalizing individual knowledge resulting in shared knowledge; objectifying shared knowledge resulting in organizational knowledge; and internalizing organizational knowledge resulting in individual knowledge. These various processes, which in combination make up OL processes, are visualized by the use of an OL cycle. The cycle provides a simplified picture of OL seen as a process of institutionalization. The cycle is subsequently used to analyze the possible contribution of COPs to supporting organizational learning. The paper concludes that COPs are well suited to support processes of internalization and externalization. As a result, COPs stimulate social learning processes within organizations. However, COPs do not seem to be the appropriate means to support the process of objectification. This means that the contribution of COPs to supporting learning at the organizational level, or ‘organizational learning’, is much more complicated.
4b65024cd376067156a5ac967899a7748fa31f6f
The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing
Unbounded, unordered, global-scale datasets are increasingly common in day-to-day business (e.g. Web logs, mobile usage statistics, and sensor networks). At the same time, consumers of these datasets have evolved sophisticated requirements, such as event-time ordering and windowing by features of the data themselves, in addition to an insatiable hunger for faster answers. Meanwhile, practicality dictates that one can never fully optimize along all dimensions of correctness, latency, and cost for these types of input. As a result, data processing practitioners are left with the quandary of how to reconcile the tensions between these seemingly competing propositions, often resulting in disparate implementations and systems. We propose that a fundamental shift of approach is necessary to deal with these evolved requirements in modern data processing. We as a field must stop trying to groom unbounded datasets into finite pools of information that eventually become complete, and instead live and breathe under the assumption that we will never know if or when we have seen all of our data, only that new data will arrive, old data may be retracted, and the only way to make this problem tractable is via principled abstractions that allow the practitioner the choice of appropriate tradeoffs along the axes of interest: correctness, latency, and cost. In this paper, we present one such approach, the Dataflow Model, along with a detailed examination of the semantics it enables, an overview of the core principles that guided its design, and a validation of the model itself via the real-world experiences that led to its development. We use the term “Dataflow Model” to describe the processing model of Google Cloud Dataflow [20], which is based upon technology from FlumeJava [12] and MillWheel [2].
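A tiny sketch of two of the notions the abstract points at, fixed event-time windows and a watermark-driven trigger, applied to an unordered stream of (event_time, key, value) records. The window size, watermark lag, and in-memory state are illustrative simplifications of what a real engine such as Cloud Dataflow provides; late data and retractions are not handled here.

```python
from collections import defaultdict

WINDOW = 60          # fixed 60-second event-time windows
WATERMARK_LAG = 30   # watermark trails the max observed event time by 30 seconds

state = defaultdict(int)   # (key, window_start) -> running sum
emitted = set()
max_event_time = 0

def window_start(ts):
    return ts - ts % WINDOW

def process(record):
    """Assign the record to its event-time window, update state, and emit
    (fire) every window whose end is at or below the current watermark."""
    global max_event_time
    ts, key, value = record
    state[(key, window_start(ts))] += value
    max_event_time = max(max_event_time, ts)
    watermark = max_event_time - WATERMARK_LAG
    for (k, start), total in list(state.items()):
        if start + WINDOW <= watermark and (k, start) not in emitted:
            emitted.add((k, start))
            print(f"window [{start}, {start + WINDOW}) key={k} sum={total}")

# Out-of-order input: event times are not monotone in arrival order.
stream = [(12, "a", 1), (75, "a", 2), (30, "b", 5), (50, "a", 3), (130, "a", 1), (210, "b", 4)]
for rec in stream:
    process(rec)
```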
3f5e13e951b58c1725250cb60afc27f08d8bf02c
A Trusted Safety Verifier for Process Controller Code
Attackers can leverage security vulnerabilities in control systems to make physical processes behave unsafely. Currently, the safe behavior of a control system relies on a Trusted Computing Base (TCB) of commodity machines, firewalls, networks, and embedded systems. These large TCBs, often containing known vulnerabilities, expose many attack vectors which can impact process safety. In this paper, we present the Trusted Safety Verifier (TSV), a minimal TCB for the verification of safety-critical code executed on programmable controllers. No controller code is allowed to be executed before it passes physical safety checks by TSV. If a safety violation is found, TSV provides a demonstrative test case to system operators. TSV works by first translating assembly-level controller code into an intermediate language, ILIL. ILIL allows us to check code containing more instructions and features than previous controller code safety verification techniques. TSV efficiently mixes symbolic execution and model checking by transforming an ILIL program into a novel temporal execution graph that lumps together safety-equivalent controller states. We implemented TSV on a Raspberry Pi computer as a bump-in-the-wire device that intercepts all controller-bound code. Our evaluation shows that it can test a variety of programs for common safety properties in an average of less than three minutes, and under six minutes in the worst case—a small one-time addition to the process engineering life cycle.
40c3b350008ada8f3f53a758e69992b6db8a8f95
Discriminative Decorrelation for Clustering and Classification
Object detection has over the past few years converged on using linear SVMs over HOG features. Training linear SVMs, however, is quite expensive, and can become intractable as the number of categories increases. In this work we revisit a much older technique, viz. Linear Discriminant Analysis, and show that LDA models can be trained almost trivially, and with little or no loss in performance. The covariance matrices we estimate capture properties of natural images. Whitening HOG features with these covariances thus removes naturally occurring correlations between the HOG features. We show that these whitened features (which we call WHO) are considerably better than the original HOG features for computing similarities, and prove their usefulness in clustering. Finally, we use our findings to produce an object detection system that is competitive on PASCAL VOC 2007 while being considerably easier to train and test.
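A small numpy sketch of the core algebra: estimate a shared covariance from (synthetic, stand-in) feature vectors, whiten features with it, and note that the LDA direction is simply Sigma^{-1} times the difference of class means, so "training" a new detector costs one mean rather than an SVM optimization. The random data only illustrates the computation, not HOG statistics of natural images.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # dimensionality of the (stand-in) HOG-like features

# "Background" statistics, in practice estimated once from many natural-image windows.
neg = rng.normal(size=(5000, d)) @ rng.normal(size=(d, d)) * 0.1
mu_bg = neg.mean(axis=0)
sigma = np.cov(neg, rowvar=False) + 1e-3 * np.eye(d)   # regularized shared covariance

# Whitening transform: decorrelates features (the "WHO" idea).
whiten = np.linalg.inv(np.linalg.cholesky(sigma))

def lda_detector(pos_features):
    """LDA weight vector w = Sigma^{-1} (mu_pos - mu_bg): training is just a mean."""
    mu_pos = pos_features.mean(axis=0)
    return np.linalg.solve(sigma, mu_pos - mu_bg)

# A toy "category": positives are background features shifted in a fixed direction.
shift = rng.normal(size=d)
pos = neg[:200] + shift
w = lda_detector(pos)

pos_white = (pos - mu_bg) @ whiten.T
neg_white = (neg[:200] - mu_bg) @ whiten.T
print("mean detector score   pos:", round(float((pos @ w).mean()), 3),
      " neg:", round(float((neg[:200] @ w).mean()), 3))
print("mean whitened norm    pos:", round(float(np.linalg.norm(pos_white, axis=1).mean()), 3),
      " neg:", round(float(np.linalg.norm(neg_white, axis=1).mean()), 3))
```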
c2505e3f0e19d50bdd418ca9becf8cbb08f61dc1
GUS, A Frame-Driven Dialog System
GUS is the first of a series of experimental computer systems that we intend to construct as part of a program of research on language understanding. In large measure, these systems will fill the role of periodic progress reports, summarizing what we have learned, assessing the mutual coherence of the various lines of investigation we have been following, and suggesting where more emphasis is needed in future work. GUS (Genial Understander System) is intended to engage a sympathetic and highly cooperative human in an English dialog, directed towards a specific goal within a very restricted domain of discourse. As a starting point, GUS was restricted to the role of a travel agent in a conversation with a client who wants to make a simple return trip to a single city in California. There is good reason for restricting the domain of discourse for a computer system which is to engage in an English dialog. Specializing the subject matter that the system can talk about permits it to achieve some measure of realism without encompassing all the possibilities of human knowledge or of the English language. It also provides the user with specific motivation for participating in the conversation, thus narrowing the range of expectations that GUS must have about the user's purposes. A system restricted in this way will be more able to guide the conversation within the boundaries of its competence. 1. Motivation and Design Issues Within its limitations, GUS is able to conduct a more-or-less realistic dialog. But the outward behavior of this first system is not what makes it interesting or significant. There are, after all, much more convenient ways to plan a trip and, unlike some other artificial intelligence programs, GUS does not offer services or furnish information that are otherwise difficult or impossible to obtain. The system is interesting because of the phenomena of natural dialog that it attempts to model (this work was done by the Language Understander project at the Xerox Palo Alto Research Center; additional affiliations: D. A. Norman, University of California, San Diego; H. Thompson, University of California, Berkeley; and T. Winograd, Stanford University) and because of the principles of program organization around which it was designed. Among the hallmarks of natural dialogs are unexpected and seemingly unpredictable sequences of events. We describe some of the forms that these can take below. We then go on to discuss the modular design which makes the system relatively insensitive to the vagaries of ordinary conversation. 1.1. Problems of natural dialog The simple dialog shown in Fig. 1 illustrates some of the language-understanding problems we attacked. (The parenthesized numbers are for reference in the text.) The problems illustrated in this figure, and described in the paragraphs below, include allowing both the client and the system to take the initiative, understanding indirect answers to questions, resolving anaphora, understanding fragments of sentences offered as answers to questions, and interpreting the discourse in the light of known conversational patterns. 1.1.1. Mixed initiative A typical contribution to a dialog, in addition to its more obvious functions, conveys an expectation about how the other participant will respond. This is clearest in the case of a question, but it is true of all dialog.
If one of the participants has very particular expectations and states them strongly whenever he speaks, and if the other always responds in such a way as to meet the expectations conveyed, then the initiative remains with the first participant throughout. The success of interactive computer systems can often be traced to the skill with which their designers were able to assure them such a dominating position in the interaction. In natural conversations between humans, however, each participant usually assumes the initiative from time to time. Either clear expectations are not stated or they are simply not honored. GUS attempts to retain the initiative, but not to the extent of jeopardizing the natural flow of the conversation. To this extent it is a mixed-initiative system (see Carbonell [5, 6]). This is exemplified in the dialogue at (1) where the client volunteers more information than GUS requested. In addition to his destination, the client gives the date on which he wants to travel. Line (3) illustrates a case where the client takes control of the conversation. GUS had found a potentially acceptable flight and asked for the client's approval. Instead of either giving or denying it, the client replied with a question of his own. 1.1.2. Indirect answers It is by no means always clear what constitutes an answer to a question. Frequently the purported answer is at best only a basis on which to infer the information requested. For example, when GUS asks "What time do you want to leave?" it is seeking information to constrain the selection of a flight. The client's response to this question, at (2), does constrain the flight selection, but only indirectly.
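A toy sketch of the frame-driven idea: a trip frame with typed slots, where the system prompts for unfilled slots but also absorbs extra information volunteered in the same utterance (the mixed-initiative behavior in line (1) of the dialog above). The slot names, crude regex fillers, and scripted client turns are invented for illustration and bear no relation to GUS's actual reasoning machinery.

```python
import re

# A minimal "trip" frame: slot name -> (pattern that can fill it, filled value).
frame = {
    "destination": (re.compile(r"\bto ([A-Z][a-z]+)"), None),
    "travel_date": (re.compile(r"\bon (May \d+|\d+ May)"), None),
    "depart_time": (re.compile(r"\b(?:at|around|before) (\d{1,2}(?::\d{2})?\s*(?:am|pm)?)"), None),
}
prompts = {
    "destination": "Where would you like to go?",
    "travel_date": "What day do you want to travel?",
    "depart_time": "What time do you want to leave?",
}

def absorb(utterance):
    """Fill every slot the utterance happens to mention, not just the one asked about."""
    for slot, (pattern, value) in frame.items():
        if value is None:
            m = pattern.search(utterance)
            if m:
                frame[slot] = (pattern, m.group(1))

# Scripted client turns standing in for keyboard input.
client_turns = iter([
    "I want to go to Monterey on May 28",   # volunteers the date as well as the destination
    "I must be in San Diego before 10 am",  # indirect answer: a constraint, not a time
])

for slot in frame:
    if frame[slot][1] is None:
        print("SYSTEM:", prompts[slot])
        reply = next(client_turns, "")
        print("CLIENT:", reply)
        absorb(reply)

print("frame:", {slot: value for slot, (_, value) in frame.items()})
```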
35c92fe4f113f09cfbda873231ca51cdce8d995a
Fast Robust Logistic Regression for Large Sparse Datasets with Binary Outputs
Although popular and extremely well established in mainstream statistical data analysis, logistic regression is strangely absent in the field of data mining. There are two possible explanations of this phenomenon. First, there might be an assumption that any tool which can only produce linear classification boundaries is likely to be trumped by more modern nonlinear tools. Second, there is a legitimate fear that logistic regression cannot practically scale up to the massive dataset sizes to which modern data mining tools are applied. This paper consists of an empirical examination of the first assumption, and surveys, implements and compares techniques by which logistic regression can be scaled to data with millions of attributes and records. Our results, on a large life sciences dataset, indicate that logistic regression can perform surprisingly well, both statistically and computationally, when compared with an array of more recent classification algorithms.
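A small sketch of the scaling claim using standard tools: fit a logistic regression on a large, very sparse binary-output dataset kept in CSR form so it never densifies. The synthetic data and the default solver are illustrative; the paper's own survey concerns more specialized iterative schemes.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_rows, n_cols, density = 200_000, 5_000, 0.001   # ~1M nonzeros, >99.9% sparse

X = sp.random(n_rows, n_cols, density=density, format="csr", random_state=0)
true_w = rng.normal(size=n_cols) * (rng.random(n_cols) < 0.05)   # few informative attributes
y = (X @ true_w > 0).astype(int)                                  # binary outputs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The solver accepts CSR input directly, so memory stays proportional to the nonzeros.
clf = LogisticRegression(max_iter=200, C=1.0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```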
3b8e7c8220d3883d54960d896a73045f3c70ac17
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.
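A compact sketch of the training loop the abstract describes: Viterbi-decode each training sentence under the current weights, then additively update the weights with the features of the gold sequence minus those of the predicted sequence. The toy feature set (word/tag and tag/tag indicators) and the tiny corpus are invented for illustration.

```python
from collections import defaultdict

def features(words, tags):
    """Emission and transition indicator features for a tagged sentence."""
    feats = defaultdict(int)
    prev = "<s>"
    for word, tag in zip(words, tags):
        feats[("emit", word, tag)] += 1
        feats[("trans", prev, tag)] += 1
        prev = tag
    return feats

def viterbi(words, tagset, w):
    """Best tag sequence under the current weights."""
    V = [{t: (w[("emit", words[0], t)] + w[("trans", "<s>", t)], ["<s>", t]) for t in tagset}]
    for word in words[1:]:
        V.append({})
        for t in tagset:
            score, path = max(
                (V[-2][p][0] + w[("trans", p, t)] + w[("emit", word, t)], V[-2][p][1] + [t])
                for p in tagset)
            V[-1][t] = (score, path)
    return max(V[-1].values())[1][1:]

# Toy training data: (words, gold tags).
data = [
    (["the", "dog", "barks"], ["D", "N", "V"]),
    (["the", "cat", "sleeps"], ["D", "N", "V"]),
    (["dog", "barks"], ["N", "V"]),
]
tagset = ["D", "N", "V"]
w = defaultdict(float)

for epoch in range(5):
    for words, gold in data:
        pred = viterbi(words, tagset, w)
        if pred != gold:                        # simple additive perceptron update
            for f, c in features(words, gold).items():
                w[f] += c
            for f, c in features(words, pred).items():
                w[f] -= c

print(viterbi(["the", "cat", "barks"], tagset, w))   # expected: ['D', 'N', 'V']
```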
a287d39c42eb978e379ee79011f4441ee7de96be
Gratitude and depressive symptoms: the role of positive reframing and positive emotion.
Eight studies (N=2,973) tested the theory that gratitude is related to fewer depressive symptoms through positive reframing and positive emotion. Study 1 found a direct path between gratitude and depressive symptoms. Studies 2-5 demonstrated that positive reframing mediated the relationship between gratitude and depressive symptoms. Studies 6-7 showed that positive emotion mediated the relationship between gratitude and depressive symptoms. Study 8 found that positive reframing and positive emotion simultaneously mediated the relationship between gratitude and depressive symptoms. In sum, these eight studies demonstrate that gratitude is related to fewer depressive symptoms, with positive reframing and positive emotion serving as mechanisms that account for this relationship.
147f0d86753413f65fce359a3a9b9a0503813b8c
A hybrid SoC interconnect with dynamic TDMA-based transaction-less buses and on-chip networks
The two dominant architectural choices for implementing efficient communication fabrics for SoC's have been transaction-based buses and packet-based networks-on-chip (NoC). Both implementations have some inherent disadvantages - the former resulting from poor scalability and the transactional character of their operation, and the latter from inconsistent access times and deterioration of performance at high injection rates. In this paper, we propose a transaction-less, time-division-based bus architecture, which dynamically allocates timeslots on-the-fly - the dTDMA bus. This architecture addresses the contention issues of current bus architectures, while avoiding the multi-hop overhead of NoC's. It is compared to traditional bus architectures and NoC's and shown to outperform both for configurations with fewer than 10 PE's. In order to exploit the advantages of the dTDMA bus for smaller configurations, and the scalability of NoC's, we propose a new hybrid SoC interconnect combining the two, showing significant improvement in both latency and power consumption.
5e214a2af786fadb419e9e169a252c6ca6e7d9f0
Information Extraction: Techniques and Challenges
This volume takes a broad view of information extraction as any method for filtering information from large volumes of text. This includes the retrieval of documents from collections and the tagging of particular terms in text. In this paper we shall use a narrower definition: the identification of instances of a particular class of events or relationships in a natural language text, and the extraction of the relevant arguments of the event or relationship. Information extraction therefore involves the creation of a structured representation (such as a data base) of selected information drawn from the text. The idea of reducing the information in a document to a tabular structure is not new. Its feasibility for sublanguage texts was suggested by Zellig Harris in the 1950's, and an early implementation for medical texts was done at New York University by Naomi Sager [20]. However, the specific notion of information extraction described here has received wide currency over the last decade through the series of Message Understanding Conferences [1, 2, 3, 4, 14]. We shall discuss these Conferences in more detail a bit later, and shall use simplified versions of extraction tasks from these Conferences as examples throughout this paper. Figure 1 shows a simplified example from one of the earlier MUCs, involving terrorist events (MUC-3) [1]. For each terrorist event, the system had to determine the type of attack (bombing, arson, etc.), the date, location, perpetrator (if stated), targets, and effects on targets. Other examples of extraction tasks are international joint ventures (where the arguments included the partners, the new venture, its product or service, etc.) and executive succession (indicating who was hired or fired by which company for which position). Information extraction is a more limited task than "full text understanding". In full text understanding, we aspire to represent in an explicit fashion all the information in a text. In contrast, in information extraction we delimit in advance, as part of the specification of the task, the semantic range of the output: the relations we will represent, and the allowable fillers in each slot of a relation.
28e0c6088cf444e8694e511148a8f19d9feaeb44
Deployable Helical Antennas for CubeSats
This paper explores the behavior of a self-deploying helical pantograph antenna for CubeSats. The helical pantograph concept is described along with concepts for attachment to the satellite bus. Finite element folding simulations of a pantograph consisting of eight helices are presented and compared to compaction force experiments done on a prototype antenna. Reflection coefficient tests are also presented, demonstrating the operating frequency range of the prototype antenna. The helical pantograph is shown to be a promising alternative to current small satellite antenna solutions.
c3deb9745320563d5060e568aeb294469f6582f6
Knowledge Extraction from Structured Engineering Drawings
As a typical type of structured documents, table drawings are widely used in engineering fields. Knowledge extraction of such structured documents plays an important role in automatic interpretation systems. In this paper, we propose a new knowledge extraction method based on automatically analyzing drawing layout and extracting physical or logical structures from the given engineering table drawings. Then based on the automatic interpretation results, we further propose normalization method to integrate varied types of engineering tables with other engineering drawings and extract implied domain knowledge.
7b7b0b0239072d442e0620a2801c47036ae05251
Public Goods and Ethnic Divisions
We present a model that links heterogeneity of preferences across ethnic groups in a city to the amount and type of public good the city supplies. We test the implications of the model with three related data sets: U. S. cities, U. S. metropolitan areas, and U. S. urban counties. Results show that the shares of spending on productive public goods -education, roads, sewers and trash pickup -in U. S. cities (metro areas/urban counties) are inversely related to the city’s (metro area’s/county’s) ethnic fragmentation, even after controlling for other socioeconomic and demographic determinants. We conclude that ethnic conflict is an important determinant of local public finances.
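The fragmentation measure commonly used in this literature is an ethnolinguistic fractionalization index: the probability that two randomly drawn residents belong to different groups, computed as one minus the sum of squared group shares. A few lines make the formula concrete; the group shares below are invented.

```python
def fractionalization(shares):
    """ELF index: 1 - sum of squared group shares = P(two random residents differ in group)."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return 1.0 - sum(s * s for s in shares)

# Illustrative cities: a fairly homogeneous one and a highly fragmented one.
print(round(fractionalization([0.85, 0.10, 0.05]), 3))        # 0.265
print(round(fractionalization([0.40, 0.30, 0.20, 0.10]), 3))  # 0.700
```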
01dc8e05ad9590b3a1bf2f42226123c7da4b9fd1
Guided Image Filtering
In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
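A compact numpy sketch of the linear-time guided-filter recipe for a grayscale guidance image: every statistic is a local mean computed with a box filter, so the cost per pixel does not depend on the window radius. The radius and epsilon values, and the synthetic step-edge test image, are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Edge-preserving filtering of p guided by I (both float arrays in [0, 1])."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(I), box(p)
    corr_I, corr_Ip = box(I * I), box(I * p)
    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # local linear model: q = a * I + b in each window
    b = mean_p - a * mean_I
    return box(a) * I + box(b)        # average the per-window coefficients

# Toy usage: smooth a noisy step edge using the clean image itself as guidance.
rng = np.random.default_rng(0)
clean = np.tile(np.repeat([0.2, 0.8], 64), (128, 1))   # vertical step edge
noisy = clean + 0.1 * rng.normal(size=clean.shape)
out = guided_filter(clean, noisy)
print("noise std before:", round(float((noisy - clean).std()), 3),
      " after:", round(float((out - clean).std()), 3))
```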
876ba98cfa676430933057be8ffbe61d5d83335a
Self-organization patterns in wasp and open source communities
In this paper, we conducted a comparative study of how social organization takes place in a wasp colony and OSS developer communities. Both these systems display similar global organization patterns, such as hierarchies and clear labor divisions. As our analysis shows, both systems also define interacting agent networks with similar common features that reflect limited information sharing among agents. As far as we know, this is the first research study analyzing the patterns and functional significance of these systems' weighted-interaction networks. By illuminating the extent to which self-organization is responsible for patterns such as hierarchical structure, we can gain insight into the origins of organization in OSS communities.
5c0edc899359a69c3769da238491f93e7a2f6d6d
Representing Attitude : Euler Angles , Unit Quaternions , and Rotation Vectors
We present the three main mathematical constructs used to represent the attitude of a rigid body in three-dimensional space. These are (1) the rotation matrix, (2) a triple of Euler angles, and (3) the unit quaternion. To these we add a fourth, the rotation vector, which has many of the benefits of both Euler angles and quaternions, but neither the singularities of the former, nor the quadratic constraint of the latter. There are several other subsidiary representations, such as Cayley-Klein parameters and the axis-angle representation, whose relations to the three main representations are also described. Our exposition is catered to those who seek a thorough and unified reference on the whole subject; detailed derivations of some results are not presented. Keywords–Euler angles, quaternion, Euler-Rodrigues parameters, Cayley-Klein parameters, rotation matrix, direction cosine matrix, transformation matrix, Cardan angles, Tait-Bryan angles, nautical angles, rotation vector, orientation, attitude, roll, pitch, yaw, bank, heading, spin, nutation, precession, Slerp
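To make the relationships between these representations concrete, here is a small sketch (our own illustration, not the paper's notation) converting a rotation vector to a unit quaternion and a quaternion to a rotation matrix:

```python
import numpy as np

def rotvec_to_quat(v):
    """Rotation vector (axis * angle, in radians) -> unit quaternion [w, x, y, z]."""
    angle = np.linalg.norm(v)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])   # identity rotation
    axis = v / angle
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def quat_to_matrix(q):
    """Unit quaternion [w, x, y, z] -> 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```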
cb0618d4566b71ec0ea928d35899745fe36d501d
Evaluation is a Dynamic Process : Moving Beyond Dual System Models
Over the past few decades, dual attitude/process/system models have emerged as the dominant framework for understanding a wide range of psychological phenomena. Most of these models characterize the unconscious and conscious mind as being built from discrete processes or systems: one that is reflexive, automatic, fast, affective, associative, and primitive, and a second that is deliberative, controlled, slow, cognitive, propositional, and more uniquely human. Although these models serve as a useful heuristic for characterizing the human mind, recent developments in social and cognitive neuroscience suggest that the human evaluative system, like most of cognition, is widely distributed and highly dynamic. Integrating these advances with current attitude theories, we review how the recently proposed Iterative Reprocessing Model can account for apparent dual systems as well as discrepancies between traditional dual system models and recent research revealing the dynamic nature of evaluation. Furthermore, we describe important implications this dynamical system approach has for various social psychological domains. For nearly a century, psychologists have sought to understand the unconscious and conscious processes that allow people to evaluate their surroundings (Allport, 1935; Freud, 1930). Building on a model of the human mind rooted in classic Greek philosophy (Annas, 2001), many contemporary psychologists have characterized the mind as possessing discrete processes or systems: one that is evolutionarily older, reflexive, automatic, fast, affective, associative, and the other that is more uniquely human, deliberative, controlled, slow, cognitive, and propositional (see Figure 1). These dual process or system models have been highly influential throughout psychology for the past three decades (Chaiken & Trope, 1999). Indeed, a dual system model of the human mind permeates research in a wide range of psychological domains, such as attitudes and persuasion (Chaiken, 1980; Fazio, 1990; Gawronski & Bodenhausen, 2006; Petty & Cacioppo, 1986; Rydell & McConnell, 2006; Wilson, Samuel, & Schooler, 2000), stereotypes and prejudice (Crandall & Eshleman, 2003; Devine, 1989; Gaertner & Dovidio, 1986; Pettigrew & Meertens, 1995), person perception (Brewer, 1988; Fiske & Neuberg, 1990; Macrae & Bodenhausen, 2000), self-regulation (Baumeister & Heatherton, 1996; Freud, 1930; Hofmann, Friese, & Strack, 2009; Strack & Deutsch, 2004), moral cognition (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Haidt, 2001), learning and memory (Smith & DeCoster, 2000; Sun, 2002), and decision-making (Kahneman, 2003; Sloman, 1996). Although dual system models provide generative frameworks for understanding a wide range of psychological phenomena, recent developments in social and affective neuroscience suggest that the human evaluative system, like most of cognition, is widely distributed and highly dynamic (e.g., Ferguson & Wojnowicz, 2011; Freeman & Ambady, 2011; Scherer, 2009). Integrating these advances with current attitude theories, we review how the recently proposed Iterative Reprocessing Model (Cunningham & Zelazo, 2007; Cunningham, Zelazo, Packer, & Van Bavel, 2007) can account for apparent dual systems as well as discrepancies between traditional dual system models and recent research revealing the dynamic nature of evaluation.
The model also addresses why the nature of evaluative processing differs across people (e.g., Cunningham, Raye, & Johnson, 2005; Park, Van Bavel, Vasey, & Thayer, forthcoming). Although we focus primarily on dual models of attitudes and evaluation due to space constraints, we believe the premises of our dynamic model can be generalized to other domains where dual system models are typically invoked (Chaiken & Trope, 1999), including social cognition, self-regulation, prejudice and stereotyping, and moral cognition. Therefore, we very briefly discuss the implications of our model for these other domains in the final section of this paper and encourage interested readers to read our more extensive treatment of these issues in the domain of stereotypes and prejudice (Cunningham & Van Bavel, 2009a; Van Bavel & Cunningham, 2011) and emotion (Cunningham & Van Bavel, 2009b; Kirkland & Cunningham, 2011, forthcoming). Attitudes and evaluation Attitudes are one of the most central constructs in social psychology, yet there has been considerable debate regarding the most fundamental aspects of attitudes (Fazio, 2007; Schwarz & Bohner, 2001). Allport (1935) defined an attitude as "a mental and neural state of readiness, organized through experience, exerting a directive or dynamic influence upon the individual's response to all objects and situations with which it is related" (p. 810). Throughout the history of attitude research, theorists and researchers have attempted to provide a complete yet parsimonious definition of this construct. Well-known examples include the one-component perspective (Thurstone, 1928), the tripartite model (Affective, Behavior, Cognition; Katz & Stotland, 1959; Rosenberg & Hovland, 1960), and more recently, a host of dual attitudes (e.g., Greenwald & Banaji, 1995; Rydell & McConnell, 2006; Wilson et al., 2000) and dual process models (e.g., Chaiken, 1980; Fazio, 1990; Gawronski & Bodenhausen, 2006; Petty & Cacioppo, 1986). It is widely assumed that attitudes are stored associations between objects and their evaluations, which can be accessed from memory very quickly with little conscious effort (Fazio, 2001; Fazio, Sanbonmatsu, Powell, & Kardes, 1986; but see Schwarz, 2007). [Figure 1: Illustrative example of the process and content of a dual system model (cited in Kahneman, 2003, p. 698).] For example, people categorize positive and negative words more quickly when these words are preceded by similarly valenced stimuli, suggesting that attitudes are automatically activated by the mere presence of the attitude object in the environment (Fazio et al., 1986). Moreover, people may have access to evaluative information about stimuli prior to their semantic content (Bargh, Litt, Pratto, & Spielman, 1989; but see Storbeck & Clore, 2007). Such research has led to the conclusion that the initial evaluative classification of stimuli as good or bad can be activated automatically and guide the perceiver's interpretation of his or her environment (Houston & Fazio, 1989; Smith, Fazio, & Cejka, 1996).
Dual attitudes and dual process models of attitudes The recent development of a wide variety of implicit attitude measures (Petty, Fazio, & Briñol, 2009; Wittenbrink & Schwarz, 2007), including measures of human physiology (Cunningham, Packer, Kesek, & Van Bavel, 2009), has fueled an explosion of research on dual attitude/process/system models of attitudes and evaluations (see Table 1). Most of these models infer dual process architecture from observable dissociations between implicit and explicit measures of behavior (e.g., Dovidio, Kawakami, & Gaertner, 2002; McConnell & Leibold, 2001; Rydell & McConnell, 2006). Although many dual models generally share a common set of assumptions about the human mind, the specific features of each model differ. Therefore, we propose a rough taxonomy to characterize different classes of these models. "Dual attitudes models" tend to dichotomize the representations of attitudes into distinct automatic versus controlled constructs (Greenwald & Banaji, 1995; Rydell & McConnell, 2006; Wilson et al., 2000). In contrast, "dual process models" tend to dichotomize the processing of attitudinal representations into automatic versus controlled processes. There is considerable debate over whether these two types of processes are independent or communicate with one another (i.e., information from one system is available to the other system) (Fazio, 1990; Gawronski & Bodenhausen, 2006; Gilbert, Pelham, & Krull, 1988; Petty, Brinol, & DeMarree, 2007). In the latter case, interdependent dual process models have generally been proposed to operate in a corrective fashion, such that "controlled" processes can influence otherwise "automatic" responses (e.g., Wegener & Petty, 1997). Although dual attitudes models likely require dual processes to integrate different attitudinal representations into evaluations and behaviors, dual process models are less likely to require the assumption of dual attitude representations (e.g., Fazio, 1990). For the purpose of clarity, we use "dual system models" to capture models that assume dual attitudes and processes that do not interact (e.g., Rydell & McConnell, 2006; Wilson et al., 2000). There are, of course, many ways to hook up a dual system (see Gilbert, 1999 for a discussion). A complete discussion of all possible dual models and interconnections between these systems is beyond the scope of this article. Therefore, we focus on several core premises that many models have in common. Likewise, we focus on the core premises from our own model – rather than an exhaustive discussion (e.g., Cunningham et al., 2007) – in order to communicate key similarities and differences between dual models and our proposed dynamic model. Furthermore, we recognize that dual models and our proposed dynamic model do not exhaust all types of models of attitudes and evaluation – some extant models do include more than two processes (e.g., Beer, Knight, & D'Esposito, 2006; Conrey, Sherman, Gawronski, Hugenberg, & Groom, 2005) and many allow for interactive processes that operate in a post hoc, corrective fashion (e.g., Chen & Chaiken, 1999; Gawronski & Bodenhausen, 2006).
c99d36c9de8685052d2ebc4043510c2dafbbd166
Clues for detecting irony in user-generated contents: oh...!! it's "so easy" ;-)
We investigate the accuracy of a set of surface patterns in identifying ironic sentences in comments submitted by users to an on-line newspaper. The initial focus is on identifying irony in sentences containing positive predicates since these sentences are more exposed to irony, making their true polarity harder to recognize. We show that it is possible to find ironic sentences with relatively high precision (from 45% to 85%) by exploring certain oral or gestural clues in user comments, such as emoticons, onomatopoeic expressions for laughter, heavy punctuation marks, quotation marks and positive interjections. We also demonstrate that clues based on deeper linguistic information are relatively inefficient in capturing irony in user-generated content, which points to the need for exploring additional types of oral clues.
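A minimal sketch of how such surface clues can be matched in practice is shown below; the clue names, regular expressions, and the example sentence are our own illustrative choices, not the exact patterns evaluated in the paper.

```python
import re

# Illustrative surface-clue patterns; names and regexes are ours, not the paper's exact set.
CLUES = {
    "emoticon": re.compile(r"[;:]-?[)(DPp]"),
    "laughter": re.compile(r"\b(?:ha(?:ha)+|hehe+|lol)\b", re.IGNORECASE),
    "heavy_punctuation": re.compile(r"!{2,}|\?{2,}|\.{3,}"),
    "quotation_marks": re.compile(r'"[^"]{1,40}"'),
    "positive_interjection": re.compile(r"\b(?:great|bravo|wow|nice)\b", re.IGNORECASE),
}

def irony_clues(sentence):
    """Return the names of the surface clues that fire on a sentence."""
    return {name for name, pattern in CLUES.items() if pattern.search(sentence)}

# Fires the emoticon, heavy punctuation, quotation and positive-interjection clues.
print(irony_clues('Oh... what a "great" idea!!! ;-)'))
```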
c27af6bc9d274a4e95cddb5e5ed61dee05c1ed76
Novel compact spider microstrip antenna with new Defected Ground Structure
Two novel Defected Ground Structures (DGS) were first proposed, which give better results than the previously published dumbbell shape. Using the general model of DGS, its equivalent parameters were extracted. The two new proposed shapes of DGS were then used to design a novel compact spider microstrip antenna to minimize its area. The size of the developed antenna was reduced to about 90.5% of that of the conventional one. This antenna with two different novel shapes of DGS was designed and simulated by using the ready-made software package Zeland-IE3D. Finally, it was fabricated by using thin-film and photolithographic techniques and measured by using a vector network analyzer. Good agreement was found between the simulated and measured results.
29968249a86aaa258aae95a9781d4d025b3c7658
Fraud Detection From Taxis' Driving Behaviors
Taxis are a major mode of transportation in urban areas, offering great benefits and convenience to our daily life. However, one of the major business frauds in taxis is charging fraud, specifically overcharging for the actual distance. In practice, it is hard for us to always monitor taxis and detect such fraud. Due to the Global Positioning System (GPS) embedded in taxis, we can collect the GPS reports from the taxis' locations, and thus, it is possible for us to retrieve their traces. Intuitively, we can utilize such information to construct taxis' trajectories, compute the actual service distance on the city map, and detect fraudulent behaviors. However, in practice, due to the extremely limited reports, notable location errors, and the complex city map and road networks, our task to detect taxi fraud faces significant challenges, and the previous methods cannot work well. In this paper, we have a critical and interesting observation that fraudulent taxis always play a secret trick, i.e., modifying the taximeter to a smaller scale. As a result, it not only makes the service distance larger but also makes the reported taxi speed larger. Fortunately, the speed information collected from the GPS reports is accurate. Hence, we utilize the speed information to design a system, called the Speed-based Fraud Detection System (SFDS), to model taxi behaviors and detect taxi fraud. Our method is robust to the location errors and independent of the map information and road networks. At the same time, the experiments on real-life data sets confirm that our method has better accuracy, scalability, and more efficient computation, compared with the previous related methods. Finally, interesting findings of our work and discussions on potential issues are provided in this paper for future city transportation and human behavior research.
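The core observation above can be illustrated with a small sketch: if the taximeter has been rescaled, the distance it reports will exceed the distance implied by the trusted GPS speeds. The function below, including its 1.15 flagging threshold, is our simplified illustration rather than the SFDS algorithm itself.

```python
import numpy as np

def meter_scale_estimate(gps_speeds_kmh, metered_distance_km, duration_h):
    """Compare the distance implied by (trusted) GPS speeds with the metered distance.

    A ratio clearly above 1 suggests the taximeter was rescaled. The 1.15
    threshold is an illustrative choice, not a value from the paper.
    """
    gps_distance_km = np.mean(gps_speeds_kmh) * duration_h   # distance implied by GPS speed
    ratio = metered_distance_km / max(gps_distance_km, 1e-6)
    return ratio, ratio > 1.15

ratio, suspicious = meter_scale_estimate([42, 38, 50, 45], metered_distance_km=14.6, duration_h=0.25)
print(f"meter/GPS distance ratio = {ratio:.2f}, suspicious = {suspicious}")
```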
db2e6623c8c0f42e29baf066f4499015c8397dae
Implementation of Running Average Background Subtraction Algorithm in FPGA for Image Processing Applications
In this paper, a new background subtraction algorithm was developed to detect moving objects from a stable system in which visual surveillance plays a major role. Initially it was implemented in MATLAB. Among all existing algorithms, the running average algorithm was chosen because of its low computational complexity, which is a major consideration for timing in VLSI. The concept of background subtraction is to subtract the current image from the reference image and compare the difference with certain threshold values. We propose a new real-time background subtraction algorithm, implemented in Verilog HDL, in order to detect moving objects accurately. Our method involves three important modules: background modelling, adaptive threshold estimation, and finally foreground extraction. Compared to all existing algorithms, our method has low power consumption and low resource utilization. Here we have written the
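For illustration, a minimal software version of the running-average scheme described above can be written as follows; the learning rate and threshold values are placeholders, not the parameters used in the hardware implementation.

```python
import numpy as np

def running_average_subtraction(frames, alpha=0.05, threshold=30):
    """Yield a binary foreground mask for each 8-bit grayscale frame.

    alpha is the background learning rate and threshold the absolute-difference
    cut-off; both values here are illustrative.
    """
    background = frames[0].astype(np.float32)
    for frame in frames[1:]:
        f = frame.astype(np.float32)
        diff = np.abs(f - background)
        mask = (diff > threshold).astype(np.uint8) * 255       # foreground where difference is large
        background = alpha * f + (1.0 - alpha) * background    # update the running average
        yield mask
```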
15684058d73560590931596da9208804c2e14884
Analyzing Neighborhoods of Falsifying Traces in Cyber-Physical Systems
We study the problem of analyzing falsifying traces of cyber-physical systems. Specifically, given a system model and an input which is a counterexample to a property of interest, we wish to understand which parts of the inputs are "responsible" for the counterexample as a whole. Whereas this problem is well known to be hard to solve precisely, we provide an approach based on learning from repeated simulations of the system under test. Our approach generalizes the classic concept of "one-at-a-time" sensitivity analysis used in the risk and decision analysis community to understand how inputs to a system influence a property in question. Specifically, we pose the problem as one of finding a neighborhood of inputs that contains the falsifying counterexample in question, such that each point in this neighborhood corresponds to a falsifying input with a high probability. We use ideas from statistical hypothesis testing to infer and validate such neighborhoods from repeated simulations of the system under test. This approach not only helps to understand the sensitivity of these counterexamples to various parts of the inputs, but also generalizes or widens the given counterexample by returning a neighborhood of counterexamples around it. We demonstrate our approach on a series of increasingly complex examples from automotive and closed loop medical device domains. We also compare our approach against related techniques based on regression and machine learning.
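The following sketch conveys the basic idea of validating a neighborhood around a counterexample by repeated simulation; the box-shaped neighborhood, the Hoeffding-style bound, and the function names are our illustrative assumptions, not the paper's exact statistical procedure.

```python
import math
import random

def falsifying_neighborhood_rate(simulate, is_falsifying, center, radius, trials=200, delta=0.05):
    """Estimate how often inputs near a known counterexample also falsify the property.

    `simulate` and `is_falsifying` are placeholders for the system under test and
    the property monitor; `center` is the falsifying input and `radius` the size
    of the box-shaped neighborhood around it.
    """
    failures = 0
    for _ in range(trials):
        candidate = [x + random.uniform(-radius, radius) for x in center]
        if is_falsifying(simulate(candidate)):
            failures += 1
    rate = failures / trials
    # Hoeffding bound: with probability >= 1 - delta, the true falsification
    # rate in the neighborhood is at least this value.
    lower = rate - math.sqrt(math.log(1.0 / delta) / (2.0 * trials))
    return rate, max(lower, 0.0)
```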
6d41c259fe6588681194f0ba22d26a11ebd5ce3d
Adhesive capsulitis: sonographic changes in the rotator cuff interval with arthroscopic correlation
To evaluate the sonographic findings of the rotator interval in patients with clinical evidence of adhesive capsulitis immediately prior to arthroscopy. We prospectively compared 30 patients with clinically diagnosed adhesive capsulitis (20 females, 10 males, mean age 50 years) with a control population of 10 normal volunteers and 100 patients with a clinical suspicion of rotator cuff tears. Grey-scale and colour Doppler sonography of the rotator interval were used. Twenty-six patients (87%) demonstrated hypoechoic echotexture and increased vascularity within the rotator interval, all of whom had had symptoms for less than 1 year. Three patients had hypoechoic echotexture but no increase in vascularity, and one patient had a normal sonographic appearance. All patients were shown to have fibrovascular inflammatory soft-tissue changes in the rotator interval at arthroscopy commensurate with adhesive capsulitis. None of the volunteers or the patients with a clinical diagnosis of rotator cuff tear showed such changes. Sonography can provide an early accurate diagnosis of adhesive capsulitis by assessing the rotator interval for hypoechoic vascular soft tissue.
14c4e4b83f936184875ba79e6df1ac10ec556bdd
Unsupervised learning of models for object recognition
A method is presented to learn object class models from unlabeled and unsegmented cluttered scenes for the purpose of visual object recognition. The variability across a class of objects is modeled in a principled way, treating objects as flexible constellations of rigid parts (features). Variability is represented by a joint probability density function (pdf) on the shape of the constellation and the output of part detectors. Corresponding “constellation models” can be learned in a completely unsupervised fashion. In a first stage, the learning method automatically identifies distinctive parts in the training set by applying a clustering algorithm to patterns selected by an interest operator. It then learns the statistical shape model using expectation maximization. Mixtures of constellation models can be defined and applied to “discover” object categories in an unsupervised manner. The method achieves very good classification results on human faces, cars, leaves, handwritten letters, and cartoon characters.
88a4f9ab70cdb9aea823d76e54d08c28a17ee501
Latent factor models with additive and hierarchically-smoothed user preferences
Items in recommender systems are usually associated with annotated attributes: e.g., brand and price for products; agency for news articles, etc. Such attributes are highly informative and must be exploited for accurate recommendation. While learning a user preference model over these attributes can result in an interpretable recommender system and can handle the cold start problem, it suffers from two major drawbacks: data sparsity and the inability to model random effects. On the other hand, latent-factor collaborative filtering models have shown great promise in recommender systems; however, their performance on rare items is poor. In this paper we propose a novel model, LFUM, which provides the advantages of both of the above models. We learn user preferences (over the attributes) using a personalized Bayesian hierarchical model that uses a combination (additive model) of a globally learned preference model along with user-specific preferences. To combat data sparsity, we smooth these preferences over the item taxonomy using an efficient forward-filtering and backward-smoothing inference algorithm. Our inference algorithms can handle both discrete attributes (e.g., item brands) and continuous attributes (e.g., item prices). We combine the user preferences with the latent-factor models and train the resulting collaborative filtering system end-to-end using the successful BPR ranking algorithm. In our extensive experimental analysis, we show that our proposed model outperforms several commonly used baselines and we carry out an ablation study showing the benefits of each component of our model.
e5b8caf98c9746525c0b34bd89bf35f431040920
Finger Vein Recognition With Anatomy Structure Analysis
Finger vein recognition has received a lot of attention recently and is viewed as a promising biometric trait. In related methods, vein pattern-based methods explore intrinsic finger vein recognition, but their performance remains unsatisfactory owing to defective vein networks and weak matching. One important reason may be the neglect of deep analysis of the vein anatomy structure. By comprehensively exploring the anatomy structure and imaging characteristic of vein patterns, this paper proposes a novel finger vein recognition framework, including an anatomy structure analysis-based vein extraction algorithm and an integration matching strategy. Specifically, the vein pattern is extracted from the orientation map-guided curvature based on the valley- or half valley-shaped cross-sectional profile. In addition, the extracted vein pattern is further thinned and refined to obtain a reliable vein network. In addition to the vein network, the relatively clear vein branches in the image are mined from the vein pattern, referred to as the vein backbone. In matching, the vein backbone is used in vein network calibration to overcome finger displacements. The similarity of two calibrated vein networks is measured by the proposed elastic matching and further recomputed by integrating the overlap degree of corresponding vein backbones. Extensive experiments on two public finger vein databases verify the effectiveness of the proposed framework.
1e7efea26cfbbcd2905d63451e77a02f1031ea12
A Novel Global Path Planning Method for Mobile Robots Based on Teaching-Learning-Based Optimization
The Teaching-Learning-Based Optimization (TLBO) algorithm has been proposed in recent years. It is a new swarm intelligence optimization algorithm simulating the teaching-learning phenomenon of a classroom. In this paper, a novel global path planning method for mobile robots is presented, which is based on an improved TLBO algorithm called Nonlinear Inertia Weighted Teaching-Learning-Based Optimization (NIWTLBO) algorithm in our previous work. Firstly, the NIWTLBO algorithm is introduced. Then, a new map model of the path between start-point and goal-point is built by coordinate system transformation. Lastly, utilizing the NIWTLBO algorithm, the objective function of the path is optimized; thus, a global optimal path is obtained. The simulation experiment results show that the proposed method has a faster convergence rate and higher accuracy in searching for the path than the basic TLBO and some other algorithms as well, and it can effectively solve the optimization problem for mobile robot global path planning.
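For readers unfamiliar with TLBO, the sketch below shows one iteration of the basic teacher and learner phases for a minimization problem; the paper's NIWTLBO additionally introduces a nonlinear inertia weight, which is omitted here for simplicity.

```python
import numpy as np

def tlbo_step(population, fitness, objective):
    """One teaching-learning iteration of basic TLBO (minimization).

    population: (n, dim) array of learners; fitness: length-n array of objective values.
    """
    n, dim = population.shape
    teacher = population[np.argmin(fitness)]
    mean = population.mean(axis=0)

    # Teacher phase: move learners toward the teacher and away from the class mean.
    tf = np.random.randint(1, 3)                                  # teaching factor, 1 or 2
    candidates = population + np.random.rand(n, dim) * (teacher - tf * mean)
    cand_fit = np.array([objective(c) for c in candidates])
    improved = cand_fit < fitness
    population[improved], fitness[improved] = candidates[improved], cand_fit[improved]

    # Learner phase: each learner moves toward a randomly chosen better learner.
    for i in range(n):
        j = np.random.choice([k for k in range(n) if k != i])
        direction = population[j] - population[i] if fitness[j] < fitness[i] else population[i] - population[j]
        candidate = population[i] + np.random.rand(dim) * direction
        f = objective(candidate)
        if f < fitness[i]:
            population[i], fitness[i] = candidate, f
    return population, fitness
```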
a86fed94c9d97e052d0ff84b2403b10200280c6b
Large Scale Distributed Data Science from scratch using Apache Spark 2.0
Apache Spark is an open-source cluster computing framework. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce, which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala, SQL and R (MapReduce has 2 core calls), and its core data abstraction, the distributed data frame. In addition, it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. With massive amounts of computational power, deep learning has been shown to produce state-of-the-art results on various tasks in different fields like computer vision, automatic speech recognition, natural language processing and online advertising targeting. Thanks to open-source frameworks, e.g. Torch, Theano, Caffe, MxNet, Keras and TensorFlow, we can build deep learning models in a much easier way. Among all these frameworks, TensorFlow is probably the most popular open source deep learning library. TensorFlow 1.0 was released recently, which provides a more stable, flexible and powerful tool for numerical computation using data flow graphs. Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. This tutorial will provide an accessible introduction to large-scale distributed machine learning and data mining, and to Spark and its potential to revolutionize academic and commercial data science practices. It is divided into three parts: the first part will cover fundamental Spark concepts, including Spark Core, functional programming a la map-reduce, data frames, the Spark Shell, Spark Streaming, Spark SQL, MLlib, and more; the second part will focus on hands-on algorithmic design and development with Spark, developing algorithms from scratch such as decision tree learning, association rule mining (aPriori), graph processing algorithms such as pagerank/shortest path, and gradient descent algorithms such as support vector machines and matrix factorization, and will also present industrial applications and deployments of Spark; the third part will introduce deep learning concepts, how to implement a deep learning model through TensorFlow and Keras, and how to run the model on Spark. Example code will be made available in Python (pySpark) notebooks.
070874b011f8eb2b18c8aa521ad0a7a932b4d9ad
Action Recognition with Improved Trajectories
Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.
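A rough sketch of the camera-motion compensation step is given below; note that the paper combines SURF keypoints with dense optical-flow matches and masks out detected humans before the RANSAC fit, whereas this illustration uses ORB features only as an approximation.

```python
import cv2
import numpy as np

def camera_motion_homography(prev_gray, curr_gray):
    """Estimate the frame-to-frame homography that explains camera motion."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust estimate via RANSAC
    return H

def cancel_camera_motion(prev_gray, curr_gray, H):
    """Warp the previous frame with H so the residual optical flow reflects human motion."""
    warped = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
    return cv2.calcOpticalFlowFarneback(warped, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```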
1aad2da473888cb7ebc1bfaa15bfa0f1502ce005
First-Person Activity Recognition: What Are They Doing to Me?
This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.
659fc2a483a97dafb8fb110d08369652bbb759f9
Improving the Fisher Kernel for Large-Scale Image Classification
The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.
014e1186209e4f942f3b5ba29b6b039c8e99ad88
Social interactions: A first-person perspective
This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of a social event, like a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The roles and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first person can provide additional useful cues as to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured from multiple days of experience in amusement parks.
d0f4eb19708c0261aa07519279792e19b793b863
Real-Time EMG Driven Lower Limb Actuated Orthosis for Assistance As Needed Movement Strategy
This paper presents a new approach to control a wearable knee joint exoskeleton driven through the wearer's intention. A realistic bio-inspired musculoskeletal knee joint model is used to control the exoskeleton. This model takes into account changes in muscle length and joint moment arms as well as the dynamics of muscle activation and muscle contraction during lower limb movements. Identification of the model parameters is done through an unconstrained optimization problem formulation. A control law strategy based on the principle of assistance as needed is proposed. This approach guarantees asymptotic stability of the knee joint orthosis and adaptation to human-orthosis interaction. Moreover, the proposed control law is robust with respect to external disturbances. Experimental validations are conducted online on a healthy subject during flexion and extension of the knee joint. The proposed control strategy has shown satisfactory performance in terms of trajectory tracking and adaptation to human task completion.
cf4ed483e83e32e4973fdaf25377f867723c19b0
Coronal positioning of existing gingiva: short term results in the treatment of shallow marginal tissue recession.
Although not a new procedure, coronal positioning of existing gingiva may be used to enhance esthetics and reduce sensitivity. Unfortunately when recession is minimal and the marginal tissue is healthy, many periodontists do not suggest treatment. This article outlines a simple surgical technique with the criteria for its use which results in a high degree of predictability and patient satisfaction.
46cfc1870b1aa0876e213fb08dc23f09420fcca6
Effective Dynamic Voltage Scaling Through CPU-Boundedness Detection
Dynamic voltage scaling (DVS) allows a program to execute at a non-peak CPU frequency in order to reduce CPU power, and hence, energy consumption; however, it is done at the cost of performance degradation. For a program whose execution time is bounded by peripherals’ performance rather than the CPU speed, applying DVS to the program will result in negligible performance penalty. Unfortunately, existing DVS-based power-management algorithms are conservative in the sense that they overly exaggerate the impact that the CPU speed has on the execution time, e.g., they assume that the execution time will double if the CPU speed is halved. Based on a new single-coefficient performance model, we propose a DVS algorithm that detects the CPU-boundedness of a program on the fly (via a regression method on the past MIPS rate) and then adjusts the CPU frequency accordingly. To illustrate its effectiveness, we compare our algorithm with other DVS algorithms on real systems via physical measurements.
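The following sketch illustrates a single-coefficient CPU-boundedness model of the kind described above; the frequency-selection loop, helper names, and the 5% slowdown budget are our illustrative assumptions rather than the paper's exact algorithm.

```python
def choose_frequency(freqs, mips_at_freq, f_max, max_slowdown=0.05):
    """Pick the lowest CPU frequency whose predicted slowdown stays within budget.

    Uses the model T(f) / T(f_max) = 1 + beta * (f_max / f - 1), where beta (the
    CPU-boundedness) is fitted by least squares from observed MIPS rates stored in
    the dict mips_at_freq; freqs must include f_max.
    """
    # Observed relative slowdowns: MIPS(f_max) / MIPS(f) - 1 should equal beta * (f_max / f - 1).
    xs = [f_max / f - 1.0 for f in freqs]
    ys = [mips_at_freq[f_max] / mips_at_freq[f] - 1.0 for f in freqs]
    beta = sum(x * y for x, y in zip(xs, ys)) / max(sum(x * x for x in xs), 1e-12)

    for f in sorted(freqs):                       # try the slowest frequencies first
        slowdown = beta * (f_max / f - 1.0)
        if slowdown <= max_slowdown:
            return f
    return f_max
```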
6761bca93f1631f93967097c9f0859d3c7cdc233
The Relationships among Acceleration , Agility , Sprinting Ability , Speed Dribbling Ability and Vertical Jump Ability in 14-Year-Old Soccer Players
The aim of this study was to evaluate the relationships among acceleration, agility, sprinting ability, speed dribbling ability and vertical jump ability in 14-year-old soccer players. Twenty-five young soccer players (average age 13.52 ± 0.51 years; height 158.81 ± 5.76 cm; weight 48.92 ± 6.48 kg; training age 3.72 ± 0.64 years) performed a series of physical tests: Yo-Yo Intermittent Recovery Test Level 1 (YYIRT); zigzag agility with the ball (ZAWB) and without the ball (ZAWOB); sprinting ability (10-m, 20-m and 30-m); speed dribbling ability (SDA); acceleration ability (FLST); and jumping ability (counter-movement jump (CMJ), squat jump (SJ) and drop jump (DJ)). The results showed that the 10-m sprint was correlated with the 20-m sprint (r = 0.682) and 30-m sprint (r = 0.634), and also with SDA, FLST and ZAWOB (r = 0.540, r = 0.421 and r = 0.533, respectively). Similarly, the 20-m sprint was correlated with the 30-m sprint (r = 0.491) and ZAWOB (r = 0.631). The 30-m sprint was negatively correlated with CMJ (r = -0.435), and strongly to moderately correlated with FLST (r = 0.742) and ZAWOB (r = 0.657). In addition, CMJ was strongly correlated with SJ and DJ (r = 0.779 and r = 0.824, respectively). CMJ, SJ and DJ were negatively correlated with ZAWOB. Furthermore, SDA was strongly to moderately correlated with FLST (r = 0.875), ZAWB (r = 0.718) and ZAWOB (r = 0.645). Similarly, FLST was also correlated with ZAWB and ZAWOB (r = 0.421 and r = 0.614, respectively). Finally, ZAWOB was strongly to moderately correlated with the performance of soccer players in the different field tests. In conclusion, the findings of the present study indicated that agility without the ball was associated with all the physical fitness components. In addition, agility performance affects acceleration, sprint and jumping performances in young soccer players. Therefore, soccer players should focus on agility exercises in order to improve their acceleration, sprinting and jumping performance.
4f72e1cc3fd59bd177a9ad4a6045cb863b977f6e
Vehicle Applications of Controller Area Network
The Controller Area Network (CAN) is a serial bus communications protocol developed by Bosch in the early 1980s. It defines a standard for efficient and reliable communication between sensor, actuator, controller, and other nodes in real-time applications. CAN is the de facto standard in a large variety of networked embedded control systems. The early CAN development was mainly supported by the vehicle industry: CAN is found in a variety of passenger cars, trucks, boats, spacecraft, and other types of vehicles. The protocol is also widely used today in industrial automation and other areas of networked embedded control, with applications in diverse products such as production machinery, medical equipment, building automation, weaving machines, and wheelchairs. In the automotive industry, embedded control has grown from stand-alone systems to highly integrated and networked control systems [11, 7]. By networking electro-mechanical subsystems, it becomes possible to modularize functionalities and hardware, which facilitates reuse and adds capabilities. Fig. 1 shows an example of an electronic control unit (ECU) mounted on a diesel engine of a Scania truck. The ECU handles the control of engine, turbo, fan, etc. but also the CAN communication. Combining networks and mechatronic modules makes it possible to reduce both the cabling and the number
ce5a5ef51ef1b470c00a7aad533d5e8ad2ef9363
A Universal Intelligent System-on-Chip Based Sensor Interface
The need for real-time/reliable/low-maintenance distributed monitoring systems, e.g., wireless sensor networks, has been becoming more and more evident in many applications in the environmental, agro-alimentary, medical, and industrial fields. The growing interest in technologies related to sensors is an important indicator of these new needs. The design and the realization of complex and/or distributed monitoring systems is often difficult due to the multitude of different electronic interfaces presented by the sensors available on the market. To address these issues the authors propose the concept of a Universal Intelligent Sensor Interface (UISI), a new low-cost system based on a single commercial chip able to convert a generic transducer into an intelligent sensor with multiple standardized interfaces. The device presented offers a flexible analog and/or digital front-end, able to interface different transducer typologies (such as conditioned, unconditioned, resistive, current output, capacitive and digital transducers). The device also provides enhanced processing and storage capabilities, as well as a configurable multi-standard output interface (including plug-and-play interface based on IEEE 1451.3). In this work the general concept of UISI and the design of reconfigurable hardware are presented, together with experimental test results validating the proposed device.
830aa3e8a0cd17d77c96695373469ba2af23af38
Efficient Nonsmooth Nonconvex Optimization for Image Restoration and Segmentation
In this article, we introduce variational image restoration and segmentation models that incorporate the L1 data-fidelity measure and a nonsmooth, nonconvex regularizer. The L1 fidelity term allows us to restore or segment an image with low contrast or outliers, and the nonconvex regularizer enables homogeneous regions of the objective function (a restored image or an indicator function of a segmented region) to be efficiently smoothed while edges are well preserved. To handle the nonconvexity of the regularizer, a multistage convex relaxation method is adopted. This provides a better solution than the classical convex total variation regularizer, or than the standard L1 convex relaxation method. Furthermore, we design fast and efficient optimization algorithms that can handle the non-differentiability of both the fidelity and regularization terms. The proposed iterative algorithms asymptotically solve the original nonconvex problems. Our algorithms output a restored image or segmented regions in the image, as well as an edge indicator that characterizes the edges of the output, similar to Mumford–Shah-like regularizing functionals. Numerical examples demonstrate the promising results of the proposed restoration and segmentation models.
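As one concrete (and purely illustrative) instance of the class of energies described above, an L1-fidelity restoration model with a nonconvex regularizer can be written as:

```latex
\min_{u}\; \int_{\Omega} |u - f| \, dx \;+\; \lambda \int_{\Omega} \varphi\!\left(|\nabla u|\right) dx,
\qquad \varphi(t) = \frac{\alpha t}{1 + \alpha t},
```

where f is the observed image, the L1 term tolerates outliers and low contrast, and the nonconvex φ flattens homogeneous regions while preserving edges. This particular φ is only one common choice and is not necessarily the functional used in the article; multistage convex relaxation then replaces φ with a sequence of weighted convex surrogates that are minimized in turn.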
b32895c83296cff134e98e431e6c65d2d6ae9bcf
Hybrid models based on rough set classifiers for setting credit rating decision rules in the global banking industry
Banks are important to national, and even global, economic stability. Banking panics that follow bank insolvency or bankruptcy, especially of large banks, can severely jeopardize economic stability. Therefore, issuers and investors urgently need a credit rating indicator to help identify the financial status and operational competence of banks. A credit rating provides financial entities with an assessment of credit worthiness, investment risk, and default probability. Although numerous models have been proposed to solve credit rating problems, they have the following drawbacks: (1) lack of explanatory power; (2) reliance on the restrictive assumptions of statistical techniques; and (3) numerous variables, which result in multiple dimensions and complex data. To overcome these shortcomings, this work applies two hybrid models that solve the practical problems in credit rating classification. For model verification, this work uses an experimental dataset collected from the Bankscope database for the period 1998–2007. Experimental results demonstrate that the proposed hybrid models for credit rating classification outperform the listing models in this work. A set of decision rules for classifying credit ratings is extracted. Finally, study findings and managerial implications are provided for academics and practitioners.
ad49a96e17d8e1360477bd4649ba8d83173b1c3a
A 5.4-Gbit/s Adaptive Continuous-Time Linear Equalizer Using Asynchronous Undersampling Histograms
We demonstrate a new type of adaptive continuous-time linear equalizer (CTLE) based on asynchronous undersampling histograms. Our CTLE automatically selects the optimal equalizing filter coefficient among several predetermined values by searching for the coefficient that produces the largest peak value in histograms obtained with asynchronous undersampling. This scheme is simple and robust and does not require clock synchronization for its operation. A prototype chip realized in 0.13-μm CMOS technology successfully achieves equalization for 5.4-Gbit/s 2^31 - 1 pseudorandom bit sequence data through 40-, 80-, and 120-cm PCB traces and a 3-m DisplayPort cable. In addition, we present the results of statistical analysis with which we verify the reliability of our scheme for various sample sizes. The results of this analysis are confirmed with experimental data.
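In software terms, the selection rule can be sketched as follows; the one-tap high-pass filter standing in for the analog CTLE settings and the 64-bin histogram are our illustrative simplifications, not the chip's actual circuitry.

```python
import numpy as np

def select_ctle_coefficient(samples, candidate_alphas):
    """Pick the equalizer setting whose output histogram has the tallest peak.

    `samples` are asynchronously undersampled received-signal values; each alpha
    parameterizes a simple digital stand-in y[n] = x[n] - alpha * x[n-1] for one
    predetermined CTLE coefficient.
    """
    best_alpha, best_peak = None, -1
    for alpha in candidate_alphas:
        equalized = samples[1:] - alpha * samples[:-1]
        counts, _ = np.histogram(equalized, bins=64)
        peak = counts.max()          # well-equalized data clusters tightly at the two eye levels
        if peak > best_peak:
            best_alpha, best_peak = alpha, peak
    return best_alpha
```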
b620ce13b5dfaabc8b837b569a2265ff8c0e4e71
Social media and internet-based data in global systems for public health surveillance: a systematic review.
CONTEXT The exchange of health information on the Internet has been heralded as an opportunity to improve public health surveillance. In a field that has traditionally relied on an established system of mandatory and voluntary reporting of known infectious diseases by doctors and laboratories to governmental agencies, innovations in social media and so-called user-generated information could lead to faster recognition of cases of infectious disease. More direct access to such data could enable surveillance epidemiologists to detect potential public health threats such as rare, new diseases or early-level warnings for epidemics. But how useful are data from social media and the Internet, and what is the potential to enhance surveillance? The challenges of using these emerging surveillance systems for infectious disease epidemiology, including the specific resources needed, technical requirements, and acceptability to public health practitioners and policymakers, have wide-reaching implications for public health surveillance in the 21st century. METHODS This article divides public health surveillance into indicator-based surveillance and event-based surveillance and provides an overview of each. We did an exhaustive review of published articles indexed in the databases PubMed, Scopus, and Scirus between 1990 and 2011 covering contemporary event-based systems for infectious disease surveillance. FINDINGS Our literature review uncovered no event-based surveillance systems currently used in national surveillance programs. While much has been done to develop event-based surveillance, the existing systems have limitations. Accordingly, there is a need for further development of automated technologies that monitor health-related information on the Internet, especially to handle large amounts of data and to prevent information overload. The dissemination to health authorities of new information about health events is not always efficient and could be improved. No comprehensive evaluations show whether event-based surveillance systems have been integrated into actual epidemiological work during real-time health events. CONCLUSIONS The acceptability of data from the Internet and social media as a regular part of public health surveillance programs varies and is related to a circular challenge: the willingness to integrate is rooted in a lack of effectiveness studies, yet such effectiveness can be proved only through a structured evaluation of integrated systems. Issues related to changing technical and social paradigms in both individual perceptions of and interactions with personal health data, as well as social media and other data from the Internet, must be further addressed before such information can be integrated into official surveillance systems.
af0f4459cd6a41a582e309a461a4cfa4846edefd
Functional cultures and health benefits
A number of health benefits have been claimed for probiotic bacteria such as Lactobacillus acidophilus, Bifidobacterium spp., and L. casei. These benefits include antimutagenic effects, anticarcinogenic properties, improvement in lactose metabolism, reduction in serum cholesterol, and immune system stimulation. Because of the potential health benefits, these organisms are increasingly being incorporated into dairy foods, particularly yoghurt. In addition to yoghurt, fermented functional foods with health benefits based on bioactive peptides released by probiotic organisms, including Evolus and Calpis, have been introduced in the market. To maximize effectiveness of bifidus products, prebiotics are used in probiotic foods. Synbiotics are products that contain both prebiotics and probiotics.
139ded7450fc0f838f8784053f656114cbdb9a0d
Good Question! Statistical Ranking for Question Generation
We address the challenge of automatically generating questions from reading materials for educational practice and assessment. Our approach is to overgenerate questions, then rank them. We use manually written rules to perform a sequence of general purpose syntactic transformations (e.g., subject-auxiliary inversion) to turn declarative sentences into questions. These questions are then ranked by a logistic regression model trained on a small, tailored dataset consisting of labeled output from our system. Experimental results show that ranking nearly doubles the percentage of questions rated as acceptable by annotators, from 27% of all questions to 52% of the top ranked 20% of questions.
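A minimal sketch of the overgenerate-and-rank step is shown below; the feature design and the labeled data are placeholders for the small, tailored dataset the paper describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_questions(features, acceptability_labels, candidate_features, candidates):
    """Train an acceptability ranker and return candidate questions, best first.

    `features` / `acceptability_labels` are vectors and 0/1 judgments for
    previously rated system output; `candidate_features` / `candidates` are the
    overgenerated questions to be ranked.
    """
    ranker = LogisticRegression(max_iter=1000).fit(features, acceptability_labels)
    scores = ranker.predict_proba(candidate_features)[:, 1]   # P(acceptable | question)
    order = np.argsort(-scores)
    return [(candidates[i], float(scores[i])) for i in order]
```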
1e8233a8c8271c3278f1b84bed368145c0034a35
Maximizing Throughput of Overprovisioned HPC Data Centers Under a Strict Power Budget
Building future generation supercomputers while constraining their power consumption is one of the biggest challenges faced by the HPC community. For example, US Department of Energy has set a goal of 20 MW for an exascale (10^18 flops) supercomputer. To realize this goal, a lot of research is being done to revolutionize hardware design to build power efficient computers and network interconnects. In this work, we propose a software-based online resource management system that leverages hardware facilitated capability to constrain the power consumption of each node in order to optimally allocate power and nodes to a job. Our scheme uses this hardware capability in conjunction with an adaptive runtime system that can dynamically change the resource configuration of a running job allowing our resource manager to re-optimize allocation decisions to running jobs as new jobs arrive, or a running job terminates. We also propose a performance modeling scheme that estimates the essential power characteristics of a job at any scale. The proposed online resource manager uses these performance characteristics for making scheduling and resource allocation decisions that maximize the job throughput of the supercomputer under a given power budget. We demonstrate the benefits of our approach by using a mix of jobs with different power-response characteristics. We show that with a power budget of 4.75 MW, we can obtain up to 5.2X improvement in job throughput when compared with the SLURM scheduling policy that is power-unaware. We corroborate our results with real experiments on a relatively small scale cluster, in which we obtain a 1.7X improvement.
6129378ecc501b88f4fbfe3c0dfc20d09764c5ee
The Impact of Flow on Online Consumer Behavior
Previous research has acknowledged flow as a useful construct for explaining online consumer behavior. However, there is a dearth of knowledge about which dimensions of flow actually influence online consumer behavior, and how, because flow is difficult to conceptualize and measure. This research examines flow and its effects on online consumer behavior in a unified model which draws upon the theory of planned behavior (TPB). The four important dimensions of flow (concentration, enjoyment, time distortion, telepresence) are explored in terms of their antecedent effects on online consumer behavior. Results of this empirical study show that flow influences online consumer behavior through several important latent constructs. Findings of this research not only extend the existing knowledge of flow and its antecedent effects on online consumer behavior but also provide new insights into how flow can be conceptualized and studied in the e-commerce setting.
9562f19f1b6e6bfaeb02f39dc12e6fc262938543
DuDe: The Duplicate Detection Toolkit
Duplicate detection, also known as entity matching or record linkage, was first defined by Newcombe et al. [19] and has been a research topic for several decades. The challenge is to effectively and efficiently identify pairs of records that represent the same real world entity. Researchers have developed and described a variety of methods to measure the similarity of records and/or to reduce the number of required comparisons. Comparing these methods to each other is essential to assess their quality and efficiency. However, it is still difficult to compare results, as there usually are differences in the evaluated datasets, the similarity measures, the implementation of the algorithms, or simply the hardware on which the code is executed. To face this challenge, we are developing the comprehensive duplicate detection toolkit “DuDe”. DuDe already provides multiple methods and datasets for duplicate detection and consists of several components with clear interfaces that can be easily served with individual code. In this paper, we present the DuDe architecture and its workflow for duplicate detection. We show that DuDe allows to easily compare different algorithms and similarity measures, which is an important step towards a duplicate detection benchmark. 1. DUPLICATE DETECTION FRAMEWORKS The basic problem of duplicate detection has been studied under various names, such as entity matching, record linkage, merge/purge or record reconciliation. Given a set of entities, the goal is to identify the represented set of distinct real-world entities. Proposed algorithms in the area of duplicate detection aim to improve the efficiency or the effectiveness of the duplicate detection process. The goal of efficiency is usually to reduce the number of pairwise comparisons. In a naive approach this is quadratic in the number of records. By making intelligent guesses which records have a high probability of representing the same real-world entity, the search space is reduced with the drawback that some duplicates might be missed. Effectiveness, on the other hand, aims at classifying pairs of records accurately as duplicate or non-duplicate [17]. Elmagarmid et al. have compiled a survey of existing algorithms and techniques for duplicate detection [11]. Köpcke and Rahm give a comprehensive overview about existing duplicate detection frameworks [15]. They compare eleven frameworks and distinguish between frameworks without training (BN [16], MOMA [24], SERF [1]), training-based frameworks (Active Atlas [22], [23], MARLIN [2, 3], Multiple Classifier System [27], Operator Trees [4]) and hybrid frameworks (TAILOR [10], FEBRL [6], STEM [14], Context Based Framework [5]). Not included in the overview is STRINGER [12], which deals with approximate string matching in large data sources. Köpcke and Rahm use several comparison criteria, such as supported entity types (e.g.
relational entities, XML), availability of partitioning methods to reduce the search space, used matchers to determine whether two entities are similar enough to represent the same real-world entity, the ability to combine several matchers, and, where necessary, the selection of training data. In their summary, Köpcke and Rahm criticize that the frameworks use diverse methodologies, measures, and test problems for evaluation and that it is therefore difficult to assess the efficiency and effectiveness of each single system. They argue that standardized entity matching benchmarks are needed and that researchers should provide prototype implementations and test data with their algorithms. This agrees with Neiling et al. [18], where desired properties of a test framework for object identification solutions are discussed. Moreover, Weis et al. [25] argue for a duplicate detection benchmark. Both papers see the necessity for standardized data from real-world or artificial datasets, which must also contain information about the real-world pairs. Additionally, clearly defined quality criteria with a description of their computation, and a detailed specification of the test procedure are required. An overview of quality and complexity measures for data linkage and deduplication can be found in Christen and Goiser [7]. With DuDe, we provide a toolkit for duplicate detection that can easily be extended by new algorithms and components. Conducted experiments are comprehensible and can be compared with former ones. Additionally, several algorithms, similarity measures, and datasets with gold standards are provided, which is a requirement for a duplicate detection benchmark. DuDe and several datasets are available for download at http://www.hpi.uni-potsdam.de/naumann/projekte/dude.html.
c71217b2b111a51a31cf1107c71d250348d1ff68
One Network to Solve Them All — Solving Linear Inverse Problems Using Deep Projection Models
While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods — we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.
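The iterative scheme described above can be sketched as projected gradient descent, alternating a data-consistency step with a projection step. The projector below is a crude smoothing placeholder standing in for the trained quasi-projection network, and the measurement operator and problem sizes are made up; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def projector(x):
    # Stand-in for the learned quasi-projection onto the set of natural images;
    # here just a mild smoothing so the sketch runs without a trained network.
    return 0.9 * x + 0.1 * np.roll(x, 1)

def solve_linear_inverse(A, y, step=0.1, iters=200):
    # Projected gradient descent: a gradient step on ||Ax - y||^2
    # followed by the (learned) projection, as in the framework described above.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)   # data-consistency step
        x = projector(x)                   # projection step
    return x

rng = np.random.default_rng(0)
x_true = rng.normal(size=32)
A = rng.normal(size=(16, 32)) / 4.0        # under-determined measurement operator
y = A @ x_true
x_hat = solve_linear_inverse(A, y)
print("residual:", np.linalg.norm(A @ x_hat - y))
```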
df3a7c3b90190a91ca645894da25c56ebe13b6e6
Automatic Deobfuscation and Reverse Engineering of Obfuscated Code
14ad0bb3167974a17bc23c94e2a8644e93e57d76
Static test case prioritization using topic models
Software development teams use test suites to test changes to their source code. In many situations, the test suites are so large that executing every test for every source code change is infeasible, due to time and resource constraints. Development teams need to prioritize their test suite so that as many distinct faults as possible are detected early in the execution of the test suite. We consider the problem of static black-box test case prioritization (TCP), where test suites are prioritized without the availability of the source code of the system under test (SUT). We propose a new static black-box TCP technique that represents test cases using a previously unused data source in the test suite: the linguistic data of the test cases, i.e., their identifier names, comments, and string literals. Our technique applies a text analysis algorithm called topic modeling to the linguistic data to approximate the functionality of each test case, allowing our technique to give high priority to test cases that test different functionalities of the SUT. We compare our proposed technique with existing static black-box TCP techniques in a case study of multiple real-world open source systems: several versions of Apache Ant and Apache Derby. We find that our static black-box TCP technique outperforms existing static black-box TCP techniques, and has comparable or better performance than two existing execution-based TCP techniques. Static black-box TCP methods are widely applicable because the only input they require is the source code of the test cases themselves. This contrasts with other TCP techniques which require access to the SUT runtime behavior, to the SUT specification models, or to the SUT source code.
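A minimal sketch of topic-based prioritization, assuming the linguistic data of each test case (identifier names, comments, string literals) has already been extracted into a small bag of words: scikit-learn's LDA approximates each test's functionality as a topic mixture, and a greedy farthest-first ordering pushes functionally dissimilar tests to the front. The toy test names and documents are invented, and the authors' actual pipeline and parameters may differ.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy linguistic data extracted from test cases (identifiers, comments, literals).
test_docs = {
    "testLoginValid":   "login user password session authenticate",
    "testLoginLocked":  "login user locked account error authenticate",
    "testReportExport": "report export csv file format",
    "testReportTotals": "report totals sum aggregate table",
}

names = list(test_docs)
counts = CountVectorizer().fit_transform(test_docs.values())
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

def prioritize(names, topics):
    # Greedy farthest-first ordering: repeatedly pick the test whose topic mixture
    # is most dissimilar from the tests already selected, so that functionally
    # different tests are executed early.
    order = [0]
    remaining = set(range(1, len(names)))
    while remaining:
        def min_dist(i):
            return min(np.linalg.norm(topics[i] - topics[j]) for j in order)
        nxt = max(remaining, key=min_dist)
        order.append(nxt)
        remaining.remove(nxt)
    return [names[i] for i in order]

print(prioritize(names, topics))
```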
04e54373e680e507908b77b641d57da80ecf77a2
A survey of current paradigms in machine translation
This paper is a survey of the current machine translation research in the US, Europe, and Japan. A short history of machine translation is presented first, followed by an overview of the current research work. Representative examples of a wide range of different approaches adopted by machine translation researchers are presented. These are described in detail along with a discussion of the practicalities of scaling up these approaches for operational environments. In support of this discussion, issues in, and techniques for, evaluating machine translation systems are addressed.
bbceefc86d8744c9d929a0930721f8415df97b11
On-line monitoring of power curves
A data-driven approach to the performance analysis of wind turbines is presented. Turbine performance is captured with a power curve. The power curves are constructed using historical wind turbine data. Three power curve models are developed: the first by the least squares method and the second by the maximum likelihood estimation method. These models are solved by an evolutionary strategy algorithm. The power curve model constructed by the least squares method outperforms the one built by the maximum likelihood approach. The third model is non-parametric and is built with the k-nearest neighbor (k-NN) algorithm. The least squares (parametric) model and the non-parametric model are used for on-line monitoring of the power curve and their performance is analyzed.
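A rough sketch of the two kinds of power curve models and their use for monitoring, on synthetic data: a parametric curve fitted by least squares (a logistic shape is assumed here purely for illustration, and scipy's curve_fit replaces the evolutionary strategy used in the paper) and a non-parametric k-NN curve, with new observations flagged when they deviate strongly from the expected power.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
wind = rng.uniform(3, 20, size=300)                       # wind speed [m/s]
rated = 1500.0                                            # rated power [kW]
power = rated / (1 + np.exp(-(wind - 9.0))) + rng.normal(0, 40, size=wind.size)

# Parametric power curve fitted by least squares (logistic shape assumed here).
def logistic(v, p_max, v_half, slope):
    return p_max / (1 + np.exp(-slope * (v - v_half)))

params, _ = curve_fit(logistic, wind, power, p0=[1500, 9, 1])

# Non-parametric power curve with the k-nearest neighbor algorithm.
knn = KNeighborsRegressor(n_neighbors=15).fit(wind.reshape(-1, 1), power)

# On-line monitoring: flag observations that deviate strongly from the expected curve.
new_wind = np.array([[8.0], [12.0]])
new_power = np.array([300.0, 1450.0])
expected = knn.predict(new_wind)
residual_std = np.std(power - knn.predict(wind.reshape(-1, 1)))
alarm = np.abs(new_power - expected) > 3 * residual_std
print(params, expected, alarm)
```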
e0306476d4c3021cbda7189ee86390d81a6d7e36
The infrared camera-based system to evaluate the human sleepiness
Eye blinking is a significant indicator of sleepiness. Existing systems for blink detection and sleepiness analysis usually require fixing a camera to a spectacle frame or a special helmet, which is inconvenient and can affect the obtained results. In this paper, an infrared camera-based contactless system is proposed to evaluate human sleepiness. Infrared light switching is used to detect the pupil in each frame and, as a result, the blink event. An active pan-tilt unit makes free head movements possible. An algorithm is presented to process the camera frames in order to distinguish involuntary blinks from voluntary ones. Preliminary experimental tests are reported to validate the proposed hardware and software system.
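The frame-processing idea can be illustrated as follows, assuming the pupil detector already yields a per-frame visible/not-visible flag. The frame rate and the duration threshold separating involuntary from voluntary blinks are illustrative values, not the ones used by the authors.

```python
# Classify blink events from a per-frame pupil-visibility signal (True = pupil found).
# Short closures are treated as involuntary blinks, long closures as voluntary ones.

FRAME_RATE = 30.0            # frames per second (assumed)
VOLUNTARY_MIN_S = 0.4        # illustrative duration threshold in seconds

def extract_blinks(pupil_visible):
    # Find maximal runs of frames in which the pupil is not detected.
    blinks, start = [], None
    for i, visible in enumerate(pupil_visible):
        if not visible and start is None:
            start = i
        elif visible and start is not None:
            blinks.append((start, i))
            start = None
    if start is not None:
        blinks.append((start, len(pupil_visible)))
    return blinks

def classify_blinks(pupil_visible):
    result = []
    for start, end in extract_blinks(pupil_visible):
        duration = (end - start) / FRAME_RATE
        kind = "voluntary" if duration >= VOLUNTARY_MIN_S else "involuntary"
        result.append((start, end, kind))
    return result

signal = [True] * 20 + [False] * 4 + [True] * 30 + [False] * 18 + [True] * 10
print(classify_blinks(signal))   # one short (involuntary) and one long (voluntary) closure
```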
15340aab3a8b8104117f5462788c945194bce782
Context-Independent Claim Detection for Argument Mining
Argumentation mining aims to automatically identify structured argument data from unstructured natural language text. This challenging, multifaceted task has recently been gaining growing attention, especially due to its many potential applications. One particularly important aspect of argumentation mining is claim identification. Most current approaches are engineered to address specific domains. However, argumentative sentences are often characterized by common rhetorical structures, independently of the domain. We thus propose a method that exploits structured parsing information to detect claims without resorting to contextual information, and yet achieves performance comparable to that of state-of-the-art methods that heavily rely on context.
49e1066b2e61c6ef5b935355cd9b8a0283963288
Identifying Appropriate Support for Propositions in Online User Comments
The ability to analyze the adequacy of supporting information is necessary for determining the strength of an argument. This is especially the case for online user comments, which often consist of arguments lacking proper substantiation and reasoning. Thus, we develop a framework for automatically classifying each proposition as UNVERIFIABLE, VERIFIABLE NON-EXPERIENTIAL, or VERIFIABLE EXPERIENTIAL, where the appropriate type of support is reason, evidence, and optional evidence, respectively. Once the existing support for propositions is identified, this classification can provide an estimate of how adequately the arguments have been supported. We build a gold-standard dataset of 9,476 sentences and clauses from 1,047 comments submitted to an eRulemaking platform and find that Support Vector Machine (SVM) classifiers trained with n-grams and additional features capturing verifiability and experientiality exhibit statistically significant improvement over the unigram baseline, achieving a macro-averaged F1 of 68.99%.
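A minimal sketch of the classification setup with scikit-learn, using only n-gram features (the additional verifiability and experientiality features are omitted) and a handful of invented propositions in place of the 9,476-instance gold-standard dataset.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# Tiny illustrative propositions; the real dataset has 9,476 labeled sentences/clauses.
texts = [
    "The agency should reconsider this rule.",            # UNVERIFIABLE
    "This policy is simply unfair to small businesses.",  # UNVERIFIABLE
    "The report states that emissions fell by 12%.",      # VERIFIABLE NON-EXPERIENTIAL
    "Official statistics show a decline in accidents.",   # VERIFIABLE NON-EXPERIENTIAL
    "I waited six months for a response to my claim.",    # VERIFIABLE EXPERIENTIAL
    "My own application was rejected twice last year.",   # VERIFIABLE EXPERIENTIAL
]
labels = ["unverifiable", "unverifiable",
          "non-experiential", "non-experiential",
          "experiential", "experiential"]

# Unigram + bigram features feeding a linear SVM, evaluated with macro-averaged F1.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
predictions = model.predict(texts)
print(f1_score(labels, predictions, average="macro"))
```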
652a0ac5aea769387ead37225829e7dfea562bdc
Why do humans reason? Arguments for an argumentative theory.
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.
97876c2195ad9c7a4be010d5cb4ba6af3547421c
Report on a general problem-solving program.
436094815ec69b668b0895ec4f301c1fd63a8ce6
Effect of Speech Recognition Errors on Text Understandability for People who are Deaf or Hard of Hearing
Recent advancements in the accuracy of Automated Speech Recognition (ASR) technologies have made them a potential candidate for the task of captioning. However, the presence of errors in the output may present challenges in their use in a fully automatic system. In this research, we look more closely at the impact of different inaccurate transcriptions from the ASR system on the understandability of captions for Deaf or Hard-of-Hearing (DHH) individuals. Through a user study with 30 DHH users, we studied the effect of the presence of an error in a text on its understandability for DHH users. We also investigated different prediction models to capture this relation accurately. Among other models, our random-forest-based model provided the best mean accuracy of 62.04% on the task. Further, we plan to improve this model with more data and use it to advance our investigation of ASR technologies to improve ASR-based captioning for DHH users.
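A sketch of this kind of prediction task with a random forest, on synthetic data: the features (word error rate, number of errors, whether a content word was corrupted) and the labeling rule are invented for illustration and are not the feature set or data from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative features per caption: [word error rate, number of errors,
# 1 if a content word was corrupted else 0]; label: 1 = understandable, 0 = not.
rng = np.random.default_rng(42)
wer = rng.uniform(0.0, 0.5, size=200)
n_errors = rng.integers(0, 6, size=200)
content_word_hit = rng.integers(0, 2, size=200)
X = np.column_stack([wer, n_errors, content_word_hit])

# Synthetic ground truth: higher error rate and corrupted content words
# make a caption less likely to be judged understandable.
y = ((wer < 0.2) & ~((content_word_hit == 1) & (n_errors > 2))).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print("mean accuracy:", scores.mean())
```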
6c165e30b7621cbbfc088ef0bd813330a5b1450c
IoT security: A layered approach for attacks & defenses
Internet of Things (IoT) has been a massive advancement in the Information and Communication Technology (ICT). It is projected that over 50 billion devices will become part of the IoT in the next few years. Security of the IoT network should be the foremost priority. In this paper, we evaluate the security challenges in the four layers of the IoT architecture and their solutions proposed from 2010 to 2016. Furthermore, important security technologies like encryption are also analyzed in the IoT context. Finally, we discuss countermeasures of the security attacks on different layers of IoT and highlight the future research directions within the IoT architecture.
7925d49dfae7e062d6cf39416a0c3105dd2414c6
Foldio: Digital Fabrication of Interactive and Shape-Changing Objects With Foldable Printed Electronics
Foldios are foldable interactive objects with embedded input sensing and output capabilities. Foldios combine the advantages of folding for thin, lightweight and shape-changing objects with the strengths of thin-film printed electronics for embedded sensing and output. To enable designers and end-users to create highly custom interactive foldable objects, we contribute a new design and fabrication approach. It makes it possible to design the foldable object in a standard 3D environment and to easily add interactive high-level controls, eliminating the need to manually design a fold pattern and low-level circuits for printed electronics. Second, we contribute a set of printable user interface controls for touch input and display output on folded objects. Moreover, we contribute controls for sensing and actuation of shape-changeable objects. We demonstrate the versatility of the approach with a variety of interactive objects that have been fabricated with this framework.
1e1c3c3f1ad33e1424584ba7b2f5b6681b842dce
Empirical Guidance on Scatterplot and Dimension Reduction Technique Choices
To verify cluster separation in high-dimensional data, analysts often reduce the data with a dimension reduction (DR) technique, and then visualize it with 2D Scatterplots, interactive 3D Scatterplots, or Scatterplot Matrices (SPLOMs). With the goal of providing guidance between these visual encoding choices, we conducted an empirical data study in which two human coders manually inspected a broad set of 816 scatterplots derived from 75 datasets, 4 DR techniques, and the 3 previously mentioned scatterplot techniques. Each coder scored all color-coded classes in each scatterplot in terms of their separability from other classes. We analyze the resulting quantitative data with a heatmap approach, and qualitatively discuss interesting scatterplot examples. Our findings reveal that 2D scatterplots are often 'good enough', that is, neither SPLOM nor interactive 3D adds notably more cluster separability with the chosen DR technique. If 2D is not good enough, the most promising approach is to use an alternative DR technique in 2D. Beyond that, SPLOM occasionally adds additional value, and interactive 3D rarely helps but often hurts in terms of poorer class separation and usability. We summarize these results as a workflow model and implications for design. Our results offer guidance to analysts during the DR exploration process.
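The basic workflow of reducing labeled high-dimensional data to 2D and visually checking class separability can be sketched as follows; PCA and the Iris dataset are stand-ins for the DR techniques and datasets examined in the study.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Reduce a labeled high-dimensional dataset to 2D and inspect class separability
# in a single 2D scatterplot before reaching for SPLOMs or interactive 3D.
data = load_iris()
points_2d = PCA(n_components=2).fit_transform(data.data)

plt.scatter(points_2d[:, 0], points_2d[:, 1], c=data.target, cmap="viridis", s=15)
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("PCA projection, colored by class")
plt.show()
```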
c8b16a237b5f46b8ad6de013d140ddba41fff614
Genetic analysis of host resistance: Toll-like receptor signaling and immunity at large.
Classical genetic methods, driven by phenotype rather than hypotheses, generally permit the identification of all proteins that serve nonredundant functions in a defined biological process. Long before this goal is achieved, and sometimes at the very outset, genetics may cut to the heart of a biological puzzle. So it was in the field of mammalian innate immunity. The positional cloning of a spontaneous mutation that caused lipopolysaccharide resistance and susceptibility to Gram-negative infection led directly to the understanding that Toll-like receptors (TLRs) are essential sensors of microbial infection. Other mutations, induced by the random germ line mutagen ENU (N-ethyl-N-nitrosourea), have disclosed key molecules in the TLR signaling pathways and helped us to construct a reasonably sophisticated portrait of the afferent innate immune response. A still broader genetic screen--one that detects all mutations that compromise survival during infection--is permitting fresh insight into the number and types of proteins that mammals use to defend themselves against microbes.
259c25242db4a0dc1e1b5e61fd059f8949bdb79d
Parallel geometric algorithms for multi-core computers
Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points in Delaunay triangulations for mesh generation algorithms or simply computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on the Computational Geometry Algorithms Library (CGAL, http://www.cgal.org/). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention.
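The spatial-sorting step mentioned in (a) can be illustrated with a simple sequential Morton-order (Z-order) sort; this is only a sketch of the idea, not the parallel CGAL implementation, and the bit depth and the restriction to 2D points are simplifications.

```python
def interleave_bits(x, y, bits=16):
    # Build the Morton (Z-order) code by interleaving the bits of x and y.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def spatial_sort(points, bits=16):
    # Quantize coordinates to a grid and sort points along the Z-order curve,
    # so that points that are close in space end up close in the ordering.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale = (1 << bits) - 1
    def quantize(v, lo, hi):
        return int((v - lo) / ((hi - lo) or 1.0) * scale)
    def morton_key(p):
        return interleave_bits(quantize(p[0], min(xs), max(xs)),
                               quantize(p[1], min(ys), max(ys)), bits)
    return sorted(points, key=morton_key)

points = [(0.9, 0.9), (0.1, 0.1), (0.12, 0.15), (0.85, 0.05), (0.88, 0.92)]
print(spatial_sort(points))
```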
a9116f261c1b6ca543cba3ee95f846ef3934efad
Facial Component-Landmark Detection With Weakly-Supervised LR-CNN
In this paper, we propose a weakly supervised landmark-region-based convolutional neural network (LR-CNN) framework to detect facial components and landmarks simultaneously. Most existing coarse-to-fine facial detectors fail to detect landmarks accurately without large amounts of fully labeled data, which are costly to obtain. We can handle the task with a small amount of finely labeled data. First, deep convolutional generative adversarial networks are utilized to generate training samples with weak labels, as data preparation. Then, through weakly supervised learning, our LR-CNN model can be trained effectively with a small amount of finely labeled data and a large amount of generated weakly labeled data. Notably, our approach can handle the situation when large occlusion areas occur, as we localize visible facial components before predicting corresponding landmarks. Detecting unoccluded components first helps us to focus on the informative area, resulting in better performance. Additionally, to improve the performance of the above tasks, we design two models as follows: 1) we add AnchorAlign in the region proposal networks to accurately localize components and 2) we propose a two-branch model consisting of a classification branch and a regression branch to detect landmarks. Extensive evaluations on benchmark datasets indicate that our proposed approach is able to complete the multi-task facial detection and outperforms state-of-the-art facial component and landmark detection algorithms.
089e7c81521f43c5f4ae0ec967d668bc9ea73db7
On-Line Fingerprint Verification
Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is so tedious, time-consuming, and expensive that it is incapable of meeting today’s increasing performance requirements. An automatic fingerprint identification system (AFIS) is widely needed. It plays a very important role in forensic and civilian applications such as criminal identification, access control, and ATM card verification. This paper describes the design and implementation of an on-line fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al., which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an on-line inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on an average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of on-line verification with high accuracy.
16155ac9c52a11f732a020adaad457c36655969c
Improving Acoustic Models in TORGO Dysarthric Speech Database
Assistive speech-based technologies can improve the quality of life for people affected by dysarthria, a motor speech disorder. In this paper, we explore multiple ways to improve Gaussian mixture model and deep neural network (DNN) based hidden Markov model (HMM) automatic speech recognition systems for the TORGO dysarthric speech database. This work shows significant improvements over previous attempts at building such systems on TORGO. We trained speaker-specific acoustic models by tuning various acoustic model parameters, using speaker-normalized cepstral features, and building complex DNN-HMM models with dropout and sequence-discrimination strategies. The DNN-HMM models for severe and severe-moderate dysarthric speakers were further improved by transferring specific information from dysarthric speech to DNN models trained on audio from both dysarthric and normal speech, using a generalized distillation framework. To the best of our knowledge, this paper presents the best recognition accuracies for the TORGO database to date.
ac4a2337afdf63e9b3480ce9025736d71f8cec1a
A wearable system to assist walking of Parkinson s disease patients.
BACKGROUND About 50% of patients with advanced Parkinson's disease (PD) suffer from freezing of gait (FOG), which is a sudden and transient inability to walk. It often causes falls, interferes with daily activities, and significantly impairs quality of life. Because gait deficits in PD patients are often resistant to pharmacologic treatment, effective non-pharmacologic treatments are of special interest. OBJECTIVES The goal of our study is to evaluate the concept of a wearable device that obtains real-time gait data, processes them, and provides assistance based on pre-determined specifications. METHODS We developed a real-time wearable FOG detection system that automatically provides a cueing sound when FOG is detected, which persists until the subject resumes walking. We evaluated our wearable assistive technology in a study with 10 PD patients. Over eight hours of data were recorded and a questionnaire was filled out by each patient. RESULTS Two hundred and thirty-seven FOG events were identified by professional physiotherapists in a post-hoc video analysis. The device detected the FOG events online with a sensitivity of 73.1% and a specificity of 81.6% on a 0.5 sec frame-based evaluation. CONCLUSIONS With this study we show that online assistive feedback for PD patients is possible. We present and discuss the patients' and physiotherapists' perspectives on the wearability and performance of the wearable assistant as well as the patients' gait performance when using it, and point out the next research steps. Our results demonstrate the benefit of such a context-aware system and motivate further studies.
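The 0.5-second frame-based evaluation can be sketched as follows; the annotation and detector output below are made up, and only the sensitivity/specificity computation reflects the evaluation described above.

```python
import numpy as np

def frame_based_evaluation(truth, detected):
    # truth/detected: boolean arrays, one entry per 0.5 s frame
    # (True = freezing of gait present / detected in that frame).
    truth, detected = np.asarray(truth), np.asarray(detected)
    tp = np.sum(truth & detected)
    fn = np.sum(truth & ~detected)
    tn = np.sum(~truth & ~detected)
    fp = np.sum(~truth & detected)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Made-up annotation and detector output over 20 half-second frames.
truth    = np.array([0,0,0,1,1,1,1,0,0,0,0,0,1,1,0,0,0,0,0,0], dtype=bool)
detected = np.array([0,0,0,0,1,1,1,1,0,0,0,0,1,1,0,0,1,0,0,0], dtype=bool)
print(frame_based_evaluation(truth, detected))   # (sensitivity, specificity)
```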
eafcdab44124661cdeba5997d4e2ca3cf5a7627e
Acne and Rosacea
Acne, one of the most common skin diseases, affects approximately 85% of the adolescent population, and occurs most prominently at skin sites with a high density of sebaceous glands such as the face, back, and chest. Although often considered a disease of teenagers, acne is occurring at an increasingly early age. Rosacea is a chronic facial inflammatory dermatosis characterized by flushing (or transient facial erythema), persistent central facial erythema, inflammatory papules/pustules, and telangiectasia. Both acne and rosacea have a multifactorial pathology that is incompletely understood. Increased sebum production, keratinocyte hyper-proliferation, inflammation, and altered bacterial colonization with Propionibacterium acnes are considered to be the underlying disease mechanisms in acne, while the multifactorial pathology of rosacea is thought to involve both vasoactive and neurocutaneous mechanisms. Several advances have taken place in the past decade in the research field of acne and rosacea, encompassing pathogenesis and epidemiology, as well as the development of new therapeutic interventions. In this article, we provide an overview of current perspectives on the pathogenesis and treatment of acne and rosacea, including a summary of findings from recent landmark pathophysiology studies considered to have important implications for future clinical practice. The advancement of our knowledge of the different pathways and regulatory mechanisms underlying acne and rosacea is thought to lead to further advances in the therapeutic pipeline for both conditions, ultimately providing a greater array of treatments to address gaps in current management practices.
59aa6691d7122074cc069e6d9952a2e83e428af5
Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders
Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a per-pixel reconstruction error based on an ℓp-distance. This procedure, however, leads to large residuals whenever the reconstruction includes slight localization inaccuracies around edges. It also fails to reveal defective regions that have been visually altered when intensity values stay roughly consistent. We show that these problems prevent these approaches from being applied to complex real-world scenarios and that they cannot be easily avoided by employing more elaborate architectures such as variational or feature matching autoencoders. We propose to use a perceptual loss function based on structural similarity that examines inter-dependencies between local image regions, taking into account luminance, contrast, and structural information, instead of simply comparing single pixel values. It achieves significant performance gains on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics over state-of-the-art approaches for unsupervised defect segmentation that use per-pixel reconstruction errors.
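A minimal sketch of the structural similarity measure underlying the proposed loss, computed globally over two small patches (the actual method evaluates SSIM over local windows and uses it as a training loss); the patches and perturbations are synthetic.

```python
import numpy as np

def ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    # Structural similarity of two equally sized gray-scale patches,
    # combining luminance, contrast, and structure terms.
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return (((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

rng = np.random.default_rng(0)
patch = rng.random((11, 11))
shifted = np.roll(patch, 1, axis=1)       # slight localization error
brightened = np.clip(patch + 0.1, 0, 1)   # consistent intensity offset

print("shifted:   ", ssim(patch, shifted))
print("brightened:", ssim(patch, brightened))
```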
96a8de3d1c93835515bd8c76aa5257f41e6420cf
Cellulose: fascinating biopolymer and sustainable raw material.
As the most important skeletal component in plants, the polysaccharide cellulose is an almost inexhaustible polymeric raw material with fascinating structure and properties. Formed by the repeated connection of D-glucose building blocks, the highly functionalized, linear stiff-chain homopolymer is characterized by its hydrophilicity, chirality, biodegradability, broad chemical modifying capacity, and its formation of versatile semicrystalline fiber morphologies. In view of the considerable increase in interdisciplinary cellulose research and product development over the past decade worldwide, this paper assembles the current knowledge in the structure and chemistry of cellulose, and in the development of innovative cellulose esters and ethers for coatings, films, membranes, building materials, drilling techniques, pharmaceuticals, and foodstuffs. New frontiers, including environmentally friendly cellulose fiber technologies, bacterial cellulose biomaterials, and in-vitro syntheses of cellulose are highlighted together with future aims, strategies, and perspectives of cellulose research and its applications.
1a345b4ca7acb172c977f5a4623138ce83e485b1
Virtual Dermatologist: An application of 3D modeling to tele-healthcare
In this paper, we present preliminary results towards the development of the Virtual Dermatologist: A 3D image and tactile database for virtual examination of dermatology patients. This system, which can be installed and operated by non-dermatologists in remotes areas where access to a dermatologist is difficult, will enhance and broaden the application of tele-healthcare, and it will greatly facilitate the surveillance and consequent diagnosis of various skin diseases. Unlike other systems that monitor the progress of skin diseases using qualitative data on simple baseline (2D) photography, the proposed system will also allow for the quantitative assessment of the progress of the disease over time (e.g. thickness, size, roughness, etc). In fact, the 3D model created by the proposed system will let the dermatologist perform dermatoscopic-like examinations over specially annotated areas of the 3D model of the patient's body (i.e. higher definition areas of the 3D model). As part of its future development, the system will also allow the dermatologist to virtually touch and feel the lesion through a haptic interface. In its current form, the system can detect skin lesions smaller than 1mm, as we demonstrate in the result section.
dfa4765a2cd3e8910ef6e56f0b40e70b4881d56a
A tool-supported compliance process for software systems
Laws and regulations impact the design of software systems, as they may introduce additional requirements and possible conflicts with pre-existing requirements. We propose a systematic, tool-supported process for establishing compliance of a software system with a given law. The process elicits new requirements from the law, compares them with existing ones and manages conflicts, exploiting a set of heuristics, partially supported by a tool. We illustrate our proposal through an exploratory study using the Italian Privacy Law. We also present results of a preliminary empirical study that indicates that adoption of the process improves compliance analysis for a simple compliance scenario.
9fc1d0e4da751a09b49f5b0f7e61eb71d587c20f
Adapting microsoft SQL server for cloud computing
Cloud SQL Server is a relational database system designed to scale-out to cloud computing workloads. It uses Microsoft SQL Server as its core. To scale out, it uses a partitioned database on a shared-nothing system architecture. Transactions are constrained to execute on one partition, to avoid the need for two-phase commit. The database is replicated for high availability using a custom primary-copy replication scheme. It currently serves as the storage engine for Microsoft's Exchange Hosted Archive and SQL Azure.
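The constraint that a transaction executes on a single partition can be illustrated with a toy hash-partitioned key/value store; this is not Microsoft's implementation, only a sketch of why single-partition transactions avoid two-phase commit.

```python
class PartitionedStore:
    # Toy shared-nothing store: each partition holds its own key/value data and
    # a transaction may only touch keys that hash to the same partition,
    # which avoids the need for two-phase commit across partitions.
    def __init__(self, n_partitions=4):
        self.partitions = [dict() for _ in range(n_partitions)]

    def partition_of(self, key):
        return hash(key) % len(self.partitions)

    def execute(self, keys, transaction):
        owners = {self.partition_of(k) for k in keys}
        if len(owners) != 1:
            raise ValueError("transaction spans multiple partitions; not allowed")
        transaction(self.partitions[owners.pop()])

store = PartitionedStore()

def credit_account(partition):
    partition["balance:alice"] = partition.get("balance:alice", 0) + 100

store.execute(["balance:alice"], credit_account)
print(store.partitions)
```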
0e410a7baeae7f1c8676a6c72898650d1f144ba5
An end-to-end approach to host mobility
We present the design and implementation of an end-to-end architecture for Internet host mobility using dynamic updates to the Domain Name System (DNS) to track host location. Existing TCP connections are retained using secure and efficient connection migration, enabling established connections to seamlessly negotiate a change in endpoint IP addresses without the need for a third party. Our architecture is secure—name updates are effected via the secure DNS update protocol, while TCP connection migration uses a novel set of Migrate options—and provides a pure end-system alternative to routing-based approaches such as Mobile IP. Mobile IP was designed under the principle that fixed Internet hosts and applications were to remain unmodified and only the underlying IP substrate should change. Our architecture requires no changes to the unicast IP substrate, instead modifying transport protocols and applications at the end hosts. We argue that this is not a hindrance to deployment; rather, in a significant number of cases, it allows for an easier deployment path than Mobile IP, while simultaneously giving better performance. We compare and contrast the strengths of end-to-end and network-layer mobility schemes, and argue that end-to-end schemes are better suited to many common mobile applications. Our performance experiments show that hand-off times are governed by TCP migrate latencies, and are on the order of a round-trip time of the communicating peers.
00da506d8b50ba47313feb642c0caef2352080bd
Ocular Pain and Impending Blindness During Facial Cosmetic Injections: Is Your Office Prepared?
Soft tissue filler injections are the second most common non-surgical procedure performed by the plastic surgeon. Embolization of intravascular material after facial injection is a rare but terrifying outcome due to the high likelihood of long-term sequelae such as blindness and cerebrovascular accident. The literature is replete with examples of permanent blindness caused by injection with autologous fat, soft tissue fillers such as hyaluronic acid, PLLA, calcium hydroxyl-apatite, and even corticosteroid suspensions. However, missing from the discussion is an effective treatment algorithm that can be quickly and safely followed by injecting physicians in the case of an intravascular injection with impending blindness. In this report, we present the case of a 64-year-old woman who suffered from blindness and hemiparesis after facial cosmetic injections performed by a family physician. We use this case to create awareness that this complication has become more common as the number of injectors and patients seeking these treatments has increased exponentially over the past few years. We share our experience with the incorporation of a “blindness safety kit” in each of our offices to promptly initiate treatment in someone with embolization and impending blindness. The kit contains a step-by-step protocol to follow in the event of arterial embolization of filler material associated with ocular pain and impending loss of vision.
577f5fcadbb97d73c1a41a4fcb17873ad959319c
CATS: Collection and Analysis of Tweets Made Simple
Twitter presents an unparalleled opportunity for researchers from various fields to gather valuable and genuine textual data from millions of people. However, the collection process, as well as the analysis of these data, requires different kinds of skills (e.g. programming, data mining), which can be an obstacle for people who do not have this background. In this paper we present CATS, an open source, scalable, Web application designed to support researchers who want to carry out studies based on tweets. The purpose of CATS is twofold: (i) allow people to collect tweets, and (ii) enable them to analyze these tweets thanks to efficient tools (e.g. event detection, named-entity recognition, topic modeling, word-clouds). What is more, CATS relies on a distributed implementation which can deal with massive data streams.
c7af905452b3d70a7da377c2e31ccf364e8dbed8
Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation
In optimization, multiple objectives and constraints cannot be handled independently of the underlying optimizer. Requirements such as continuity and differentiability of the cost surface add yet another conflicting element to the decision process. While "better" solutions should be rated higher than "worse" ones, the resulting cost landscape must also comply with such requirements. Evolutionary algorithms (EAs), which have found application in many areas not amenable to optimization by other methods, possess many characteristics desirable in a multiobjective optimizer, most notably the concerted handling of multiple candidate solutions. However, EAs are essentially unconstrained search techniques which require the assignment of a scalar measure of quality, or fitness, to such candidate solutions. After reviewing current evolutionary approaches to multiobjective and constrained optimization, the paper proposes that fitness assignment be interpreted as, or at least related to, a multicriterion decision process. A suitable decision-making framework based on goals and priorities is subsequently formulated in terms of a relational operator, characterized, and shown to encompass a number of simpler decision strategies. Finally, the ranking of an arbitrary number of candidates is considered. The effect of preference changes on the cost surface seen by an EA is illustrated graphically for a simple problem. The paper concludes with the formulation of a multiobjective genetic algorithm based on the proposed decision strategy. Niche formation techniques are used to promote diversity among preferable candidates, and progressive articulation of preferences is shown to be possible as long as the genetic algorithm can recover from abrupt changes in the cost landscape.
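As a small illustration of ranking multiple candidate solutions without collapsing the objectives into a single scalar fitness, the sketch below computes a plain Pareto ranking (the number of candidates that dominate each solution); the goal- and priority-based relational operator proposed in the paper generalizes this kind of comparison.

```python
def dominates(a, b):
    # Candidate a dominates b if it is no worse in every objective
    # and strictly better in at least one (minimization assumed).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(population):
    # Rank each candidate by the number of candidates that dominate it;
    # rank 0 marks the current Pareto-optimal front.
    return [sum(dominates(other, cand) for other in population)
            for cand in population]

# Objective vectors (cost_1, cost_2) for a few candidate solutions.
population = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_rank(population))   # [0, 0, 0, 1, 4]
```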