Dataset schema: _id (string, length 40), title (string, 8 to 300 characters), text (string, 0 to 10k characters).
bf003bb2d52304fea114d824bc0bf7bfbc7c3106
Dissecting social engineering
10466df2b511239674d8487101229193c011a657
The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems
Trusted people can fail to be trustworthy when it comes to protecting their aperture of access to secure computer systems due to inadequate education, negligence, and various social pressures. People are often the weakest link in an otherwise secure computer system and, consequently, are targeted for social engineering attacks. Social Engineering is a technique used by hackers or other attackers to gain access to information technology systems by getting the needed information (for example, a username and password) from a person rather than breaking into the system through electronic or algorithmic hacking methods. Such attacks can occur on both a physical and psychological level. The physical setting for these attacks occurs where a victim feels secure: often the workplace, the phone, the trash, and even on-line. Psychology is often used to create a rushed or officious ambiance that helps the social engineer to cajole information about accessing the system from an employee. Data privacy legislation in the United States and other countries that imposes privacy standards and fines for negligent or willful non-compliance increases the urgency to measure the trustworthiness of people and systems. One metric for determining compliance is to simulate, by audit, a social engineering attack upon an organization required to follow data privacy standards. Such an organization commits to protect the confidentiality of personal data with which it is entrusted. This paper presents the results of an approved social engineering audit made without notice within an organization where data security is a concern. Areas emphasized include experiences between the Social Engineer and the audited users, techniques used by the Social Engineer, and other findings from the audit. Possible steps to mitigate exposure to the dangers of Social Engineering through improved user education are reviewed.
24b4076e2f58325f5d86ba1ca1f00b08a56fb682
Ontologies: principles, methods and applications
This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools, and techniques are a major barrier to effective communication among people, organisations, and/or software systems. We show how the development and implementation of an explicit account of a shared understanding (i.e. an 'ontology') in a given subject area can improve such communication, which in turn can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purposes they serve. We outline a methodology for developing and evaluating ontologies, first discussing informal techniques, concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions. We then consider the benefits of, and describe, a more formal approach. We re-visit the scoping phase, and discuss the role of formal languages and techniques in the specification, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging field, considering various case studies, software tools for ontology development, key research issues and future prospects.
90b16f97715a18a52b6a00b69411083bdb0460a0
Highly Sensitive, Flexible, and Wearable Pressure Sensor Based on a Giant Piezocapacitive Effect of Three-Dimensional Microporous Elastomeric Dielectric Layer.
We report a flexible and wearable pressure sensor based on the giant piezocapacitive effect of a three-dimensional (3-D) microporous dielectric elastomer, which is capable of highly sensitive and stable pressure sensing over a large tactile pressure range. Due to the presence of micropores within the elastomeric dielectric layer, our piezocapacitive pressure sensor is highly deformable by even very small amounts of pressure, leading to a dramatic increase in its sensitivity. Moreover, the gradual closure of micropores under compression increases the effective dielectric constant, thereby further enhancing the sensitivity of the sensor. The 3-D microporous dielectric layer with serially stacked springs of elastomer bridges can cover a much wider pressure range than those of previously reported micro-/nanostructured sensing materials. We also investigate the applicability of our sensor to wearable pressure-sensing devices as an electronic pressure-sensing skin in robotic fingers as well as a bandage-type pressure-sensing device for pulse monitoring at the human wrist. Finally, we demonstrate a pressure sensor array pad for the recognition of spatially distributed pressure information on a plane. Our sensor, with its excellent pressure-sensing performance, marks the realization of a true tactile pressure sensor presenting highly sensitive responses to the entire tactile pressure range, from ultralow-force detection to high weights generated by human activity.
8f24560a66651fdb94eef61339527004fda8283b
Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning
Humans are able to understand and perform complex tasks by strategically structuring the tasks into incremental steps or subgoals. For a robot attempting to learn to perform a sequential task with critical subgoal states, such states can provide a natural opportunity for interaction with a human expert. This paper analyzes the benefit of incorporating a notion of subgoals into Inverse Reinforcement Learning (IRL) with a Human-In-The-Loop (HITL) framework. The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states. These subgoal states define a set of subtasks for the learning agent to complete in order to achieve the final goal. The learning agent queries for partial demonstrations corresponding to each subtask as needed when the agent struggles with the subtask. The proposed Human Interactive IRL (HI-IRL) framework is evaluated on several discrete path-planning tasks. We demonstrate that subgoal-based interactive structuring of the learning task results in significantly more efficient learning, requiring only a fraction of the demonstration data needed for learning the underlying reward function with the baseline IRL model.
747a58918524d15aca29885af3e1bc87313eb312
A step toward irrationality: using emotion to change belief
Emotions have a powerful impact on behavior and beliefs. The goal of our research is to create general computational models of this interplay of emotion, cognition and behavior to inform the design of virtual humans. Here, we address an aspect of emotional behavior that has been studied extensively in the psychological literature but largely ignored by computational approaches, emotion-focused coping. Rather than motivating external action, emotion-focused coping strategies alter beliefs in response to strong emotions. For example an individual may alter beliefs about the importance of a goal that is being threatened, thereby reducing their distress. We present a preliminary model of emotion-focused coping and discuss how coping processes, in general, can be coupled to emotions and behavior. The approach is illustrated within a virtual reality training environment where the models are used to create virtual human characters in high-stress social situations.
332648a09d6ded93926829dbd81ac9dddf31d5b9
Perching and takeoff of a robotic insect on overhangs using switchable electrostatic adhesion
For aerial robots, maintaining a high vantage point for an extended time is crucial in many applications. However, available on-board power and mechanical fatigue constrain their flight time, especially for smaller, battery-powered aircraft. Perching on elevated structures is a biologically inspired approach to overcome these limitations. Previous perching robots have required specific material properties for the landing sites, such as surface asperities for spines, or ferromagnetism. We describe a switchable electroadhesive that enables controlled perching and detachment on nearly any material while requiring approximately three orders of magnitude less power than required to sustain flight. These electroadhesives are designed, characterized, and used to demonstrate a flying robotic insect able to robustly perch on a wide range of materials, including glass, wood, and a natural leaf.
1facb3308307312789e1db7f0a0904ac9c9e7179
Key parameters influencing the behavior of Steel Plate Shear Walls (SPSW)
The complex behavior of Steel Plate Shear Walls (SPSW) is investigated herein through nonlinear FE simulations. A 3D detailed FE model is developed and validated utilizing experimental results available in the literature. The influence of key parameters on the structural behavior is investigated. The considered parameters are: the infill plate thickness, the beam size, the column size, the infill plate material grade and the frame material grade. Several structural responses are used as criteria to quantify their influence on the SPSW behavior. The evaluated structural responses are: yield strength, yield displacement, ultimate strength, initial stiffness and secondary stiffness. The results show that, overall the most influential parameter is the infill plate thickness followed by the beam size. Also, it was found that the least influential parameter is the frame material grade.
236f183be06d824122da59ffb79e501d1a537486
Design for Reliability of Low-voltage, Switched-capacitor Circuits
Analog, switched-capacitor circuits play a critical role in mixed-signal, analog-to-digital interfaces. They implement a large class of functions, such as sampling, filtering, and digitization. Furthermore, their implementation makes them suitable for integration with complex, digital-signal-processing blocks in a compatible, low-cost technology, particularly CMOS. Even as an increasingly larger amount of signal processing is done in the digital domain, this critical, analog-to-digital interface is fundamentally necessary. Examples of some integrated applications include camcorders, wireless LAN transceivers, digital set-top boxes, and others. Advances in CMOS technology, however, are driving the operating voltage of integrated circuits increasingly lower. As device dimensions shrink, the applied voltages will need to be proportionately scaled in order to guarantee long-term reliability and manage power density. The reliability constraints of the technology dictate that the analog circuitry operate at the same low voltage as the digital circuitry. Furthermore, in achieving low-voltage operation, the reliability constraints of the technology must not be violated. This work examines the voltage limitations of CMOS technology and how analog circuits can maximize the utility of MOS devices without degrading reliability.
83834cd33996ed0b00e3e0fca3cda413d7ed79ff
DWCMM: The Data Warehouse Capability Maturity Model
Data Warehouses and Business Intelligence have become popular fields of research in recent years. Unfortunately, in daily practice many Data Warehouse and Business Intelligence solutions still fail to help organizations make better decisions and increase their profitability, due to opaque complexities and project interdependencies. In addition, emerging application domains such as Mobile Learning & Analytics heavily depend on a well-structured data foundation with a longitudinally prepared architecture. Therefore, this research presents the Data Warehouse Capability Maturity Model (DWCMM) which encompasses both technical and organizational aspects involved in developing a Data Warehouse environment. The DWCMM can be used to help organizations assess their current Data Warehouse solution and provide them with guidelines for future improvements. The DWCMM consists of a maturity matrix and a maturity assessment questionnaire with 60 questions. The DWCMM has been evaluated empirically through expert interviews and case studies. We conclude that the DWCMM can be successfully applied in practice and that organizations can intelligibly utilize the DWCMM as a quick-scan instrument to jumpstart their Data Warehouse and Business Intelligence improvement processes.
89a9ad85d8343a622aaa8c072beacaf8df1f0464
Multiple-resonator-based bandpass filters
This article describes a class of recently developed multiple-mode-resonator-based bandpass filters for ultra-wide-band (UWB) transmission systems. These filters have many attractive features, including a simple design, compact size, low loss and good linearity in the UWB, enhanced out-of-band rejection, and easy integration with other circuits/antennas. In this article, we present a variety of multiple-mode resonators with stepped-impedance or stub-loaded nonuniform configurations and analyze their properties based on the transmission line theory. Along with the frequency dispersion of parallel-coupled transmission lines, we design and implement various filter structures on planar, uniplanar, and hybrid transmission line geometries.
1ca75a68d6769df095ac3864d86bca21e9650985
Enhanced ARP: preventing ARP poisoning-based man-in-the-middle attacks
In this letter, an enhanced version of Address Resolution Protocol (ARP) is proposed to prevent ARP poisoning-based Man-in-the-Middle (MITM) attacks. The proposed mechanism is based on the following concept. When a node knows the correct Media Access Control (MAC) address for a given IP address, if it retains the IP/MAC address mapping while that machine is alive, then an MITM attack is impossible for that IP address. In order to prevent MITM attacks even for a new IP address, a voting-based resolution mechanism is proposed. The proposed scheme is backward compatible with existing ARP and incrementally deployable.
9c13e54760455a50482cda070c70448ecf30d68c
Time series classification with ensembles of elastic distance measures
Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move–split–merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based in the time domain. Nearly all TSC papers in the data mining literature cite DTW (with warping window set through cross validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area.
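A minimal sketch of the ensemble idea described above, not the authors' exact Elastic Ensemble: several 1-NN classifiers, each built on a different distance measure, vote with weights taken from their leave-one-out training accuracy. The distances, data and weighting rule here are illustrative simplifications.

```python
# Toy ensemble of 1-NN time series classifiers over different distance measures.
import numpy as np

def dtw(a, b):
    """Full-window dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def euclidean(a, b):
    return np.linalg.norm(a - b)

def ddtw(a, b):
    """DTW on first differences, a crude stand-in for derivative DTW."""
    return dtw(np.diff(a), np.diff(b))

def nn_predict(x, X_train, y_train, dist):
    d = [dist(x, xt) for xt in X_train]
    return y_train[int(np.argmin(d))]

def loo_accuracy(X, y, dist):
    """Leave-one-out 1-NN accuracy, used here as the voting weight."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += nn_predict(X[i], X[mask], y[mask], dist) == y[i]
    return hits / len(X)

def ensemble_predict(x, X_train, y_train, measures):
    votes = {}
    for dist in measures:
        w = loo_accuracy(X_train, y_train, dist)
        label = nn_predict(x, X_train, y_train, dist)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# toy usage with random series
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 30))
y_train = np.array([0] * 10 + [1] * 10)
print(ensemble_predict(rng.normal(size=30), X_train, y_train,
                       [euclidean, dtw, ddtw]))
```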
8c76872375aa79acb26871c93da76d90dfb0a950
Recovering punctuation marks for automatic speech recognition
This paper shows results of recovering punctuation over speech transcriptions for a Portuguese broadcast news corpus. The approach is based on maximum entropy models and uses word, part-of-speech, time and speaker information. The contribution of each type of feature is analyzed individually. Separate results for each focus condition are given, making it possible to analyze the differences of performance between planned and spontaneous speech.
7e2eb3402ea7eacf182bccc3f8bb685636098d2c
Optical character recognition of the Orthodox Hellenic Byzantine Music notation
In this paper we present, for the first time, the development of a new system for the off-line optical recognition of the characters used in the Orthodox Hellenic Byzantine Music Notation that has been established since 1814. We describe the structure of the new system and propose algorithms for the recognition of the 71 distinct character classes, based on Wavelets, 4-projections and other structural and statistical features. Using a Nearest Neighbor classifier, combined with a post-classification schema and a tree-structured classification philosophy, an accuracy of 99.4% was achieved, in a database of about 18,000 Byzantine character patterns that have been developed for the needs of the system. Keywords: optical music recognition, off-line character recognition, Byzantine music, Byzantine music notation, wavelets, projections, neural networks, contour processing, nearest neighbor classifier, Byzantine music database.
26d4ab9b60b91bb610202b58fa1766951fedb9e9
DRAW: A Recurrent Neural Network For Image Generation
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
4805aee558489b5413ce5434737043148537f62f
A Comparison of Features for Android Malware Detection
With the increase in mobile device use, there is a greater need for increasingly sophisticated malware detection algorithms. The research presented in this paper examines two types of features of Android applications, permission requests and system calls, as a way to detect malware. We are able to differentiate between benign and malicious apps by applying a machine learning algorithm. The model that is presented here achieved a classification accuracy of around 80% using permissions and 60% using system calls for a relatively small dataset. In the future, different machine learning algorithms will be examined to see if there is a more suitable algorithm. More features will also be taken into account and the training set will be expanded.
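A hedged sketch of the feature setup the abstract describes: each app is encoded as a binary vector over requested permissions (system-call counts could be appended in the same way) and a standard classifier separates benign from malicious apps. The apps, labels and the choice of random forest are illustrative, not the paper's dataset or algorithm.

```python
# Permission requests as binary features for a toy malware classifier.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

apps = [
    {"android.permission.INTERNET": 1, "android.permission.SEND_SMS": 1},
    {"android.permission.INTERNET": 1},
    {"android.permission.READ_CONTACTS": 1, "android.permission.SEND_SMS": 1},
    {"android.permission.INTERNET": 1, "android.permission.CAMERA": 1},
]
labels = [1, 0, 1, 0]  # 1 = malicious, 0 = benign (made-up toy labels)

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(apps)          # one binary column per permission
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

new_app = {"android.permission.SEND_SMS": 1}
print(clf.predict(vec.transform([new_app])))
```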
608ec914e356ff5e5782c908016958bf650a946f
CogALex-V Shared Task: GHHH - Detecting Semantic Relations via Word Embeddings
This paper describes our system submission to the CogALex-2016 Shared Task on Corpus-Based Identification of Semantic Relations. Our system won first place for Task-1 and second place for Task-2. The evaluation results of our system on the test set is 88.1% (79.0% for TRUE only) f-measure for Task-1 on detecting semantic similarity, and 76.0% (42.3% when excluding RANDOM) for Task-2 on identifying fine-grained semantic relations. In our experiments, we try word analogy, linear regression, and multi-task Convolutional Neural Networks (CNNs) with word embeddings from publicly available word vectors. We found that linear regression performs better in the binary classification (Task-1), while CNNs have better performance in the multi-class semantic classification (Task-2). We assume that word analogy is more suited for deterministic answers rather than handling the ambiguity of one-to-many and many-to-many relationships. We also show that classifier performance could benefit from balancing the distribution of labels in the training data.
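A small sketch of the pair-classification idea used for Task-1: a word pair is represented by concatenating the two word vectors and a linear model predicts whether the pair is semantically related. The placeholder embeddings are random, and logistic regression stands in here for the paper's linear-regression setup.

```python
# Word-pair relatedness from concatenated embeddings with a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = ["car", "automobile", "dog", "banana"]
emb = {w: rng.normal(size=50) for w in vocab}   # placeholder word vectors

pairs = [("car", "automobile"), ("dog", "banana")]
y = [1, 0]                                      # 1 = related, 0 = random pair

X = np.array([np.concatenate([emb[a], emb[b]]) for a, b in pairs])
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X))
```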
5ecbb84d51e2a23dadd496d7c6ab10cf277d4452
A 5-DOF rotation-symmetric parallel manipulator with one unconstrained tool rotation
This paper introduces a novel 5-DOF parallel manipulator with a rotation-symmetric arm system. The manipulator is unorthodox since one degree of freedom of its manipulated platform is unconstrained. Such a manipulator is still useful in a wide range of applications utilizing a rotation-symmetric tool. The manipulator workspace is analyzed for singularities and collisions. The rotation-symmetric arm system leads to a large positional workspace in relation to the footprint of the manipulator. With careful choice of structural parameters, the rotational workspace of the tool is also sizeable.
c79ddcef4bdf56c5467143b32e53b23825c17eff
A Framework based on SDN and Containers for Dynamic Service Chains on IoT Gateways
In this paper, we describe a new approach for managing service function chains in scenarios where data from Internet of Things (IoT) devices is partially processed at the network edge. Our framework is enabled by two emerging technologies, Software-Defined Networking (SDN) and container-based virtualization, which ensure several benefits in terms of flexibility, easy programmability, and versatility. These features are well suited to the increasingly stringent requirements of IoT applications, and allow a dynamic and automated network service chaining. An extensive performance evaluation, which has been carried out by means of a testbed, seeks to understand how our proposed framework performs in terms of computational overhead, network bandwidth, and energy consumption. By accounting for the constraints of typical IoT gateways, our evaluation tries to shed light on the actual deployability of the framework on low-power nodes.
489555f05e316015d24d2a1fdd9663d4b85eb60f
Diagnostic Accuracy of Clinical Tests for Morton's Neuroma Compared With Ultrasonography.
The aim of the present study was to assess the diagnostic accuracy of 7 clinical tests for Morton's neuroma (MN) compared with ultrasonography (US). Forty patients (54 feet) were diagnosed with MN using predetermined clinical criteria. These patients were subsequently referred for US, which was performed by a single, experienced musculoskeletal radiologist. The clinical test results were compared against the US findings. MN was confirmed on US at the site of clinical diagnosis in 53 feet (98%). The operational characteristics of the clinical tests performed were as follows: thumb index finger squeeze (96% sensitivity, 96% accuracy), Mulder's click (61% sensitivity, 62% accuracy), foot squeeze (41% sensitivity, 41% accuracy), plantar percussion (37% sensitivity, 36% accuracy), dorsal percussion (33% sensitivity, 26% accuracy), and light touch and pin prick (26% sensitivity, 25% accuracy). No correlation was found between the size of MN on US and the positive clinical tests, except for Mulder's click. The size of MN was significantly larger in patients with a positive Mulder's click (10.9 versus 8.5 mm, p = .016). The clinical assessment was comparable to US in diagnosing MN. The thumb index finger squeeze test was the most sensitive screening test for the clinical diagnosis of MN.
206723950b10580ced733cbacbfc23c85b268e13
Why Lurkers Lurk
The goal of this paper is to address the question: ‘why do lurkers lurk?’ Lurkers reportedly make up the majority of members in online groups, yet little is known about them. Without insight into lurkers, our understanding of online groups is incomplete. Ignoring, dismissing, or misunderstanding lurking distorts knowledge of life online and may lead to inappropriate design of online environments. To investigate lurking, the authors carried out a study of lurking using in-depth, semi-structured interviews with ten members of online groups. 79 reasons for lurking and seven lurkers’ needs are identified from the interview transcripts. The analysis reveals that lurking is a strategic activity involving more than just reading posts. Reasons for lurking are categorized and a gratification model is proposed to explain lurker behavior.
22d185c7ba066468f9ff1df03f1910831076e943
Learning Better Embeddings for Rare Words Using Distributional Representations
There are two main types of word representations: low-dimensional embeddings and high-dimensional distributional vectors, in which each dimension corresponds to a context word. In this paper, we initialize an embedding-learning model with distributional vectors. Evaluation on word similarity shows that this initialization significantly increases the quality of embeddings for rare words.
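A sketch of the initialization idea, under the assumption that the distributional vectors are plain co-occurrence counts reduced by SVD: the reduced vectors seed the embedding matrix instead of a random start. The corpus, window size and dimensionality are illustrative.

```python
# Initialize an embedding matrix from reduced distributional (co-occurrence) vectors.
import numpy as np

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "rare", "word"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# symmetric co-occurrence counts within a +/-1 word window
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

dim = 4
U, S, _ = np.linalg.svd(C)
init_embeddings = U[:, :dim] * S[:dim]   # |V| x dim matrix used as the starting point
print(init_embeddings.shape)
```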
f1b3400e49a929d9f5bd1b15081a13120abc3906
Text comparison using word vector representations and dimensionality reduction
This paper describes a technique to compare large text sources using word vector representations (word2vec) and dimensionality reduction (tSNE) and how it can be implemented using Python. The technique provides a bird’s-eye view of text sources, e.g. text summaries and their source material, and enables users to explore text sources like a geographical map. Word vector representations capture many linguistic properties such as gender, tense, plurality and even semantic concepts like "capital city of". Using dimensionality reduction, a 2D map can be computed where semantically similar words are close to each other. The technique uses the word2vec model from the gensim Python library and t-SNE from scikit-learn.
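A minimal version of the pipeline the abstract names, using the gensim 4.x Word2Vec API and scikit-learn's TSNE; the tiny corpus and all parameter values are placeholders rather than the settings used in the paper.

```python
# word2vec vectors projected to a 2-D map with t-SNE.
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

sentences = [["the", "king", "rules", "the", "country"],
             ["the", "queen", "rules", "the", "country"],
             ["cats", "and", "dogs", "are", "animals"]]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

words = list(model.wv.index_to_key)
vectors = model.wv[words]                      # |V| x 50 matrix of word vectors

coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(vectors)
for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```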
e49d662652885e9b71622713838c840cca9d33ed
Engineering Quality and Reliability in Technology-Assisted Review
The objective of technology-assisted review ("TAR") is to find as much relevant information as possible with reasonable effort. Quality is a measure of the extent to which a TAR method achieves this objective, while reliability is a measure of how consistently it achieves an acceptable result. We are concerned with how to define, measure, and achieve high quality and high reliability in TAR. When quality is defined using the traditional goal-post method of specifying a minimum acceptable recall threshold, the quality and reliability of a TAR method are both, by definition, equal to the probability of achieving the threshold. Assuming this definition of quality and reliability, we show how to augment any TAR method to achieve guaranteed reliability, for a quantifiable level of additional review effort. We demonstrate this result by augmenting the TAR method supplied as the baseline model implementation for the TREC 2015 Total Recall Track, measuring reliability and effort for 555 topics from eight test collections. While our empirical results corroborate our claim of guaranteed reliability, we observe that the augmentation strategy may entail disproportionate effort, especially when the number of relevant documents is low. To address this limitation, we propose stopping criteria for the model implementation that may be applied with no additional review effort, while achieving empirical reliability that compares favorably to the provably reliable method. We further argue that optimizing reliability according to the traditional goal-post method is inconsistent with certain subjective aspects of quality, and that optimizing a Taguchi quality loss function may be more apt.
e75cb14344eaeec987aa571d0009d0e02ec48a63
Design of Highly Integrated Mechatronic Gear Selector Levers for Automotive Shift-by-Wire Systems
Increased requirements regarding ergonomic comfort, limited space, weight reduction, and electronic automation of functions and safety features are on the rise for future automotive gear levers. At the same time, current mechanical gear levers have restrictions to achieve this. In this paper, we present a monostable, miniaturized mechatronic gear lever to fulfill these requirements for automotive applications. This solution describes a gear lever for positioning in the center console of a car to achieve optimal ergonomics for dynamic driving, which enables both automatic and manual gear switching. In this paper, we describe the sensor and actuator concept, safety concept, recommended shift pattern, mechanical design, and the electronic integration of this shift-by-wire system in a typical automotive bus communication network. The main contribution of this paper is a successful system design and the integration of a mechatronic system in new applications for optimizing the human-machine interface inside road vehicles.
66c410a2567e96dcff135bf6582cb26c9df765c4
Batch Identification Game Model for Invalid Signatures in Wireless Mobile Networks
Secure access is one of the fundamental problems in wireless mobile networks. Digital signature is a widely used technique to protect messages’ authenticity and nodes’ identities. From the practical perspective, to ensure the quality of services in wireless mobile networks, ideally the process of signature verification should introduce minimum delay. Batch cryptography technique is a powerful tool to reduce verification time. However, most of the existing works focus on designing batch verification algorithms for wireless mobile networks without sufficiently considering the impact of invalid signatures, which can lead to verification failures and performance degradation. In this paper, we propose a Batch Identification Game Model (BIGM) in wireless mobile networks, enabling nodes to find invalid signatures with reasonable delay no matter whether the game scenario is complete information or incomplete information. Specifically, we analyze and prove the existence of Nash Equilibriums (NEs) in both scenarios, to select the dominant algorithm for identifying invalid signatures. To optimize the identification algorithm selection, we propose a self-adaptive auto-match protocol which estimates the strategies and states of attackers based on historical information. Comprehensive simulation results in terms of NE reasonability, algorithm selection accuracy, and identification delay are provided to demonstrate that BIGM can identify invalid signatures more efficiently than existing algorithms.
9a59a3719bf08105d4632898ee178bd982da2204
International Journal of Advanced Robotic Systems Design of a Control System for an Autonomous Vehicle Based on Adaptive-PID Regular Paper
The autonomous vehicle is a mobile robot integrating multi‐sensor navigation and positioning, intelligent decision making and control technology. This paper presents the control system architecture of the autonomous vehicle, called “Intelligent Pioneer”, and discusses the path tracking and stability of motion needed to navigate effectively in unknown environments. In this approach, a two degree‐of‐freedom dynamic model is developed to formulate the path‐tracking problem in state space format. For controlling the instantaneous path error, traditional controllers have difficulty in guaranteeing performance and stability over a wide range of parameter changes and disturbances. Therefore, a newly developed adaptive‐PID controller is used. This approach increases the flexibility of the vehicle control system and achieves significant advantages. Throughout, we provide examples and results from Intelligent Pioneer, which used this approach to compete in the 2010 and 2011 Future Challenge of China. Intelligent Pioneer finished all of the competition programmes and won first position in 2010 and third position in 2011.
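Since the abstract centers on an adaptive-PID controller, the following toy shows only the fixed-gain PID structure that such a controller builds on; the gains, the one-dimensional plant and the error signal are invented, and the adaptation law that would retune the gains online is deliberately omitted.

```python
# Fixed-gain PID loop for a scalar path-tracking error (toy plant).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
offset = 1.0                      # initial lateral path error in metres
for step in range(100):
    u = pid.update(0.0 - offset)  # error = reference - measurement
    offset += u * 0.05            # crude first-order plant response
print(round(offset, 3))           # error driven towards zero
```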
17ebe1eb19655543a6b876f91d41917488e70f55
Random synaptic feedback weights support error backpropagation for deep learning
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
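A small numpy sketch of the mechanism described above: in a two-layer network the hidden-layer error is computed through a fixed random matrix B instead of the transpose of the output weights, and learning still proceeds on a toy regression task. Network sizes, learning rate and the task are illustrative.

```python
# Feedback alignment: error propagated through fixed random weights B.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 1
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback weights

X = rng.normal(size=(200, n_in))
y = X[:, :1] * 2.0 - X[:, 1:2]                   # simple linear target

lr = 0.05
for epoch in range(500):
    h = np.tanh(X @ W1.T)                        # hidden activations
    y_hat = h @ W2.T                             # network output
    e = y_hat - y                                # output error
    # backpropagation would use W2.T here; feedback alignment uses the fixed B
    delta_h = (e @ B.T) * (1.0 - h ** 2)
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

print(float(np.mean((y_hat - y) ** 2)))          # loss after training
```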
57b199e1d22752c385c34191c1058bcabb850d9f
Getting Formal with Dopamine and Reward
Recent neurophysiological studies reveal that neurons in certain brain structures carry specific signals about past and future rewards. Dopamine neurons display a short-latency, phasic reward signal indicating the difference between actual and predicted rewards. The signal is useful for enhancing neuronal processing and learning behavioral reactions. It is distinctly different from dopamine's tonic enabling of numerous behavioral processes. Neurons in the striatum, frontal cortex, and amygdala also process reward information but provide more differentiated information for identifying and anticipating rewards and organizing goal-directed behavior. The different reward signals have complementary functions, and the optimal use of rewards in voluntary behavior would benefit from interactions between the signals. Addictive psychostimulant drugs may exert their action by amplifying the dopamine reward signal.
7592f8a1d4fa2703b75cad6833775da2ff72fe7b
Deep Big Multilayer Perceptrons for Digit Recognition
The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent advancement by others dates back 8 years (error rate 0.4%). Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark with a single MLP and 0.31% with a committee of seven MLPs. All we need to achieve this result, the best until 2011, are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.
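A scaled-down stand-in for the setup described above: a plain multi-layer perceptron trained with back-propagation, here on scikit-learn's small 8x8 digits set rather than full MNIST, and without the elastic image deformations or GPU training the paper relies on for its headline error rates.

```python
# Plain MLP trained with back-propagation on a small digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.2, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(500, 300, 100), max_iter=300,
                    random_state=0)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```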
9539a0c4f8766c08dbaf96561cf6f1f409f5d3f9
Feature-based attention influences motion processing gain in macaque visual cortex
Changes in neural responses based on spatial attention have been demonstrated in many areas of visual cortex, indicating that the neural correlate of attention is an enhanced response to stimuli at an attended location and reduced responses to stimuli elsewhere. Here we demonstrate non-spatial, feature-based attentional modulation of visual motion processing, and show that attention increases the gain of direction-selective neurons in visual cortical area MT without narrowing the direction-tuning curves. These findings place important constraints on the neural mechanisms of attention and we propose to unify the effects of spatial location, direction of motion and other features of the attended stimuli in a ‘feature similarity gain model’ of attention.
cbcd9f32b526397f88d18163875d04255e72137f
Gradient-based learning applied to document recognition
19e2ad92d0f6ad3a9c76e957a0463be9ac244203
Condition monitoring of helicopter drive shafts using quadratic-nonlinearity metric based on cross-bispectrum
Based on cross-bispectrum, quadratic-nonlinearity coupling between two vibration signals is proposed and used to assess health conditions of rotating shafts in an AH-64D helicopter tail rotor drive train. Vibration data are gathered from two bearings supporting the shaft in an experimental helicopter drive train simulating different shaft conditions, namely, baseline, misalignment, imbalance, and combination of misalignment and imbalance. The proposed metric shows better capabilities in distinguishing different shaft settings than the conventional linear coupling based on cross-power spectrum.
7b24aa024ca2037b097cfcb2ea73a60ab497b80e
Internet security architecture
Fear of security breaches has been a major reason for the business world's reluctance to embrace the Internet as a viable means of communication. A widely adopted solution consists of physically separating private networks from the rest of the Internet using firewalls. This paper discusses the current cryptographic security measures available for the Internet infrastructure as an alternative to physical segregation. First, the IPsec architecture, including security protocols in the Internet Layer and the related key management proposals, are introduced. The transport layer security protocol and security issues in the network control and management are then presented. The paper is addressed to readers with a basic understanding of common security mechanisms including encryption, authentication and key exchange techniques.
525dc4242b21df23ba4e1ec0748cf46de0e8f5c0
Client attachment, attachment to the therapist and client-therapist attachment match: how do they relate to change in psychodynamic psychotherapy?
OBJECTIVE We examined the associations between client attachment, client attachment to the therapist, and symptom change, as well as the effects of client-therapist attachment match on outcome. Clients (n = 67) and their therapists (n = 27) completed the ECR to assess attachment. METHOD Clients also completed the Client Attachment to Therapist scale three times (early, middle, and late sessions) and the OQ-45 at intake and four times over the course of a year of psychodynamic psychotherapy. RESULTS Clients characterized by avoidant attachment and by avoidant attachment to their therapist showed the least improvement. A low-avoidant client-therapist attachment match led to a greater decrease in symptom distress than when a low-avoidant therapist treated a high-avoidant client. CONCLUSIONS These findings suggest the importance of considering client-therapist attachment matching and the need to pay attention to the special challenges involved in treating avoidant clients in order to facilitate progress in psychotherapy.
476edaffb4e613303012e7321dd319ba23abd0c3
Prioritized multi-task compliance control of redundant manipulators
We propose a new approach for dynamic control of redundant manipulators to deal with multiple prioritized tasks at the same time by utilizing null space projection techniques. The compliance control law is based on a new representation of the dynamics wherein specific null space velocity coordinates are introduced. These make it possible to efficiently exploit the kinematic redundancy according to the task hierarchy and lead to a dynamics formulation with a block-diagonal inertia matrix. The compensation of velocity-dependent coupling terms between the tasks by an additional passive feedback action facilitates a stability analysis for the complete hierarchy based on semi-definite Lyapunov functions. No external forces have to be measured. Finally, the performance of the control approach is evaluated in experiments on a torque-controlled robot.
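The following is a kinematic-level toy of the task-priority idea behind the approach, not the paper's dynamics and compliance formulation: the primary task is solved with the pseudoinverse of its Jacobian and the secondary task acts only inside the null space of the first. The Jacobians and task velocities are random placeholders.

```python
# Prioritized redundancy resolution via null space projection (kinematic toy).
import numpy as np

rng = np.random.default_rng(0)
n = 7                                  # joints of a redundant arm
J1 = rng.normal(size=(3, n))           # primary-task Jacobian (e.g. position)
J2 = rng.normal(size=(2, n))           # secondary-task Jacobian
x1_dot = np.array([0.1, 0.0, -0.05])   # desired primary-task velocity
x2_dot = np.array([0.2, 0.1])          # desired secondary-task velocity

J1_pinv = np.linalg.pinv(J1)
N1 = np.eye(n) - J1_pinv @ J1          # null space projector of task 1

# secondary task resolved inside the null space of the primary task
q_dot = J1_pinv @ x1_dot + np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ J1_pinv @ x1_dot)

print(np.allclose(J1 @ q_dot, x1_dot))   # primary task is met exactly
```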
fdb1a478c6c566729a82424b3d6b37ca76c8b85e
The Concept of Flow
What constitutes a good life? Few questions are of more fundamental importance to a positive psychology. Flow research has yielded one answer, providing an understanding of experiences during which individuals are fully involved in the present moment. Viewed through the experiential lens of flow, a good life is one that is characterized by complete absorption in what one does. In this chapter, we describe the flow model of optimal experience and optimal development, explain how flow and related constructs have been measured, discuss recent work in this area, and identify some promising directions for future research.
7817db7b898a3458035174d914a7570d0b0efb7b
Corporate social responsibility and bank customer satisfaction A research agenda
Purpose – The purpose of this paper is to explore the relationship between corporate social responsibility (CSR) and customer outcomes. Design/methodology/approach – This paper reviews the literature on CSR effects and satisfaction, noting gaps in the literature. Findings – A series of propositions is put forward to guide future research endeavours. Research limitations/implications – By understanding the likely impact on customer satisfaction of CSR initiatives vis-à-vis customer-centric initiatives, the academic research community can assist managers to understand how to best allocate company resources in situations of low customer satisfaction. Such endeavours are managerially relevant and topical. Researchers seeking to test the propositions put forward in this paper would be able to gain links with, and possibly attract funding from, banks to conduct their research. Such endeavours may assist researchers to redefine the stakeholder view by placing customers at the centre of a network of stakeholders. Practical implications – An understanding of how to best allocate company resources to increase the proportion of satisfied customers will allow bank marketers to reduce customer churn and hence increase market share and profits. Originality/value – Researchers have not previously conducted a comparative analysis of the effects of different CSR initiatives on customer satisfaction, nor considered whether more customer-centric initiatives are likely to be more effective in increasing the proportion of satisfied customers.
60cc377d4d2b885594906d58bacb5732e8a04eb9
Essential Layers, Artifacts, and Dependencies of Enterprise Architecture
After a period where implementation speed was more important than integration, consistency and reduction of complexity, architectural considerations have become a key issue of information management in recent years again. Enterprise architecture is widely accepted as an essential mechanism for ensuring agility and consistency, compliance and efficiency. Although standards like TOGAF and FEAF have been developed, there is still no common agreement on which architecture layers, which artifact types and which dependencies constitute the essence of enterprise architecture. This paper contributes to the identification of essential elements of enterprise architecture by (1) specifying enterprise architecture as a hierarchical, multilevel system comprising aggregation hierarchies, architecture layers and views, (2) discussing enterprise architecture frameworks with regard to essential elements, (3) proposing interfacing requirements of enterprise architecture with other architecture models and (4) matching these findings with current enterprise architecture practice in several large companies.
46e0faacf50c8053d38fb3cf2da7fbbfb2932977
Agent-based control for decentralised demand side management in the smart grid
Central to the vision of the smart grid is the deployment of smart meters that will allow autonomous software agents, representing the consumers, to optimise their use of devices and heating in the smart home while interacting with the grid. However, without some form of coordination, the population of agents may end up with overly-homogeneous optimised consumption patterns that may generate significant peaks in demand in the grid. These peaks, in turn, reduce the efficiency of the overall system, increase carbon emissions, and may even, in the worst case, cause blackouts. Hence, in this paper, we introduce a novel model of a Decentralised Demand Side Management (DDSM) mechanism that allows agents, by adapting the deferment of their loads based on grid prices, to coordinate in a decentralised manner. Specifically, using average UK consumption profiles for 26M homes, we demonstrate that, through an emergent coordination of the agents, the peak demand of domestic consumers in the grid can be reduced by up to 17% and carbon emissions by up to 6%. We also show that our DDSM mechanism is robust to the increasing electrification of heating in UK homes (i.e., it exhibits a similar efficiency).
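A toy illustration of price-based load deferment, assuming a single deferrable load per agent and a fixed price signal with an evening peak; the random tie-breaking crudely stands in for the decentralised coordination mechanism the paper actually studies.

```python
# Toy comparison of uncoordinated peak-hour demand vs price-aware load deferment.
import random

random.seed(0)
HOURS = 24
price = [1.0 + (0.5 if 17 <= h <= 20 else 0.0) for h in range(HOURS)]  # evening peak

def choose_hour(prices):
    """Greedy rule: run the deferrable load in the cheapest hour, with random tie-breaking."""
    return min(range(HOURS), key=lambda h: prices[h] + random.random() * 0.01)

agents = 1000
baseline = [0.0] * HOURS
coordinated = [0.0] * HOURS
for _ in range(agents):
    baseline[random.randint(17, 20)] += 1.0     # everyone runs during the peak
    coordinated[choose_hour(price)] += 1.0      # price-aware deferment

print("peak demand, uncoordinated:", max(baseline))
print("peak demand, price-aware  :", max(coordinated))
```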
3d9e919a4de74089f94f5a1b2a167c66c19a241d
Maxillary length at 11-14 weeks of gestation in fetuses with trisomy 21.
OBJECTIVE To determine the value of measuring maxillary length at 11-14 weeks of gestation in screening for trisomy 21. METHODS In 970 fetuses ultrasound examination was carried out for measurement of crown-rump length (CRL), nuchal translucency and maxillary length, and to determine if the nasal bone was present or absent, immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In 60 cases the maxillary length was measured twice by the same operator to calculate the intraobserver variation in measurements. RESULTS The median gestation was 12 (range, 11-14) weeks. The maxilla was successfully examined in all cases. The mean difference between paired measurements of maxillary length was -0.012 mm and the 95% limits of agreement were -0.42 (95% CI, -0.47 to -0.37) to 0.40 (95% CI, 0.35 to 0.44) mm. The fetal karyotype was normal in 839 pregnancies and abnormal in 131, including 88 cases of trisomy 21. In the chromosomally normal group the maxillary length increased significantly with CRL from a mean of 4.8 mm at a CRL of 45 mm to 8.3 mm at a CRL of 84 mm. In the trisomy 21 fetuses the maxillary length was significantly shorter than normal by 0.7 mm and in the trisomy 21 fetuses with absent nasal bone the maxilla was shorter than in those with present nasal bone by 0.5 mm. In fetuses with other chromosomal defects there were no significant differences from normal in the maxillary length. CONCLUSION At 11-14 weeks of gestation, maxillary length in trisomy 21 fetuses is significantly shorter than in normal fetuses.
db9531c2677ab3eeaaf434ccb18ca354438560d6
From e-commerce to social commerce: A close look at design features
E-commerce is undergoing an evolution through the adoption of Web 2.0 capabilities to enhance customer participation and achieve greater economic value. This new phenomenon is commonly referred to as social commerce; however, it has not yet been fully understood. In addition to the lack of a stable and agreed-upon definition, there is little research on social commerce and no significant research dedicated to the design of social commerce platforms. This study offers a literature review to explain the concept of social commerce, tracks its nascent state-of-the-art, and discusses relevant design features as they relate to e-commerce and Web 2.0. We propose a new model and a set of principles for guiding social commerce design. We also apply the model and guidelines to two leading social commerce platforms, Amazon and Starbucks on Facebook. The findings indicate that, for any social commerce website, it is critical to achieve a minimum set of social commerce design features. These design features must cover all the layers of the proposed model, including the individual, conversation, community and commerce levels.
9d420ad78af7366384f77b29e62a93a0325ace77
A spectrogram-based audio fingerprinting system for content-based copy detection
This paper presents a novel audio fingerprinting method that is highly robust to a variety of audio distortions. It is based on an unconventional audio fingerprint generation scheme. The robustness is achieved by generating different versions of the spectrogram matrix of the audio signal by using a threshold based on the average of the spectral values to prune this matrix. We transform each version of this pruned spectrogram matrix into a 2-D binary image. Multiple versions of these 2-D images suppress noise to a varying degree. This varying degree of noise suppression improves likelihood of one of the images matching a reference image. To speed up matching, we convert each image into an n-dimensional vector, and perform a nearest neighbor search based on this n-dimensional vector. We give results with two different feature parameters and their combination. We test this method on TRECVID 2010 content-based copy detection evaluation dataset, and we validate the performance on TRECVID 2009 dataset also. Experimental results show the effectiveness of these features even when the audio is distorted. We compare the proposed method to two state-of-the-art audio copy detection systems, namely NN-based and Shazam systems. Our method by far outperforms Shazam system for all audio transformations (or distortions) in terms of detection performance, number of missed queries and localization accuracy. Compared to NN-based system, our approach reduces minimal Normalized Detection Cost Rate (min NDCR) by 23 % and improves localization accuracy by 24 %.
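A sketch of the fingerprint-generation step as described: compute a spectrogram, prune it with thresholds based on the average spectral value to obtain several binary images, and flatten each image into a vector for nearest-neighbour matching. The test signal, frame sizes and threshold factors are illustrative.

```python
# Spectrogram pruning into binary images and vector-based matching (toy).
import numpy as np
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 2.0, 1.0 / fs)
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

_, _, S = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)

def fingerprint(S, factor=1.0):
    """Binary image: 1 where the spectrogram exceeds factor * its average value."""
    return (S > factor * S.mean()).astype(np.uint8)

# several pruned versions, as in the multi-threshold idea above
images = [fingerprint(S, f) for f in (0.5, 1.0, 2.0)]
vectors = [img.ravel() for img in images]        # n-dimensional vectors to match

query = vectors[1]
dists = [np.count_nonzero(query != v) for v in vectors]  # Hamming-distance search
print(int(np.argmin(dists)))
```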
8c15753cbb921f1b0ce4cd09b83415152212dbef
More than Just Two Sexes: The Neural Correlates of Voice Gender Perception in Gender Dysphoria
Gender dysphoria (also known as "transsexualism") is characterized as a discrepancy between anatomical sex and gender identity. Research points towards neurobiological influences. Due to the sexually dimorphic characteristics of the human voice, voice gender perception provides a biologically relevant function, e.g. in the context of mating selection. There is evidence for a better recognition of voices of the opposite sex and a differentiation of the sexes in its underlying functional cerebral correlates, namely the prefrontal and middle temporal areas. This fMRI study investigated the neural correlates of voice gender perception in 32 male-to-female gender dysphoric individuals (MtFs) compared to 20 non-gender dysphoric men and 19 non-gender dysphoric women. Participants indicated the sex of 240 voice stimuli modified in semitone steps in the direction to the other gender. Compared to men and women, MtFs showed differences in a neural network including the medial prefrontal gyrus, the insula, and the precuneus when responding to male vs. female voices. With increased voice morphing men recruited more prefrontal areas compared to women and MtFs, while MtFs revealed a pattern more similar to women. On a behavioral and neuronal level, our results support the feeling of MtFs reporting they cannot identify with their assigned sex.
9281495c7ffc4d6d6e5305281c200f9b02ba70db
Security and compliance challenges in complex IT outsourcing arrangements: A multi-stakeholder perspective
Complex IT outsourcing arrangements promise numerous benefits such as increased cost predictability and reduced costs, higher flexibility and scalability upon demand. Organizations trying to realize these benefits, however, face several security and compliance challenges. In this article, we investigate the pressure to take action with respect to such challenges and discuss avenues toward promising responses. We collected perceptions on security and compliance challenges from multiple stakeholders by means of a series of interviews and an online survey, first, to analyze the current and future relevance of the challenges as well as potential adverse effects on organizational performance and, second, to discuss the nature and scope of potential responses. The survey participants confirmed the current and future relevance of the six challenges: auditing clouds, managing heterogeneity of services, coordinating involved parties, managing relationships between clients and vendors, localizing and migrating data, and coping with lack of security awareness. Additionally, they perceived these challenges as affecting organizational performance adversely in case they are not properly addressed. Responses in the form of organizational measures were considered more promising than technical ones concerning all challenges except localizing and migrating data, for which the opposite was true. Balancing relational and contractual governance as well as employing specific client and vendor capabilities is essential for the success of IT outsourcing arrangements, yet does not seem sufficient to overcome the investigated challenges. Innovations connecting the technical perspective of utility software with the business perspective of application software relevant for security and compliance management, however, nourish the hope that the benefits associated with complex IT outsourcing arrangements can be realized in the foreseeable future whilst addressing the security and compliance challenges.
919fa5c3a4f9c3c1c7ba407ccbac8ab72ba68566
Detection of high variability in gene expression from single-cell RNA-seq profiling
The advancement of the next-generation sequencing technology enables mapping gene expression at the single-cell level, capable of tracking cell heterogeneity and determination of cell subpopulations using single-cell RNA sequencing (scRNA-seq). Unlike the objectives of conventional RNA-seq where differential expression analysis is the integral component, the most important goal of scRNA-seq is to identify highly variable genes across a population of cells, to account for the discrete nature of single-cell gene expression and uniqueness of sequencing library preparation protocol for single-cell sequencing. However, there is lack of generic expression variation model for different scRNA-seq data sets. Hence, the objective of this study is to develop a gene expression variation model (GEVM), utilizing the relationship between coefficient of variation (CV) and average expression level to address the over-dispersion of single-cell data, and its corresponding statistical significance to quantify the variably expressed genes (VEGs). We have built a simulation framework that generated scRNA-seq data with different number of cells, model parameters, and variation levels. We implemented our GEVM and demonstrated the robustness by using a set of simulated scRNA-seq data under different conditions. We evaluated the regression robustness using root-mean-square error (RMSE) and assessed the parameter estimation process by varying initial model parameters that deviated from homogeneous cell population. We also applied the GEVM on real scRNA-seq data to test the performance under distinct cases. In this paper, we proposed a gene expression variation model that can be used to determine significant variably expressed genes. Applying the model to the simulated single-cell data, we observed robust parameter estimation under different conditions with minimal root mean square errors. We also examined the model on two distinct scRNA-seq data sets using different single-cell protocols and determined the VEGs. Obtaining VEGs allowed us to observe possible subpopulations, providing further evidences of cell heterogeneity. With the GEVM, we can easily find out significant variably expressed genes in different scRNA-seq data sets.
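A sketch of the kind of mean-variation relationship such a model builds on, assuming the common form CV^2 ~ a/mean + b: fit the trend across genes and flag genes whose observed CV^2 lies well above it. The simulated counts, the fitted form and the cutoff are illustrative and not the paper's exact GEVM or its significance test.

```python
# Fit a CV^2-vs-mean trend across genes and flag highly variable genes (toy).
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_cells = 500, 100
means = rng.gamma(shape=2.0, scale=5.0, size=n_genes)
counts = rng.poisson(means[:, None], size=(n_genes, n_cells)).astype(float)
# make a handful of genes genuinely over-dispersed
counts[:10] *= rng.gamma(shape=1.0, scale=1.0, size=(10, n_cells))

mu = counts.mean(axis=1)
cv2 = counts.var(axis=1) / np.maximum(mu, 1e-8) ** 2

# least-squares fit of cv2 = a / mu + b
A = np.column_stack([1.0 / mu, np.ones_like(mu)])
(a, b), *_ = np.linalg.lstsq(A, cv2, rcond=None)

excess = cv2 / (a / mu + b)
veg_idx = np.where(excess > 2.0)[0]              # crude "highly variable" flag
print(len(veg_idx), "candidate variably expressed genes")
```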
d4caec47eeabb2eca3ce9e39b1fae5424634c731
Design and control of underactuated tendon-driven mechanisms
Many robotic hands or prosthetic hands have been developed in the last several decades, and many use tendon-driven mechanisms for their transmissions. Robotic hands are now built with underactuated mechanisms, which have fewer actuators than degrees of freedom, to reduce mechanical complexity or to realize a biomimetic motion such as flexion of an index finger. The design is heuristic, and it is useful to develop design methods for the underactuated mechanisms. This paper classifies mechanisms driven by tendons into three classes and proposes a design method for them. Two of these classes are related to underactuated tendon-driven mechanisms, and they have been used without distinction so far. An index finger robot, which has four active tendons and two passive tendons, is developed and controlled with the proposed method.
8d6ca2dae1a6d1e71626be6167b9f25d2ce6dbcc
Semi-Supervised Learning with the Deep Rendering Mixture Model
Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning. Deep Convolutional Networks (DCNs) have achieved great success in supervised tasks and as such have been widely employed in semi-supervised learning. In this paper we leverage the recently developed Deep Rendering Mixture Model (DRMM), a probabilistic generative model that models latent nuisance variation, and whose inference algorithm yields DCNs. We develop an EM algorithm for the DRMM to learn from both labeled and unlabeled data. Guided by the theory of the DRMM, we introduce a novel non-negativity constraint and a variational inference term. We report state-of-the-art performance on MNIST and SVHN and competitive results on CIFAR10. We also probe deeper into how a DRMM trained in a semi-supervised setting represents latent nuisance variation using synthetically rendered images. Taken together, our work provides a unified framework for supervised, unsupervised, and semi-supervised learning.
9bfc34ca3d3dd17ecdcb092f2a056da6cb824acd
Visual analytics of spatial interaction patterns for pandemic decision support
Population mobility, i.e. the movement and contact of individuals across geographic space, is one of the essential factors that determine the course of a pandemic disease spread. This research views both individual-based daily activities and a pandemic spread as spatial interaction problems, where locations interact with each other via the visitors that they share or the virus that is transmitted from one place to another. The research proposes a general visual analytic approach to synthesize very large spatial interaction data and discover interesting (and unknown) patterns. The proposed approach involves a suite of visual and computational techniques, including (1) a new graph partitioning method to segment a very large interaction graph into a moderate number of spatially contiguous subgraphs (regions); (2) a reorderable matrix, with regions 'optimally' ordered on the diagonal, to effectively present a holistic view of major spatial interaction patterns; and (3) a modified flow map, interactively linked to the reorderable matrix, to enable pattern interpretation in a geographical context. The implemented system is able to visualize both people's daily movements and a disease spread over space in a similar way. The discovered spatial interaction patterns provide valuable insight for designing effective pandemic mitigation strategies and supporting decision-making in time-critical situations.
0428c79e5be359ccd13d63205b5e06037404967b
On Bayesian Upper Confidence Bounds for Bandit Problems
Stochastic bandit problems have been analyzed from two different perspectives: a frequentist view, where the parameter is a deterministic unknown quantity, and a Bayesian approach, where the parameter is drawn from a prior distribution. We show in this paper that methods derived from this second perspective prove optimal when evaluated using the frequentist cumulated regret as a measure of performance. We give a general formulation for a class of Bayesian index policies that rely on quantiles of the posterior distribution. For binary bandits, we prove that the corresponding algorithm, termed Bayes-UCB, satisfies finite-time regret bounds that imply its asymptotic optimality. More generally, Bayes-UCB appears as a unifying framework for several variants of the UCB algorithm addressing different bandit problems (parametric multi-armed bandits, Gaussian bandits with unknown mean and variance, linear bandits). But the generality of the Bayesian approach makes it possible to address more challenging models. In particular, we show how to handle linear bandits with sparsity constraints by resorting to Gibbs sampling.
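For the binary-bandit case the index policy is compact enough to sketch: each arm keeps a Beta posterior, and at each round the arm with the highest posterior quantile is pulled. The sketch below follows the commonly cited quantile level 1 - 1/(t * (log n)^c); treat the function name and defaults as illustrative rather than the authors' reference implementation.

```python
import numpy as np
from scipy.stats import beta

def bayes_ucb_bernoulli(reward_fn, n_arms, horizon, c=0):
    """Bayes-UCB for Bernoulli bandits with uniform Beta(1, 1) priors.

    At round t, each arm is scored by the (1 - 1/(t * log(horizon)^c)) quantile
    of its Beta posterior; the highest-scoring arm is pulled.
    """
    successes = np.zeros(n_arms)
    failures = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        level = 1.0 - 1.0 / (t * max(np.log(horizon), 1.0) ** c)
        scores = beta.ppf(level, successes + 1, failures + 1)
        arm = int(np.argmax(scores))
        r = reward_fn(arm)
        successes[arm] += r
        failures[arm] += 1 - r
    return successes, failures

# Example: three arms with means 0.2, 0.5, 0.7
rng = np.random.default_rng(1)
means = [0.2, 0.5, 0.7]
s, f = bayes_ucb_bernoulli(lambda a: rng.binomial(1, means[a]), 3, 2000)
print("pull counts:", (s + f).astype(int))
```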
e030aa1ea57ee47d3f3a0ce05b7e983f95115f1a
Psychometric Properties of Physical Activity and Leisure Motivation Scale in Farsi: an International Collaborative Project on Motivation for Physical Activity and Leisure.
BACKGROUND Given the importance of regular physical activity, it is crucial to evaluate the factors favoring participation in physical activity. We aimed to report the psychometric analysis of the Farsi version of the Physical Activity and Leisure Motivation Scale (PALMS). METHODS The Farsi version of PALMS was completed by 406 healthy adult individuals to test its factor structure and concurrent validity and reliability. RESULTS Conducting the exploratory factor analysis revealed nine factors that accounted for 64.6% of the variances. The PALMS reliability was supported with a high internal consistency of 0.91 and a high test-retest reliability of 0.97 (95% CI: 0.97-0.98). The association between the PALMS and its previous version Recreational Exercise Motivation Measure scores was strongly significant (r= 0.86, P < 0.001). CONCLUSION We have shown that the Farsi version of the PALMS appears to be a valuable instrument to measure motivation for physical activity and leisure.
6570489a6294a5845adfd195a50a226f78a139c1
An extended online purchase intention model for middle-aged online users
This article focuses on examining the determinants and mediators of the purchase intention of non-online purchasers between ages 31 and 60, who mostly have strong purchasing power. It proposes a new online purchase intention model by integrating the technology acceptance model with additional determinants and adding habitual online usage as a new mediator. Based on a sample of more than 300 middle-aged non-online purchasers, beyond some situationally-specific predictor variables, online purchasing attitude and habitual online usage are key mediators. Personal awareness of security only affects habitual online usage, indicating a concern of middle-aged users. Habitual online usage is a
1eb92d883dab2bc6a408245f4766f4c5d52f7545
Maximum Complex Task Assignment: Towards Tasks Correlation in Spatial Crowdsourcing
Spatial crowdsourcing has gained emerging interest from both research communities and industries. Most of current spatial crowdsourcing frameworks assume independent and atomic tasks. However, there could be some cases that one needs to crowdsource a spatial complex task which consists of some spatial sub-tasks (i.e., tasks related to a specific location). The spatial complex task's assignment requires assignments of all of its sub-tasks. The currently available frameworks are inapplicable to such kind of tasks. In this paper, we introduce a novel approach to crowdsource spatial complex tasks. We first formally define the Maximum Complex Task Assignment (MCTA) problem and propose alternative solutions. Subsequently, we perform various experiments using both real and synthetic datasets to investigate and verify the usability of our proposed approach.
44298a4cf816fe8d55c663337932724407ae772b
A survey on policy search algorithms for learning robot controllers in a handful of trials
Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word “big-data”, we refer to this challenge as “micro-data reinforcement learning”. We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time.
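The second strategy named above (a data-driven surrogate of the expected return queried in place of the real robot) can be illustrated with a minimal Bayesian-optimization loop. This is a generic sketch under stated assumptions, not any specific algorithm surveyed in the paper: `episode_return` is a hypothetical stand-in for one physical trial, and the Gaussian-process kernel, acquisition rule, and candidate sampling are illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt_policy_search(episode_return, dim, n_trials=12, n_candidates=2000, seed=0):
    """Search policy parameters with only a handful of real trials.

    A Gaussian-process surrogate of the expected return is refit after every
    episode and queried (via an optimistic UCB-style acquisition) instead of
    running the real system for every candidate policy.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(3, dim))            # a few random initial policies
    y = np.array([episode_return(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_trials - len(X)):
        gp.fit(X, y)
        cand = rng.uniform(-1, 1, size=(n_candidates, dim))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + 2.0 * sigma)]    # query the model, not the robot
        X = np.vstack([X, x_next])
        y = np.append(y, episode_return(x_next))      # one more real trial
    return X[np.argmax(y)], y.max()

# Toy stand-in for a robot episode: return peaks at parameters (0.3, -0.2)
best_x, best_y = bayes_opt_policy_search(
    lambda x: -np.sum((x - np.array([0.3, -0.2])) ** 2), dim=2)
print(best_x, best_y)
```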
35318f1dcc88c8051911ba48815c47d424626a92
Visual Analysis of TED Talk Topic Trends
TED Talks are short, powerful talks given by some of the world's brightest minds - scientists, philanthropists, businessmen, artists, and many others. Funded by members and advertising, these talks are free to access by the public on the TED website and TED YouTube channel, and many videos have become viral phenomena. In this research project, we perform a visual analysis of TED Talk videos and playlists to gain a good understanding of the trends and relationships between TED Talk topics.
a42569c671b5f9d0fe2007af55199d668dae491b
Fine-grained Concept Linking using Neural Networks in Healthcare
To unlock the wealth of healthcare data, we often need to link the real-world text snippets to the referred medical concepts described by the canonical descriptions. However, existing healthcare concept linking methods, such as dictionary-based and simple machine learning methods, are not effective due to the word discrepancy between the text snippet and the canonical concept description, and the overlapping concept meaning among the fine-grained concepts. To address these challenges, we propose a Neural Concept Linking (NCL) approach for accurate concept linking using systematically integrated neural networks. We call the novel neural network architecture the COMposite AttentIonal encode-Decode neural network (COM-AID). COM-AID performs an encode-decode process that encodes a concept into a vector and decodes the vector into a text snippet with the help of two devised contexts. On the one hand, it injects the textual context into the neural network through the attention mechanism, so that the word discrepancy can be overcome from the semantic perspective. On the other hand, it incorporates the structural context into the neural network through the attention mechanism, so that minor concept meaning differences can be enlarged and effectively differentiated. Empirical studies on two real-world datasets confirm that the NCL produces accurate concept linking results and significantly outperforms state-of-the-art techniques.
30f2b6834d6f2322da204f36ad24ddf43cc45d33
Structural XML Classification in Concept Drifting Data Streams
Classification of large, static collections of XML data has been intensively studied in the last several years. Recently however, the data processing paradigm is shifting from static to streaming data, where documents have to be processed online using limited memory and class definitions can change with time in an event called concept drift. As most existing XML classifiers are capable of processing only static data, there is a need to develop new approaches dedicated for streaming environments. In this paper, we propose a new classification algorithm for XML data streams called XSC. The algorithm uses incrementally mined frequent subtrees and a tree-subtree similarity measure to classify new documents in an associative manner. The proposed approach is experimentally evaluated against eight state-of-the-art stream classifiers on real and synthetic data. The results show that XSC performs significantly better than competitive algorithms in terms of accuracy and memory usage.
14829636fee5a1cf8dee9737849a8e2bdaf9a91f
Bitter to Better - How to Make Bitcoin a Better Currency
Bitcoin is a distributed digital currency which has attracted a substantial number of users. We perform an in-depth investigation to understand what made Bitcoin so successful, while decades of research on cryptographic e-cash has not led to a large-scale deployment. We also ask how Bitcoin could become a good candidate for a long-lived stable currency. In doing so, we identify several issues and attacks on Bitcoin, and propose suitable techniques to address them.
35fe18606529d82ce3fc90961dd6813c92713b3c
SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies
Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design. Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions. We provide the first systematic exposition of Bitcoin and the many related cryptocurrencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bitcoin's design that can be decoupled. This enables a more insightful analysis of Bitcoin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disintermediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparison.
3d16ed355757fc13b7c6d7d6d04e6e9c5c9c0b78
Majority Is Not Enough: Bitcoin Mining Is Vulnerable
5e86853f533c88a1996455d955a2e20ac47b3878
Information propagation in the Bitcoin network
Bitcoin is a digital currency that, unlike traditional currencies, does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause of blockchain forks. Blockchain forks should be avoided as they are symptomatic of inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior.
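The conjecture that propagation delay drives the fork rate can be made concrete with a toy Monte Carlo estimate: blocks arrive as a Poisson process, and a fork occurs when a competing block is found while the previous one is still propagating. The numbers below (600 s block interval, 12 s delay) are illustrative defaults, not measurements from the paper.

```python
import random

def fork_probability(mean_block_interval=600.0, propagation_delay=12.0,
                     trials=200_000, seed=42):
    """Toy Monte Carlo estimate of the blockchain fork rate.

    Blocks are found as a Poisson process (exponential inter-arrival times).
    A fork occurs when a second block is found while the first one is still
    propagating; the delay value here is purely illustrative.
    """
    rng = random.Random(seed)
    forks = 0
    for _ in range(trials):
        # time until the next competing block, right after one has been found
        next_block = rng.expovariate(1.0 / mean_block_interval)
        if next_block < propagation_delay:
            forks += 1
    return forks / trials

print(f"estimated fork rate: {fork_probability():.3%}")
```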
5fb1285e05bbd78d0094fe8061c644ea09d9da8d
Double-spending fast payments in bitcoin
Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (on the order of a few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of fast payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated in the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments.
d2920567fb66bc69d92ab2208f6455e37ce6138b
Disruptive Innovation : Removing the Innovators ’ Dilemma
The objectives of this research are to co-create understanding and knowledge on the phenomenon of disruptive innovation in order to provide pragmatic clarity on the term’s meaning, impact and implications. This will address the academic audience’s gap in knowledge and provide help to practitioners wanting to understand how disruptive innovation can be fostered as part of a major competitive strategy. This paper reports on the first eighteen months of a three year academic and industrial investigation. It presents a new pragmatic definition drawn from the literature and an overview of the conceptual framework for disruptive innovation that was co-created via the collaborative efforts of academia and industry. The barriers to disruptive innovation are presented and a best practice case study of how one company is overcoming these barriers is described. The remainder of the research, which is supported by a European Commission co-sponsored project called Disrupt-it, will focus on developing and validating tools to help overcome these barriers. 1.0. Introduction and Background. In his ground-breaking book “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail”, Clayton Christensen first coined the phrase ‘disruptive technologies’. He showed that time and again almost all the organisations that have ‘died’ or been displaced from their industries because of a new paradigm of customer offering could see the disruption coming but did nothing until it was too late (Christensen, 1997). They assess the new approaches or technologies and frame them as either deficient or as an unlikely threat much to the managers’ regret and the organisation’s demise (Christensen, 2002). In the early 1990s, major airlines such as British Airways decided that the opportunities afforded by a low-cost, point-to-point, no-frills strategy such as that introduced by the newly formed Ryanair was an unlikely threat. By the mid-1990’s other newcomers such as easyJet had embraced Ryanair’s foresight and before long, the ‘low cost’ approach had captured a large segment of the market. Low-cost no frills proved a hit with European travellers but not with the established airlines who had either ignored the threat or failed to capitalise on the approach. Today DVD technology and Charles Schwab are seen to be having a similar impact upon the VHS industry and Merrill Lynch respectively; however, disruption is not just a recent phenomenon: it has firm foundations as a trend in the past that will also inevitably occur in the future. Examples of past disruptive innovations would include the introduction of the telegraph and its impact upon businesses like Pony Express and the transistor’s impact upon the companies that produced cathode ray tubes. Future predictions include the impact of Light Emitting Diode (L.E.D.) technology and its potential to completely disrupt the traditional light bulb sector and its supporting industries. More optimistically, Christensen (2002) further shows that the process of disruptive innovation has been one of the fundamental causal mechanisms through which access to life improving products and services has been increased and the basis on which long term organisational survival could be ensured (Christensen, 1997).
In spite of the proclaimed importance of disruptive innovation and the ever-increasing interest from both the business and academic press alike, there still appears to be a disparity between rhetoric and reality. To date, the multifaceted and interrelated issues of disruptive innovation have not been investigated in depth. The phenomenon with examples has been described by a number of authors (Christensen, 1997, Moore, 1995, Gilbert and Bower, 2002) and practitioner orientated writers have begun to offer strategies for responding to disruptive change (Charitou and Markides, 2003, Rigby and Corbett, 2002, Rafi and Kampas, 2002). However, a deep integrated understanding of the entire subject is missing. In particular, there is an industrial need and academic gap in knowledge in the pragmatic comprehension of how organisations can understand and foster disruptive innovation as part of a major competitive strategy. The objectives of this research are to co-create understanding and knowledge on the phenomenon of disruptive innovation in order to provide pragmatic clarity on the term’s meaning, impact and implications. This will address the academic audience’s gap in knowledge and provide help to practitioners wanting to understand how disruptive innovation can be fostered as part of a major competitive strategy. The current paper reports on the first eighteen months of a three year academic and industrial investigation. It presents a new pragmatic definition drawn from the literature and an overview of the conceptual framework for disruptive innovation that was co-created via the collaborative efforts of academia and industry. The barriers to disruptive innovation are presented and a best practice case study of how one company is overcoming these barriers is described. The research contributes to “Disrupt-it”, a €3 million project for the Information Society Technologies Commission under the 5th Framework Program of the European Union, which will focus on developing and validating tools to help organisations foster disruptive innovation. 2.0 Understanding the Phenomenon of Disruptive Innovation. ‘Disruptive Innovation’, ‘Disruptive Technologies’ and ‘Disruptive Business Strategies’ are emerging and increasingly prominent business terms that are used to describe a form of revolutionary change. They are receiving ever more academic and industrial attention, yet these terms are still poorly defined and not well understood. A key objective of this research is to improve the understanding of disruptive innovation by drawing together multiple perspectives on the topic, as shown in Figure 1, into a more holistic and comprehensive definition. Much of the past investigation into discontinuous and disruptive innovation has been path dependent upon the researchers’ investigative history. For example, Hamel’s strategy background leads him to see disruptive innovation through the lens of the ‘business model’; whereas Christensen’s technologically orientated past leads to a focus on ‘disruptive technologies’. What many researchers share is the view that firms need to periodically engage in the process of revolutionary change for long-term survival and this is not a new
phenomenon (Christensen, 1997; Christensen and Rosenbloom, 1995; Hamel, 2000; Schumpeter, 1975, Tushman and Anderson, 1986; Tushman and Nadler, 1986, Gilbert and Bower, 2002; Rigby and Corbett, 2002; Charitou and Markides, 2003; Foster and Kaplan, 2001; Thomond and Lettice, 2002). Disruptive innovation has also been defined as “a technology, product or process that creeps up from below an existing business and threatens to displace it. Typically, the disrupter offers lower performance and less functionality... The product or process is good enough for a meaningful number of customers – indeed some don’t buy the older version’s higher functionality and welcome the disruption’s simplicity. And gradually, the new product or process improves to the point where it displaces the incumbent.” (Rafi and Kampas, 2002, p. 8). This definition borrows heavily from the work of Christensen (1997), which in turn has some of its origins in the findings of Dosi (1982). For example, each of the cases of disruptive innovation mentioned thus far represents a new paradigm of customer offering. Dosi (1982) claims that these can be represented as discontinuities in trajectories of progress as defined within earlier paradigms where a technological paradigm is a pattern of solutions for selected technological problems. In fact, new paradigms redefine the future meaning of progress and a new class of problems becomes the target of normal incremental innovation (Dosi, 1982). Therefore, disruptive innovations appear to typify a particular type of ‘discontinuous innovation’ (a term which has received much more academic attention). The same characteristics are found, except that disruptive innovations first establish their commercial footing in new or simple market niches by enabling customers to do things that only specialists could do before (e.g. low cost European airlines are opening up air travel to thousands that did not fly before) and that these new offerings, through a period of exploitation, migrate upmarket and eventually redefine the paradigms and value propositions on which the existing industry is based (Christensen, 1997, 2002; Moore, 1995; Charitou & Markides, 2003; Hamel, 2000).
5e90e57fccafbc78ecbac1a78c546b7db9a468ce
Finding new terminology in very large corpora
Most technical and scientific terms are composed of complex, multi-word noun phrases, but certainly not all noun phrases are technical or scientific terms. The distinction of specific terminology from common non-specific noun phrases can be based on the observation that terms reveal a much lesser degree of distributional variation than non-specific noun phrases. We formalize the limited paradigmatic modifiability of terms and, subsequently, test the corresponding algorithm on bigram, trigram and quadgram noun phrases extracted from a 104-million-word biomedical text corpus. Using an already existing and community-wide curated biomedical terminology as an evaluation gold standard, we show that our algorithm significantly outperforms standard term identification measures and, therefore, qualifies as a high-performance building block for any terminology identification system. We also provide empirical evidence that the superiority of our approach, beyond a 10-million-word threshold, is essentially domain- and corpus-size-independent.
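The intuition of limited paradigmatic modifiability (a true term keeps its token slots fixed, so few competing fillers are observed in the corpus) can be shown with a one-slot simplification. The function below is a hedged, simplified rendition for illustration only, not the published k-slot measure, and all names and the toy corpus are hypothetical.

```python
from collections import Counter

def paradigmatic_modifiability(ngram, corpus_ngrams):
    """Simplified 'limited paradigmatic modifiability' score for one n-gram.

    For every single slot, the n-gram's frequency is divided by the total
    frequency of all corpus n-grams that agree on the remaining slots.
    A true term keeps its slots fixed, so each ratio (and the product) is high.
    """
    counts = Counter(corpus_ngrams)
    freq = counts[ngram]
    if freq == 0:
        return 0.0
    score = 1.0
    for slot in range(len(ngram)):
        pool = sum(f for g, f in counts.items()
                   if len(g) == len(ngram)
                   and all(g[j] == ngram[j] for j in range(len(ngram)) if j != slot))
        score *= freq / pool
    return score

corpus = ([("heat", "shock", "protein")] * 40 + [("heat", "shock", "response")] * 5
          + [("nice", "shock", "protein")] * 1 + [("growth", "factor", "receptor")] * 30)
print(paradigmatic_modifiability(("heat", "shock", "protein"), corpus))
```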
991891e3aa226766dcb4ad7221045599f8607685
Review of axial flux induction motor for automotive applications
Hybrid and electric vehicles have been the focus of many academic and industrial studies to reduce transport pollution; they are now established products. In hybrid and electric vehicles, the drive motor should have high torque density, high power density, high efficiency, strong physical structure and variable speed range. An axial flux induction motor is an interesting solution, where the motor is a double sided axial flux machine. This can significantly increase torque density. In this paper a review of the axial flux motor for automotive applications, and the different possible topologies for the axial field motor, are presented.
11b111cbe79e5733fea28e4b9ff99fe7b4a4585c
Generalized vulnerability extrapolation using abstract syntax trees
The discovery of vulnerabilities in source code is a key for securing computer systems. While specific types of security flaws can be identified automatically, in the general case the process of finding vulnerabilities cannot be automated and vulnerabilities are mainly discovered by manual analysis. In this paper, we propose a method for assisting a security analyst during auditing of source code. Our method proceeds by extracting abstract syntax trees from the code and determining structural patterns in these trees, such that each function in the code can be described as a mixture of these patterns. This representation enables us to decompose a known vulnerability and extrapolate it to a code base, such that functions potentially suffering from the same flaw can be suggested to the analyst. We evaluate our method on the source code of four popular open-source projects: LibTIFF, FFmpeg, Pidgin and Asterisk. For three of these projects, we are able to identify zero-day vulnerabilities by inspecting only a small fraction of the code bases.
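The pipeline described above (functions represented as mixtures of structural AST patterns, then ranked by similarity to a known-vulnerable function) can be approximated with off-the-shelf tools. The sketch below stubs out the hard part (turning real source code into AST node/subtree identifiers) and uses truncated SVD plus cosine similarity as a stand-in for the paper's pattern extraction; it is not a reimplementation of the authors' tooling.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def rank_similar_functions(ast_token_docs, vulnerable_idx, n_patterns=10):
    """Rank functions by similarity to a known-vulnerable one in AST-pattern space.

    Each entry of ast_token_docs is a whitespace-joined string of AST node /
    subtree identifiers for one function (producing these from real code is the
    part this sketch leaves out). Truncated SVD extracts latent 'structural
    patterns'; functions are scored by cosine similarity in that latent space.
    """
    counts = CountVectorizer(token_pattern=r"\S+").fit_transform(ast_token_docs)
    k = min(n_patterns, counts.shape[0] - 1, counts.shape[1] - 1)
    latent = TruncatedSVD(n_components=k, random_state=0).fit_transform(counts)
    sims = cosine_similarity(latent[vulnerable_idx:vulnerable_idx + 1], latent)[0]
    return np.argsort(-sims)        # most similar functions first

docs = [
    "call:memcpy param:len decl:char_buf loop:for cond:lt",    # known flaw
    "call:memcpy param:size decl:char_buf loop:while cond:lt",
    "call:printf decl:int ret:int",
]
print(rank_similar_functions(docs, vulnerable_idx=0))
```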
0dbed89ea3296f351eb986cc02678c7a33d50945
A Combinatorial Noise Model for Quantum Computer Simulation
Quantum computers (QCs) have many potential hardware implementations ranging from solid-state silicon-based structures to electron-spin qubits on liquid helium. However, all QCs must contend with gate infidelity and qubit state decoherence over time. Quantum error correcting codes (QECCs) have been developed to protect program qubit states from such noise. Previously, Monte Carlo noise simulators have been developed to model the effectiveness of QECCs in combating decoherence. The downside to this random sampling approach is that it may take days or weeks to produce enough samples for an accurate measurement. We present an alternative noise modeling approach that performs combinatorial analysis rather than random sampling. This model tracks the progression of the most likely error states of the quantum program through its course of execution. This approach has the potential for enormous speedups versus the previous Monte Carlo methodology. We have found speedups with the combinatorial model on the order of 100X-1,000X over the Monte Carlo approach when analyzing applications utilizing the [[7,1,3]] QECC. The combinatorial noise model has significant memory requirements, and we analyze its scaling properties relative to the size of the quantum program. Due to its speedup, this noise model is a valuable alternative to traditional Monte Carlo simulation.
47f0455d65a0823c70ce7cce9749f3abd826e0a7
Random Walk with Restart on Large Graphs Using Block Elimination
Given a large graph, how can we calculate the relevance between nodes fast and accurately? Random walk with restart (RWR) provides a good measure for this purpose and has been applied to diverse data mining applications including ranking, community detection, link prediction, and anomaly detection. Since calculating RWR from scratch takes a long time, various preprocessing methods, most of which are related to inverting adjacency matrices, have been proposed to speed up the calculation. However, these methods do not scale to large graphs because they usually produce large dense matrices that do not fit into memory. In addition, the existing methods are inappropriate when graphs dynamically change because the expensive preprocessing task needs to be computed repeatedly. In this article, we propose Bear, a fast, scalable, and accurate method for computing RWR on large graphs. Bear has two versions: a preprocessing method BearS for static graphs and an incremental update method BearD for dynamic graphs. BearS consists of the preprocessing step and the query step. In the preprocessing step, BearS reorders the adjacency matrix of a given graph so that it contains a large and easy-to-invert submatrix, and precomputes several matrices including the Schur complement of the submatrix. In the query step, BearS quickly computes the RWR scores for a given query node using a block elimination approach with the matrices computed in the preprocessing step. For dynamic graphs, BearD efficiently updates the changed parts in the preprocessed matrices of BearS based on the observation that only small parts of the preprocessed matrices change when few edges are inserted or deleted. Through extensive experiments, we show that BearS significantly outperforms other state-of-the-art methods in terms of preprocessing and query speed, space efficiency, and accuracy. We also show that BearD quickly updates the preprocessed matrices and immediately computes queries when the graph changes.
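The block-elimination idea at the heart of BearS can be shown on a dense toy graph: RWR scores satisfy a linear system, and partitioning that system lets the Schur complement of the leading block do the work. The snippet below is a minimal numpy sketch of that step only; the real method also reorders the graph so that the leading block is cheap to invert, and works with sparse matrices.

```python
import numpy as np

def rwr_block_elimination(A, q, c=0.15, n1=None):
    """Random-walk-with-restart scores via block elimination.

    Solves (I - (1 - c) * W) r = c * q, where W is the column-normalized
    adjacency matrix and c the restart probability, by partitioning the system
    into two blocks and using the Schur complement of the leading block.
    """
    n = A.shape[0]
    W = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)   # column-normalize
    M = np.eye(n) - (1 - c) * W
    b = c * q
    n1 = n1 if n1 is not None else n // 2
    M11, M12 = M[:n1, :n1], M[:n1, n1:]
    M21, M22 = M[n1:, :n1], M[n1:, n1:]
    b1, b2 = b[:n1], b[n1:]
    M11_inv = np.linalg.inv(M11)
    S = M22 - M21 @ M11_inv @ M12                              # Schur complement
    r2 = np.linalg.solve(S, b2 - M21 @ M11_inv @ b1)
    r1 = M11_inv @ (b1 - M12 @ r2)
    return np.concatenate([r1, r2])

# Tiny example: a 5-node ring, restarting from node 0
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
q = np.zeros(5); q[0] = 1.0
r = rwr_block_elimination(A, q)
print(r, "sum =", r.sum())
```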
239222aead65a66be698036d04e4af6eaa24b77b
An energy-efficient unequal clustering mechanism for wireless sensor networks
Clustering provides an effective way for prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves an obvious improvement on the network lifetime
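The unequal-cluster-size idea reduces to a simple rule: a tentative cluster head's competition radius shrinks as it gets closer to the base station, so near-BS clusters stay small and their heads keep energy for relaying. The helper below follows the commonly cited EEUC-style form of that rule; the constants are illustrative, not values from the paper.

```python
def competition_radius(d_to_bs, d_min, d_max, r_max=90.0, alpha=0.5):
    """Competition radius of a tentative cluster head in EEUC-style clustering.

    Heads closer to the base station get a smaller radius (smaller clusters),
    preserving energy for inter-cluster relay traffic. r_max and alpha are
    illustrative; a deployment would tune them.
    """
    frac = (d_max - d_to_bs) / max(d_max - d_min, 1e-9)
    return (1.0 - alpha * frac) * r_max

for d in (60, 120, 200):
    print(d, "m from BS ->", round(competition_radius(d, d_min=50, d_max=200), 1), "m radius")
```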
d19f938c790f0ffd8fa7fccc9fd7c40758a29f94
Art-Bots: Toward Chat-Based Conversational Experiences in Museums
7a5ae36df3f08df85dfaa21fead748f830d5e4fa
Learning Bound for Parameter Transfer Learning
We consider a transfer-learning problem by using the parameter transfer approach, where a suitable parameter of feature mapping is learned through one task and applied to another objective task. Then, we introduce the notions of local stability and parameter transfer learnability of parametric feature mapping, and thereby derive a learning bound for parameter transfer algorithms. As an application of parameter transfer learning, we discuss the performance of sparse coding in self-taught learning. Although self-taught learning algorithms with plentiful unlabeled data often show excellent empirical performance, their theoretical analysis has not been studied. In this paper, we also provide the first theoretical learning bound for self-taught learning.
2b695f4060e78f9977a3da1c01a07a05a3f94b28
Analyzing Posture and Affect in Task-Oriented Tutoring
Intelligent tutoring systems research aims to produce systems that meet or exceed the effectiveness of one-on-one expert human tutoring. Theory and empirical study suggest that affective states of the learner must be addressed to achieve this goal. While many affective measures can be utilized, posture offers the advantages of non-intrusiveness and ease of interpretation. This paper presents an accurate posture estimation algorithm applied to a computer-mediated tutoring corpus of depth recordings. Analyses of posture and session-level student reports of engagement and cognitive load identified significant patterns. The results indicate that disengagement and frustration may coincide with closer postural positions and more movement, while focused attention and less frustration occur with more distant, stable postural positions. It is hoped that this work will lead to intelligent tutoring systems that recognize a greater breadth of affective expression through channels of posture and gesture.
c28bcaab43e57b9b03f09fd2237669634da8a741
Contributions of the prefrontal cortex to the neural basis of human decision making
The neural basis of decision making has been an elusive concept largely due to the many subprocesses associated with it. Recent efforts involving neuroimaging, neuropsychological studies, and animal work indicate that the prefrontal cortex plays a central role in several of these subprocesses. The frontal lobes are involved in tasks ranging from making binary choices to making multi-attribute decisions that require explicit deliberation and integration of diverse sources of information. In categorizing different aspects of decision making, a division of the prefrontal cortex into three primary regions is proposed. (1) The orbitofrontal and ventromedial areas are most relevant to deciding based on reward values and contribute affective information regarding decision attributes and options. (2) Dorsolateral prefrontal cortex is critical in making decisions that call for the consideration of multiple sources of information, and may recruit separable areas when making well defined versus poorly defined decisions. (3) The anterior and ventral cingulate cortex appear especially relevant in sorting among conflicting options, as well as signaling outcome-relevant information. This topic is broadly relevant to cognitive neuroscience as a discipline, as it generally comprises several aspects of cognition and may involve numerous brain regions depending on the situation. The review concludes with a summary of how these regions may interact in deciding and possible future research directions for the field.
3ec40e4f549c49b048cd29aeb0223e709abc5565
Image-based Airborne LiDAR Point Cloud Encoding for 3D Building Model Retrieval
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To efficiently retrieve models, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from depth images based on the height, edges, and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods.
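The first encoding step (rasterizing a roof point cloud into a top-view depth image) is straightforward to sketch with numpy binning. The grid resolution, max-height aggregation, and normalization below are illustrative choices under the stated assumption of an (N, 3) x/y/z array; they are not the paper's exact parameters.

```python
import numpy as np

def top_view_depth_image(points, grid=64):
    """Rasterize an airborne LiDAR point cloud into a top-view height image.

    points is an (N, 3) array of x, y, z coordinates; each pixel stores the
    maximum height of the points falling into it (the roof surface), then the
    image is normalized to [0, 1].
    """
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    idx = np.clip(((xy - lo) / np.maximum(hi - lo, 1e-9) * (grid - 1)).astype(int),
                  0, grid - 1)
    z_min, z_max = points[:, 2].min(), points[:, 2].max()
    img = np.full((grid, grid), z_min)
    np.maximum.at(img, (idx[:, 1], idx[:, 0]), points[:, 2])   # keep max z per cell
    return (img - z_min) / max(z_max - z_min, 1e-9)

# Example: a synthetic gabled roof sampled with noise
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 20, 5000), rng.uniform(0, 10, 5000)
z = 5.0 - np.abs(y - 5.0) * 0.4 + rng.normal(0, 0.05, 5000)
print(top_view_depth_image(np.column_stack([x, y, z])).shape)
```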
2433254a9df37729159daa5eeec56123e122518e
THE ROLE OF DIGITAL AND SOCIAL MEDIA MARKETING IN CONSUMER BEHAVIOR
This article reviews recently published research about consumers in digital and social media marketing settings. Five themes are identified: (i) consumer digital culture, (ii) responses to digital advertising, (iii) effects of digital environments on consumer behavior, (iv) mobile environments, and (v) online word of mouth (WOM). Collectively these articles shed light from many different angles on how consumers experience, influence, and are influenced by the digital environments in which they are situated as part of their daily lives. Much is still to be understood, and existing knowledge tends to be disproportionately focused on WOM, which is only part of the digital consumer experience. Several directions for future research are advanced to encourage researchers to consider a broader range of phenomena.
399bc455dcbaf9eb0b4144d0bc721ac4bb7c8d59
A Spreadsheet Algebra for a Direct Data Manipulation Query Interface
A spreadsheet-like "direct manipulation" interface is more intuitive for many non-technical database users compared to traditional alternatives, such as visual query builders. The construction of such a direct manipulation interface may appear straightforward, but there are some significant challenges. First, individual direct manipulation operations cannot be too complex, so expressive power has to be achieved through composing (long) sequences of small operations. Second, all intermediate results are visible to the user, so grouping and ordering are material after every small step. Third, users often find the need to modify previously specified queries. Since manipulations are specified one step at a time, there is no actual query expression to modify. Suitable means must be provided to address this need. Fourth, the order in which manipulations are performed by the user should not affect the results obtained, to avoid user confusion. We address the aforementioned challenges by designing a new spreadsheet algebra that: i) operates on recursively grouped multi-sets, ii) contains a selectively designed set of operators capable of expressing at least all single-block SQL queries and can be intuitively implemented in a spreadsheet, iii) enables query modification by the notion of modifiable query state, and iv) requires no ordering in unary data manipulation operators since they are all designed to commute. We built a prototype implementation of the spreadsheet algebra and show, through user studies with non-technical subjects, that the resultant query interface is easier to use than a standard commercial visual query builder.
1eff385c88fd1fdd1c03fd3fb573de2530b73f99
OBJECTIVE SELF-AWARENESS THEORY : RECENT PROGRESS AND ENDURING PROBLEMS By :
Objective self-awareness theory has undergone fundamental changes in the 3 decades since Duval and Wicklund's (1972) original formulation. We review new evidence that bears on the basic tenets of the theory. Many of the assumptions of self-awareness theory require revision, particularly how expectancies influence approach and avoidance of self-standard discrepancies; the nature of standards, especially when they are changed; and the role of causal attribution in directing discrepancy reduction. However, several unresolved conceptual issues remain; future theoretical and empirical directions are discussed. Article: The human dilemma is that which arises out of a man's capacity to experience himself as both subject and object at the same time. Both are necessary--for the science of psychology, for therapy, and for gratifying living. (May, 1967, p. 8) Although psychological perspectives on the self have a long history (e.g., Cooley, 1902; James, 1890; Mead, 1934), experimental research on the self has emerged only within the last 40 years. One of the earliest "self theories" was objective self-awareness (OSA) theory (Duval & Wicklund, 1972). OSA theory was concerned with the self-reflexive quality of the consciousness. Just as people can apprehend the existence of environmental stimuli, they can be aware of their own existence: "When attention is directed inward and the individual's consciousness is focused on himself, he is the object of his own consciousness--hence 'objective' self awareness" (Duval & Wicklund, 1972, p. 2). This is contrasted with "subjective self-awareness" that results when attention is directed away from the self and the person "experiences himself as the source of perception and action" (Duval & Wicklund, 1972, p. 3). By this, Duval and Wicklund (1972, chap. 3) meant consciousness of one's existence on an organismic level, in which such existence is undifferentiated as a separate and distinct object in the world. OSA theory has stimulated a lot of research and informed basic issues in social psychology, such as emotion (Scheier & Carver, 1977), attribution (Duval & Wicklund, 1973), attitude--behavior consistency (Gibbons, 1983), self-standard comparison (Duval & Lalwani, 1999), prosocial behavior (Froming, Nasby, & McManus, 1998), deindividuation (Diener, 1979), stereotyping (Macrae, Bodenhausen, & Milne, 1998), self-assessment (Silvia & Gendolla, in press), terror management (Arndt, Greenberg, Simon, Pyszczynski, & Solomon, 1998; Silvia, 2001), and group dynamics (Duval, 1976; Mullen, 1983). Self-focused attention is also fundamental to a host of clinical and health phenomena (Hull, 1981; Ingram, 1990; Pyszczynski, Hamilton, Greenberg, & Becker, 1991; Wells & Matthews, 1994). The study of self-focused attention continues to be a dynamic and active research area. A lot of research relevant to basic theoretical issues has been conducted since the last major review (Gibbons, 1990). Recent research has made progress in understanding links between self-awareness and causal attribution, the effects of expectancies on self-standard discrepancy reduction, and the nature of standards--the dynamics of self-awareness are now viewed quite differently. We review these recent developments and hope that a conceptual integration of new findings will further stimulate research on self-focused attention. However, there is still much conceptual work left to be done, and many basic issues remain murky and controversial. 
We discuss these unresolved issues and sketch the beginnings of some possible solutions. Original Theory The original statement of OSA theory (Duval & Wicklund, 1972) employed only a few constructs, relations, and processes. The theory assumed that the orientation of conscious attention was the essence of selfevaluation. Focusing attention on the self brought about objective self-awareness, which initiated an automatic comparison of the self against standards. The self was defined very broadly as the person's knowledge of the person. A standard was "defined as a mental representation of correct behavior, attitudes, and traits ... All of the standards of correctness taken together define what a 'correct' person is" (Duval & Wicklund, 1972, pp. 3, 4). This simple system consisting of self, standards, and attentional focus was assumed to operate according to gestalt consistency principles (Heider, 1960). If a discrepancy was found between self and standards, negative affect was said to arise. This aversive state then motivated the restoration of consistency. Two behavioral routes were proposed. People could either actively change their actions, attitudes, or traits to be more congruent with the representations of the standard or could avoid the self-focusing stimuli and circumstances. Avoidance effectively terminates the comparison process and hence all self-evaluation. Early research found solid support for these basic ideas (Carver, 1975; Gibbons & Wicklund, 1976; Wicklund & Duval, 1971). Duval and Wicklund (1972) also assumed that objective self-awareness would generally be an aversive state--the probability that at least one self-standard discrepancy exists is quite high. This was the first assumption to be revised. Later work found that self-awareness can be a positive state when people are congruent with their standards (Greenberg & Musham, 1981; Ickes, Wicklund, & Ferris, 1973). New Developments OSA theory has grown considerably from the original statement (Duval & Wicklund, 1972). Our review focuses primarily on core theoretical developments since the last review (Gibbons, 1990). Other interesting aspects, such as interpersonal processes and interoceptive accuracy, have not changed significantly since previous reviews (Gibbons, 1990; Silvia & Gendolla, in press). We will also overlook the many clinical consequences of self-awareness; these have been exhaustively reviewed elsewhere (Pyszczynski et al., 1991; Wells & Matthews, 1994). Expectancies and Avoiding Self-Awareness Reducing a discrepancy or avoiding self-focus are equally effective ways of reducing the negative affect resulting from a discrepancy. When do people do one or the other? The original theory was not very specific about when approach versus avoidance would occur. Duval and Wicklund (1972) did, however, speculate that two factors should be relevant. The first was whether people felt they could effectively reduce the discrepancy; the second was whether the discrepancy was small or large. In their translation of OSA theory into a "test--operate--test--exit" (TOTE) feedback system, Carver, Blaney, and Scheier (1979a, 1979b) suggested that expectancies regarding outcome favorability determine approach versus avoidance behavior. When a self-standard discrepancy is recognized, people implicitly appraise their likelihood of reducing the discrepancy (cf. Bandura, 1977; Lazarus, 1966). If eventual discrepancy reduction is perceived as likely, people will try to achieve the standard. 
When expectations regarding improvement are unfavorable, however, people will try to avoid self-focus. Later research and theory (Duval, Duval, & Mulilis, 1992) refined Duval and Wicklund's (1972) speculations and the notion of outcome favorability. Expectancies are not simply and dichotomously favorable or unfavorable--they connote a person's rate of progress in discrepancy reduction relative to the magnitude of the discrepancy. More specifically, people will try to reduce a discrepancy to the extent they believe that their rate of progress is sufficient relative to the magnitude of the problem. Those who believe their rate of progress to be insufficient will avoid. To test this hypothesis, participants were told they were either highly (90%) or mildly (10%) discrepant from an experimental standard (Duval et al., 1992, Study 1). Participants were then given the opportunity to engage in a remedial task that was guaranteed by the experimenter to totally eliminate their deficiency provided that they worked on the task for 2 hr and 10 min. However, the rate at which working on the task would reduce the discrepancy was varied. In the low rate of progress conditions individuals were shown a performance curve indicating no progress until the last 20 min of working on the remedial task. During the last 20 min discrepancy was reduced to zero. In the constant rate of progress condition, participants were shown a performance curve in which progress toward discrepancy reduction began immediately and continued throughout such efforts with 30% of the deficiency being reduced in the first 30 min of activity and totally eliminated after 2 hr and 10 min. Results indicated that persons who believed the discrepancy to be mild and progress to be constant worked on the remedial task; those who perceived the problem to be mild but the rate of progress to be low avoided this activity. However, participants who thought that the discrepancy was substantial and the rate of progress only constant avoided working on the remedial task relative to those in the mild discrepancy and constant rate of progress condition. These results were conceptually replicated in a second experiment (Duval et al., 1992) using participants' time to complete the total 2 hr and 10 min of remedial work as the dependent measure. This pattern suggests that the rate of progress was sufficient relative to the magnitude of the discrepancy in the mild discrepancy and constant rate of progress condition; this in turn promoted approaching the problem. In the high discrepancy and constant rate of progress condition and the high and mild discrepancy and low rate of progress conditions, rate of progress was insufficient and promoted avoiding the problem. In a third experiment (Duval et al., 1992), people were again led to believe that they were either highly or mildly discrepant from a standard on an intellectual dimension and then given the opportunity to reduce that deficiency by working on a remedial task. A
0341cd2fb49a56697edaf03b05734f44d0e41f89
An empirical study on dependence clusters for effort-aware fault-proneness prediction
A dependence cluster is a set of mutually inter-dependent program elements. Prior studies have found that large dependence clusters are prevalent in software systems. It has been suggested that dependence clusters have potentially harmful effects on software quality. However, little empirical evidence has been provided to support this claim. The study presented in this paper investigates the relationship between dependence clusters and software quality at the function-level with a focus on effort-aware fault-proneness prediction. The investigation first analyzes whether or not larger dependence clusters tend to be more fault-prone. Second, it investigates whether the proportion of faulty functions inside dependence clusters is significantly different from the proportion of faulty functions outside dependence clusters. Third, it examines whether or not functions inside dependence clusters playing a more important role than others are more fault-prone. Finally, based on two groups of functions (i.e., functions inside and outside dependence clusters), the investigation considers a segmented fault-proneness prediction model. Our experimental results, based on five well-known open-source systems, show that (1) larger dependence clusters tend to be more fault-prone; (2) the proportion of faulty functions inside dependence clusters is significantly larger than the proportion of faulty functions outside dependence clusters; (3) functions inside dependence clusters that play more important roles are more fault-prone; (4) our segmented prediction model can significantly improve the effectiveness of effort-aware fault-proneness prediction in both ranking and classification scenarios. These findings help us better understand how dependence clusters influence software quality.
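The first of the four analyses above (do functions inside dependence clusters have a higher fault proportion than functions outside?) can be mocked up quickly if one operationalizes a dependence cluster as a strongly connected component of the function-level dependence graph. That operationalization, the helper name, and the toy data below are assumptions for illustration; the paper's own cluster identification may differ.

```python
import networkx as nx

def dependence_cluster_fault_stats(dep_edges, faulty):
    """Compare fault proportions inside vs. outside dependence clusters.

    Functions that mutually depend on each other (a strongly connected
    component of size > 1 in the directed dependence graph) are treated as
    one dependence cluster.
    """
    g = nx.DiGraph(dep_edges)
    in_cluster = set()
    for comp in nx.strongly_connected_components(g):
        if len(comp) > 1:
            in_cluster |= comp
    outside = set(g.nodes) - in_cluster
    rate = lambda fns: sum(f in faulty for f in fns) / max(len(fns), 1)
    return rate(in_cluster), rate(outside)

edges = [("a", "b"), ("b", "c"), ("c", "a"),      # a 3-function cluster
         ("d", "a"), ("e", "d"), ("f", "e")]      # a chain outside any cluster
print(dependence_cluster_fault_stats(edges, faulty={"a", "c", "f"}))
```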
b0f16acfa4efce9c24100ec330b82fb8a28feeec
Reinforcement Learning in Continuous State and Action Spaces
Many traditional reinforcement-learning algorithms have been designed for problems with small finite state and action spaces. Learning in such discrete problems can be difficult, due to noise and delayed reinforcements. However, many real-world problems have continuous state or action spaces, which can make learning a good decision policy even more involved. In this chapter we discuss how to automatically find good decision policies in continuous domains. Because analytically computing a good policy from a continuous model can be infeasible, in this chapter we mainly focus on methods that explicitly update a representation of a value function, a policy, or both. We discuss considerations in choosing an appropriate representation for these functions and discuss gradient-based and gradient-free ways to update the parameters. We show how to apply these methods to reinforcement-learning problems and discuss many specific algorithms. Amongst others, we cover gradient-based temporal-difference learning, evolutionary strategies, policy-gradient algorithms and (natural) actor-critic methods. We discuss the advantages of different approaches and compare the performance of a state-of-the-art actor-critic method and a state-of-the-art evolutionary strategy empirically.
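The simplest family covered above, gradient-based temporal-difference learning with a parametric value function, fits in a few lines. The sketch below is a generic semi-gradient TD(0) learner with radial-basis features on a toy continuous task; the task, feature widths, and step sizes are illustrative choices, not anything specific from the chapter.

```python
import numpy as np

def rbf_features(s, centers, width=0.1):
    """Radial-basis-function features for a scalar state in [0, 1]."""
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

def td0_linear(n_episodes=500, n_features=10, alpha=0.05, gamma=0.95, seed=0):
    """Semi-gradient TD(0) with linear value-function approximation.

    The toy task is a random walk on [0, 1] that drifts right and pays a
    reward of 1 on reaching the right edge -- just enough structure to show
    how the parameters of V(s) = w . phi(s) are updated online.
    """
    rng = np.random.default_rng(seed)
    centers = np.linspace(0, 1, n_features)
    w = np.zeros(n_features)
    for _ in range(n_episodes):
        s = 0.5
        while True:
            s_next = s + rng.normal(0.02, 0.1)           # drifting random walk
            done = s_next >= 1.0 or s_next <= 0.0
            reward = 1.0 if s_next >= 1.0 else 0.0
            phi = rbf_features(s, centers)
            v_next = 0.0 if done else rbf_features(s_next, centers) @ w
            td_error = reward + gamma * v_next - phi @ w
            w += alpha * td_error * phi                   # semi-gradient TD(0) update
            if done:
                break
            s = s_next
    return w

w = td0_linear()
centers = np.linspace(0, 1, 10)
print("V(0.1) =", rbf_features(0.1, centers) @ w, " V(0.9) =", rbf_features(0.9, centers) @ w)
```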
2bf8acb0bd8b0fde644b91c5dd4bef2e8119e61e
Decision Support based on Bio-PEPA Modeling and Decision Tree Induction: A New Approach, Applied to a Tuberculosis Case Study
The problem of selecting determinant features generating appropriate model structure is a challenge in epidemiological modelling. Disease spread is highly complex, and experts develop their understanding of its dynamics over years. There is an increasing variety and volume of epidemiological data which adds to the potential confusion. The authors propose here to make use of that data to better understand disease systems. Decision tree techniques have been extensively used to extract pertinent information and improve decision making. In this paper, the authors propose an innovative structured approach combining decision tree induction with Bio-PEPA computational modelling, and illustrate the approach through application to tuberculosis. By using decision tree induction, the enhanced Bio-PEPA model shows considerable improvement over the initial model with regard to the simulated results matching observed data. The key finding is that the developer can express a realistic predictive model using relevant features; treating this approach as decision support thus empowers the epidemiologist in policy decision making. Keywords: Bio-PEPA Modelling, Data Mining, Decision Support, Decision Tree Induction, Epidemiology, Modelling and Simulation, Optimisation, Refinement, Tuberculosis
e3ab7a95af2c0efc92f146f8667ff95e46da84f1
On Optimizing VLC Networks for Downlink Multi-User Transmission: A Survey
The evolving explosion in high data rate services and applications will soon require the use of the untapped, abundant, unregulated spectrum of visible light for communications to adequately meet the demands of fifth-generation (5G) mobile technologies. The radio-frequency (RF) spectrum is proving too scarce to cover the escalation in data-rate services. Visible light communication (VLC) has emerged as a great potential solution, either as a replacement for, or a complement to, existing RF networks, to support the projected traffic demands. Despite the prolific advantages of VLC networks, VLC faces many challenges that must be resolved in the near future to achieve full standardization and to be integrated into future wireless systems. Here, we review the new, emerging research in the field of VLC networks and lay out the challenges, technological solutions, and future work predictions. Specifically, we first review the VLC channel capacity derivation and discuss the performance metrics and the associated variables; the optimization of VLC networks is also discussed, including resource and power allocation techniques, user-to-access-point (AP) association and AP-to-clustered-users association, AP coordination techniques, non-orthogonal multiple access (NOMA) VLC networks, simultaneous energy harvesting and information transmission using visible light, and the security issue in VLC networks. Finally, we propose several open research problems to optimize the various VLC networks by maximizing either the sum rate, fairness, energy efficiency, secrecy rate, or harvested energy.
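A common starting point for the channel-capacity and performance-metric discussion is the Lambertian line-of-sight DC channel gain of a single LED-to-photodiode link. The helper below computes that standard expression; the detector area, filter/concentrator gains, and field-of-view defaults are illustrative assumptions, not values taken from the survey.

```python
import numpy as np

def vlc_los_gain(d, phi, psi, semi_angle_deg=60.0, area=1e-4,
                 t_filter=1.0, gain_conc=1.0, fov_deg=70.0):
    """Lambertian line-of-sight DC channel gain of a VLC link.

    d: LED-to-photodiode distance (m); phi: irradiance angle at the LED;
    psi: incidence angle at the receiver (both in radians).
    """
    if psi > np.radians(fov_deg):
        return 0.0                                    # outside the receiver FOV
    m = -np.log(2) / np.log(np.cos(np.radians(semi_angle_deg)))   # Lambertian order
    return ((m + 1) * area / (2 * np.pi * d ** 2)
            * np.cos(phi) ** m * t_filter * gain_conc * np.cos(psi))

print(f"H(0) = {vlc_los_gain(d=2.5, phi=np.radians(20), psi=np.radians(20)):.3e}")
```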
b1b5646683557b38468344dff09ae921a5a4b345
Comparison of CoAP and MQTT Performance Over Capillary Radios
The IoT application-layer protocols, namely the Constrained Application Protocol (CoAP) and Message Queue Telemetry Transport (MQTT), have dependencies on the transport layer. The choice of transport, the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP), in turn has an impact on Internet of Things (IoT) application-level performance, especially over a wireless medium. The motivation of this work is to look at the impact of the protocol stack on performance over two different wireless medium realizations, namely Bluetooth Low Energy and Wi-Fi. The use case studied is infrequent small reports sent from the sensor device to a central cloud storage over a last-mile radio access link. We find that while CoAP/UDP-based transport performs consistently better in terms of both latency and power consumption over both links, MQTT/TCP may also work when the use-case requirements allow for longer latency, in exchange providing better reliability. All in all, the full connectivity stack needs to be considered when designing an IoT deployment.
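To make the MQTT/TCP path concrete, here is a minimal sketch of the kind of infrequent, small report the study describes, published with QoS 1 using the paho-mqtt client. The broker host, topic, and payload fields are placeholders; this illustrates the use case, not the paper's test harness.

```python
import json
import time
import paho.mqtt.client as mqtt

# Placeholder broker and topic; in the study these would be the cloud endpoint.
BROKER, TOPIC = "broker.example.com", "sensors/node42/report"

client = mqtt.Client()        # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion argument
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

# Infrequent small report: a few bytes of telemetry every few minutes.
report = {"ts": int(time.time()), "temp_c": 21.4, "batt_mv": 2987}
info = client.publish(TOPIC, json.dumps(report), qos=1)   # QoS 1: at-least-once over TCP
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```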
cd5b7d8fb4f8dc3872e773ec24460c9020da91ed
Design of a compact high power phased array for 5G FD-MIMO system at 29 GHz
This paper presents a new design concept for a beam-steerable, high-gain phased array antenna based on the WR28 waveguide at 29 GHz for fifth-generation (5G) full-dimension multiple-input multiple-output (FD-MIMO) systems. The 8×8 planar phased array is fed by a three-dimensional beamformer to obtain volumetric beam scanning from −60 to +60 degrees in both the azimuth and elevation directions. The beamforming network (BFN) is designed using 16 sets of 8×8 Butler matrix beamformers to obtain 64 beam states, which control the horizontal and vertical angles. This is a new concept for designing a waveguide-based, high-power, three-dimensional beamformer for volumetric multibeam operation in the Ka band for 5G applications. The maximum gain of the phased array is 28.5 dBi over the 28.9-29.4 GHz frequency band.
b4cbe50b8988e7c9c1a7b982bfb6c708bb3ce3e8
Development and evaluation of low cost game-based balance rehabilitation tool using the microsoft kinect sensor
The use of the commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. wrist twisting a Wiimote instead of a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low cost depth sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury.
6a2311d02aea97f7fe4e78c8bd2a53091364dc3b
Aesthetics and Entropy III . Aesthetic measures 2
We examined a series of real-world, pictorial photographs with varying characteristics, along with their modification by noise addition and unsharp masking. As response metrics we used three different versions of the aesthetic measure originally proposed by Birkhoff. The first aesthetic measure, which has been used in other studies, and which we used in our previous work as well, showed a preference for the least complex of the images. It provided no justification for noise addition, but did reveal enhancement on unsharp masking. The optimum level of unsharp masking varied with the image, but was predictable from the individual image's GIF compressibility. We expect this result to be useful for guiding the processing of pictorial photographic imagery. The second aesthetic measure, that of informational aesthetics based on entropy alone, failed to provide useful discrimination among the images or the conditions of their modification. A third measure, derived from the concepts of entropy maximization, as well as the hypothesized preference of observers for "simpler", i.e., more compressible, images, yielded qualitatively the same results as the more traditional version of the measure. Differences among the photographs and the conditions of their modification were, however, more clearly defined with this metric.
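For reference, Birkhoff's classical aesthetic measure, which the three variants above adapt, is the ratio of order to complexity; reading the image's GIF-compressed size as the complexity term is consistent with the abstract but is an assumption on our part.

```latex
% Birkhoff's aesthetic measure: order O relative to complexity C.
M = \frac{O}{C}
```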
97aef787d63aef75e6f8055cdac3771f8649f21a
A Syllable-based Technique for Word Embeddings of Korean Words
Word embedding has become a fundamental component of many NLP tasks such as named entity recognition and machine translation. However, popular models that learn such embeddings are unaware of the morphology of words, so they are not directly applicable to highly agglutinative languages such as Korean. We propose a syllable-based learning model for Korean using a convolutional neural network, in which word representations are composed of trained syllable vectors. Our model successfully produces morphologically meaningful representations of Korean words compared to the original Skip-gram embeddings. The results also show that it is quite robust to the Out-of-Vocabulary problem.
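A minimal sketch of the composition idea, word vectors built from trained syllable vectors via a 1-D convolution, is shown below. The dimensions, pooling choice, and syllable vocabulary size are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SyllableWordEncoder(nn.Module):
    """Compose a word vector from syllable vectors with a 1-D CNN.
    Sizes and pooling are illustrative assumptions, not the paper's exact setup."""
    def __init__(self, n_syllables=12000, syl_dim=64, word_dim=128, kernel=3):
        super().__init__()
        self.syllable_emb = nn.Embedding(n_syllables, syl_dim)
        self.conv = nn.Conv1d(syl_dim, word_dim, kernel_size=kernel, padding=1)

    def forward(self, syllable_ids):               # (batch, num_syllables)
        x = self.syllable_emb(syllable_ids)        # (batch, num_syllables, syl_dim)
        x = self.conv(x.transpose(1, 2))           # (batch, word_dim, num_syllables)
        return x.max(dim=2).values                 # max-over-time pooling -> word vector

# Usage: syllable indices for one word (the indices here are placeholders).
encoder = SyllableWordEncoder()
word_vec = encoder(torch.tensor([[3, 17, 256]]))   # e.g. a three-syllable word
print(word_vec.shape)                              # torch.Size([1, 128])
```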
d9d8aafe6856025f2c2b7c70f5e640e03b6bcd46
Anti-phishing based on automated individual white-list
In phishing and pharming, users can easily be tricked into submitting their usernames/passwords to fraudulent web sites whose appearance looks similar to that of the genuine ones. The traditional blacklist approach to anti-phishing is only partially effective because its list of global phishing sites is never complete. In this paper, we present a novel anti-phishing approach named Automated Individual White-List (AIWL). AIWL automatically maintains a white-list of all the Login User Interfaces (LUIs) of web sites familiar to a user. Once the user tries to submit confidential information to an LUI that is not in the white-list, AIWL alerts the user to the possible attack. AIWL can also efficiently defend against pharming attacks, because AIWL alerts the user when the legitimate IP is maliciously changed; the legitimate IP addresses, recorded as part of each LUI, are stored in the white-list, and our experiments show that popular web sites' IP addresses are basically stable. Furthermore, we use a Naïve Bayesian classifier to automatically maintain the white-list in AIWL. Finally, we conclude through experiments that AIWL is an efficient automated tool specializing in detecting phishing and pharming.
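A minimal sketch of the white-list check described above: before a credential submission, the LUI's hostname and its currently resolved IP are compared against the stored entries, covering both the phishing (unknown site) and pharming (IP mismatch) cases. The stored fields and return values are assumptions based on the abstract, not the authors' implementation.

```python
import socket
from urllib.parse import urlparse

# Hypothetical per-user white-list: hostname -> known-good IP addresses for its LUI.
whitelist = {
    "www.examplebank.com": {"203.0.113.10", "203.0.113.11"},
}

def check_before_submit(form_url: str) -> str:
    """Return 'ok', 'unknown-site', or 'ip-mismatch' for a credential submission."""
    host = urlparse(form_url).hostname
    if host not in whitelist:
        return "unknown-site"              # LUI not in the white-list: warn the user (phishing)
    resolved = socket.gethostbyname(host)  # live DNS lookup of the visited host
    if resolved not in whitelist[host]:
        return "ip-mismatch"               # legitimate name resolving to a wrong IP (pharming)
    return "ok"

# e.g. check_before_submit("https://www.examplebank.com/login") before auto-filling a password
```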
34feeafb5ff7757b67cf5c46da0869ffb9655310
Perpetual environmentally powered sensor networks
Environmental energy is an attractive power source for low power wireless sensor networks. We present Prometheus, a system that intelligently manages energy transfer for perpetual operation without human intervention or servicing. Combining positive attributes of different energy storage elements and leveraging the intelligence of the microprocessor, we introduce an efficient multi-stage energy transfer system that reduces the common limitations of single energy storage systems to achieve near perpetual operation. We present our design choices, tradeoffs, circuit evaluations, performance analysis, and models. We discuss the relationships between system components and identify optimal hardware choices to meet an application's needs. Finally we present our implementation of a real system that uses solar energy to power Berkeley's Telos Mote. Our analysis predicts the system will operate for 43 years under 1% load, 4 years under 10% load, and 1 year under 100% load. Our implementation uses a two stage storage system consisting of supercapacitors (primary buffer) and a lithium rechargeable battery (secondary buffer). The mote has full knowledge of power levels and intelligently manages energy transfer to maximize lifetime.
3689220c58f89e9e19cc0df51c0a573884486708
AmbiMax: Autonomous Energy Harvesting Platform for Multi-Supply Wireless Sensor Nodes
AmbiMax is an energy harvesting circuit and a supercapacitor-based energy storage system for wireless sensor nodes (WSN). Previous WSNs attempt to harvest energy from various sources, and some also use supercapacitors instead of batteries to address the battery aging problem. However, they either waste much available energy due to impedance mismatch, or they require active digital control that incurs overhead, or they work with only one specific type of source. AmbiMax addresses these problems by first performing maximum power point tracking (MPPT) autonomously, and then charging supercapacitors at maximum efficiency. Furthermore, AmbiMax is modular and enables composition of multiple energy harvesting sources including solar, wind, thermal, and vibration, each with a different optimal size. Experimental results on a real WSN platform, Eco, show that AmbiMax successfully manages multiple power sources simultaneously and autonomously at several times the efficiency of the current state-of-the-art for WSNs.
4833d690f7e0a4020ef48c1a537dbb5b8b9b04c6
Integrated photovoltaic maximum power point tracking converter
A low-power low-cost highly efficient maximum power point tracker (MPPT) to be integrated into a photovoltaic (PV) panel is proposed. This can result in a 25% energy enhancement compared to a standard photovoltaic panel, while performing functions like battery voltage regulation and matching of the PV array with the load. Instead of using an externally connected MPPT, it is proposed to use an integrated MPPT converter as part of the PV panel. It is proposed that this integrated MPPT uses a simple controller in order to be cost effective. Furthermore, the converter has to be very efficient, in order to transfer more energy to the load than a directly coupled system. This is achieved by using a simple soft-switched topology. A much higher conversion efficiency at lower cost will then result, making the MPPT an affordable solution for small PV energy systems.
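The abstract above specifies a simple, low-cost MPPT controller but not the tracking algorithm; the sketch below shows the widely used perturb-and-observe loop as a stand-in for illustration only. `read_panel` and `set_duty` are hypothetical hardware hooks, not part of the paper.

```python
def perturb_and_observe(read_panel, set_duty, duty=0.5, step=0.01):
    """Generic perturb-and-observe MPPT loop (illustrative; not the paper's controller).

    read_panel() -> (voltage, current) measured at the PV panel (hypothetical hook).
    set_duty(d)  -> applies duty cycle d to the converter (hypothetical hook).
    """
    v, i = read_panel()
    prev_power = v * i
    direction = +1
    while True:
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        v, i = read_panel()
        power = v * i
        if power < prev_power:       # the perturbation moved away from the maximum power point
            direction = -direction   # so reverse the perturbation direction
        prev_power = power
```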
61c1d66defb225eda47462d1bc393906772c9196
Hardware design experiences in ZebraNet
The enormous potential for wireless sensor networks to make a positive impact on our society has spawned a great deal of research on the topic, and this research is now producing environment-ready systems. Current technology limits coupled with widely-varying application requirements lead to a diversity of hardware platforms for different portions of the design space. In addition, the unique energy and reliability constraints of a system that must function for months at a time without human intervention mean that demands on sensor network hardware are different from the demands on standard integrated circuits. This paper describes our experiences designing sensor nodes and low level software to control them. In the ZebraNet system we use GPS technology to record fine-grained position data in order to track long term animal migrations [14]. The ZebraNet hardware is composed of a 16-bit TI microcontroller, 4 Mbits of off-chip flash memory, a 900 MHz radio, and a low-power GPS chip. In this paper, we discuss our techniques for devising efficient power supplies for sensor networks, methods of managing the energy consumption of the nodes, and methods of managing the peripheral devices including the radio, flash, and sensors. We conclude by evaluating the design of the ZebraNet nodes and discussing how it can be improved. Our lessons learned in developing this hardware can be useful both in designing future sensor nodes and in using them in real systems.
576803b930ef44b79028048569e7ea321c1cecb0
Adaptive Computer-Based Training Increases on the Job Performance of X-Ray Screeners
Due to severe terrorist attacks in recent years, aviation security issues have moved into the focus of politicians as well as the general public. Effective screening of passenger bags using state-of-the-art X-ray screening systems is essential to prevent terrorist attacks. The performance of the screening process depends critically on the security personnel, because they decide whether bags are OK or whether they might contain a prohibited item. Screening X-ray images of passenger bags for dangerous and prohibited items effectively and efficiently is a demanding object recognition task. Effectiveness of computer-based training (CBT) on X-ray detection performance was assessed using computer-based tests and on the job performance measures using threat image projection (TIP). It was found that adaptive CBT is a powerful tool to increase detection performance and efficiency of screeners in X-ray image interpretation. Moreover, the results of training could be generalized to the real life situation as shown in the increased detection performance in TIP not only for trained items, but also for new (untrained) items. These results illustrate that CBT is a very useful tool to increase airport security from a human factors perspective.
6c1ccc66420136488cf34c1ffe707afefd8b00b9
Rotation-discriminating template matching based on Fourier coefficients of radial projections with robustness to scaling and partial occlusion
We consider brightness/contrast-invariant and rotation-discriminating template matching that searches an image to be analyzed, A, for a query image Q. We propose to use the complex coefficients of the discrete Fourier transform of the radial projections to compute new rotation-invariant local features. These coefficients can be efficiently obtained via the FFT. We classify templates as "stable" or "unstable" and argue that any local feature-based template matching may fail to find unstable templates. We extract several stable sub-templates of Q and find them in A by comparing the features. The matchings of the sub-templates are combined using the Hough transform. As the features of A are computed only once, the algorithm can quickly find many different sub-templates in A, and it is suitable for: finding many query images in A; multi-scale searching; and partial-occlusion-robust template matching.
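A minimal sketch of the local feature described above, the magnitudes of the DFT of radial projections around a pixel, is given below; rotating the image cyclically shifts the projections, and the DFT magnitude is invariant to such shifts. The number of radial lines and the sampling radius are assumptions for the sketch.

```python
import numpy as np

def radial_projection_features(img, cx, cy, radius=16, n_angles=36):
    """Rotation-invariant local features: |DFT| of the radial projections
    around (cx, cy). Sampling parameters are illustrative assumptions."""
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.arange(1, radius + 1)
    projections = np.empty(n_angles)
    for k, theta in enumerate(angles):
        xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        projections[k] = img[ys, xs].mean()   # mean brightness along one radial line
    # A rotation of the image cyclically shifts `projections`; the DFT magnitude is
    # invariant to cyclic shifts, while the phase still carries the rotation angle.
    spectrum = np.fft.rfft(projections - projections.mean())
    return np.abs(spectrum)
```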
3370784dacf9df1e54384190dad40b817520ba3a
Haswell: The Fourth-Generation Intel Core Processor
Haswell, Intel's fourth-generation core processor architecture, delivers a range of client parts, a converged core for the client and server, and technologies used across many products. It uses an optimized version of Intel 22-nm process technology. Haswell provides enhancements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set.
146da74cd886acbd4a593a55f0caacefa99714a6
Working model of Self-driving car using Convolutional Neural Network, Raspberry Pi and Arduino
The evolution of Artificial Intelligence has served as a catalyst in the field of technology. We can now build things that were once only imagined, one of which is the self-driving car. A passenger can work or even sleep in the car and, without touching the steering wheel or accelerator, still reach the target destination safely. This paper proposes a working model of a self-driving car capable of driving from one location to another over different types of tracks, such as curved tracks, straight tracks, and straight tracks followed by curves. A camera module mounted on top of the car, together with a Raspberry Pi, sends real-world images to a Convolutional Neural Network, which predicts one of four directions: right, left, forward, or stop. A corresponding signal is then sent from the Arduino to the controller of the remote-controlled car, and the car moves in the desired direction without any human intervention.
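As an illustration of the camera-to-direction mapping described above, here is a tiny CNN that classifies a frame into right, left, forward, or stop. The layer sizes and the 64×64 input are assumptions for the sketch, not the authors' network.

```python
import torch
import torch.nn as nn

class DirectionNet(nn.Module):
    """Tiny CNN mapping a camera frame to {right, left, forward, stop}.
    Layer sizes are illustrative assumptions, not the authors' architecture."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, frames):                     # (batch, 3, H, W)
        x = self.features(frames).flatten(1)       # (batch, 32)
        return self.classifier(x)                  # logits over the four directions

# Usage: one 64x64 RGB frame from the Raspberry Pi camera (frame size is a placeholder).
logits = DirectionNet()(torch.randn(1, 3, 64, 64))
direction = ["right", "left", "forward", "stop"][logits.argmax(dim=1).item()]
```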
6fd62c67b281956c3f67eb53fafaea83b2f0b4fb
Taking perspective into account in a communicative task
Previous neuroimaging studies of spatial perspective taking have tended not to activate the brain's mentalising network. We predicted that a task that requires the use of perspective taking in a communicative context would lead to the activation of mentalising regions. In the current task, participants followed auditory instructions to move objects in a set of shelves. A 2x2 factorial design was employed. In the Director factor, two directors (one female and one male) either stood behind or next to the shelves, or were replaced by symbolic cues. In the Object factor, participants needed to use the cues (position of the directors or symbolic cues) to select one of three possible objects, or only one object could be selected. Mere presence of the Directors was associated with activity in the superior dorsal medial prefrontal cortex (MPFC) and the superior/middle temporal sulci, extending into the extrastriate body area and the posterior superior temporal sulcus (pSTS), regions previously found to be responsive to human bodies and faces respectively. The interaction between the Director and Object factors, which requires participants to take into account the perspective of the director, led to additional recruitment of the superior dorsal MPFC, a region activated when thinking about dissimilar others' mental states, and the middle temporal gyri, extending into the left temporal pole. Our results show that using perspective taking in a communicative context, which requires participants to think not only about what the other person sees but also about his/her intentions, leads to the recruitment of superior dorsal MPFC and parts of the social brain network.
30b1447fbfdbd887a9c896a2b0d80177fc17c94e
3-Axis Magnetic Sensor Array System for Tracking Magnet's Position and Orientation
In medical diagnoses and treatments, e.g., endoscopy and dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we present a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as the excitation source, so it requires no connection wires or power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around it, whose intensity is related to the magnet's position and orientation. With magnetic sensors, the magnetic intensities at some pre-determined spatial points can be detected, and the magnet's position and orientation parameters can be calculated by an appropriate algorithm. Here, we propose a real-time tracking system built from Honeywell 3-axis magnetic sensors, HMC1053, together with a computer sampling circuit. The results show that satisfactory tracking accuracy (average localization error of 3.3 mm) can be achieved using a sensor array with a sufficient number of 3-axis magnetic sensors.
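The localization step can be pictured with the magnetic dipole model: each 3-axis sensor measures the dipole field, and the magnet's position and moment are recovered by least-squares fitting. The sensor geometry, the noise-free synthetic measurements, and the use of scipy's least_squares are assumptions for this sketch; the abstract does not state the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi), in T*m/A

def dipole_field(sensor_pos, magnet_pos, moment):
    """Magnetic flux density of a point dipole at `magnet_pos` with moment `moment`."""
    r = sensor_pos - magnet_pos
    d = np.linalg.norm(r, axis=-1, keepdims=True)
    r_hat = r / d
    return MU0_4PI * (3 * r_hat * np.sum(moment * r_hat, axis=-1, keepdims=True) - moment) / d**3

# Hypothetical 4x4 grid of 3-axis sensors on the z = 0 plane (placeholder geometry).
sensors = np.array([[x, y, 0.0] for x in np.linspace(0, 0.15, 4)
                                for y in np.linspace(0, 0.15, 4)])

def residuals(params, measured):
    pos, moment = params[:3], params[3:]
    return (dipole_field(sensors, pos, moment) - measured).ravel()

# `measured` would come from the HMC1053 array; here it is synthesized for the demo.
true_pos, true_m = np.array([0.07, 0.05, 0.06]), np.array([0.0, 0.0, 0.5])
measured = dipole_field(sensors, true_pos, true_m)
fit = least_squares(residuals, x0=np.r_[0.05, 0.05, 0.05, 0.0, 0.0, 0.3], args=(measured,))
print("estimated position (m):", fit.x[:3])
```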
b551feaa696da1ba44c31e081555e50358c6eca9
A Polymer-Based Capacitive Sensing Array for Normal and Shear Force Measurement
In this work, we present the development of a polymer-based capacitive sensing array. The proposed device is capable of measuring normal and shear forces, and can be easily realized by using micromachining techniques and flexible printed circuit board (FPCB) technologies. The sensing array consists of a polydimethylsiloxane (PDMS) structure and a FPCB. Each shear sensing element comprises four capacitive sensing cells arranged in a 2 × 2 array, and each capacitive sensing cell has two sensing electrodes and a common floating electrode. The sensing electrodes as well as the metal interconnect for signal scanning are implemented on the FPCB, while the floating electrodes are patterned on the PDMS structure. This design can effectively reduce the complexity of the capacitive structures, and thus makes the device highly manufacturable. The characteristics of the devices with different dimensions were measured and discussed. A scanning circuit was also designed and implemented. The measured maximum sensitivity is 1.67%/mN. The minimum resolvable force is 26 mN measured by the scanning circuit. The capacitance distributions induced by normal and shear forces were also successfully captured by the sensing array.
bb17e8858b0d3a5eba2bb91f45f4443d3e10b7cd
The Balanced Scorecard: Translating Strategy Into Action