Dataset schema: _id (string, length 40), title (string, 8–300 chars), text (string, 0–10k chars).
db3fed1a69977d9b0cf2a3a44b05854940f80430
Improved Text Extraction from PDF Documents for Large-Scale Natural Language Processing
The inability to reliably extract text from arbitrary documents is often an obstacle for large-scale NLP based on resources crawled from the Web. One of the largest problems in the conversion of PDF documents is the detection of the boundaries of common textual units such as paragraphs, sentences and words. PDF is a file format optimized for printing that encapsulates a complete description of the layout of a document including text, fonts, graphics and so on. This paper describes a tool for extracting text from arbitrary PDF files in support of large-scale data-driven natural language processing. Our approach combines the benefits of several existing solutions for the conversion of PDF documents to plain text and adds a language-independent post-processing procedure that cleans the output for further linguistic processing. In particular, we use the PDF-rendering libraries pdfXtk, Apache Tika and Poppler in various configurations. From the output of these tools we recover proper boundaries using on-the-fly language models and language-independent extraction heuristics. In our research, we looked especially at publications from the European Union, which constitute a valuable multilingual resource, for example, for training statistical machine translation models. We use our tool for the conversion of a large multilingual database crawled from the EU bookshop with the aim of building parallel corpora. Our experiments show that our conversion software is capable of fixing various common issues, leading to cleaner data sets in the end.
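As a rough, hedged sketch of the kind of language-independent cleanup this abstract describes (not the authors' actual implementation), the snippet below repairs hyphenated line breaks using an on-the-fly word-frequency model built from the document itself; all function names are illustrative:

```python
import re
from collections import Counter

def build_frequency_model(text):
    """An 'on-the-fly' language model: word counts from the document itself."""
    return Counter(re.findall(r"[a-zA-Z]+", text.lower()))

def merge_hyphenated(lines, freq):
    """Repair 'exam-' / 'ple' line breaks: drop the hyphen when the merged
    word is attested elsewhere in the document."""
    out = []
    for i, line in enumerate(lines):
        line = line.rstrip()
        if line.endswith("-") and i + 1 < len(lines):
            head_words = line[:-1].split()
            next_words = lines[i + 1].split()
            if head_words and next_words:
                merged = (head_words[-1] + next_words[0]).lower()
                if freq[merged] > 0:
                    line = line[:-1] + next_words[0]
                    lines[i + 1] = " ".join(next_words[1:])
        out.append(line)
    return out
```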
77edc5099fc2df0efe644a0ea63a936ac2ac0940
Depeche Mood: a Lexicon for Emotion Analysis from Crowd Annotated News
While many lexica annotated with word polarity are available for sentiment analysis, very few tackle the harder task of emotion analysis, and those that do are usually quite limited in coverage. In this paper, we present a novel approach for extracting, in a totally automated way, a high-coverage and high-precision lexicon of roughly 37 thousand terms annotated with emotion scores, called DepecheMood. Our approach exploits in an original way 'crowd-sourced' affective annotation implicitly provided by readers of news articles from rappler.com. By providing new state-of-the-art performance in unsupervised settings for regression and classification tasks, even using a naïve approach, our experiments show the beneficial impact of harvesting social media data for affective lexicon building.
7748514058675841f46836a9bc3b6aa8ab76c9ca
Response and Habituation of the Human Amygdala during Visual Processing of Facial Expression
We measured amygdala activity in human volunteers during rapid visual presentations of fearful, happy, and neutral faces using functional magnetic resonance imaging (fMRI). The first experiment involved a fixed order of conditions both within and across runs, while the second one used a fully counterbalanced order in addition to a low level baseline of simple visual stimuli. In both experiments, the amygdala was preferentially activated in response to fearful versus neutral faces. In the counterbalanced experiment, the amygdala also responded preferentially to happy versus neutral faces, suggesting a possible generalized response to emotionally valenced stimuli. Rapid habituation effects were prominent in both experiments. Thus, the human amygdala responds preferentially to emotionally valenced faces and rapidly habituates to them.
8de30f8ec37ad31525f753ab2ccbace638291bad
Deception through telling the truth?!: Experimental evidence from individuals and teams
Informational asymmetries abound in economic decision making and often provide an incentive for deception through telling a lie or misrepresenting information. In this paper I use a cheap-talk sender-receiver experiment to show that telling the truth should be classified as deception too if the sender chooses the true message with the expectation that the receiver will not follow the sender’s (true) message. The experimental data reveal a large degree of ‘sophisticated’ deception through telling the truth. The robustness of my broader definition of deception is confirmed in an experimental treatment where teams make decisions. JEL-classification: C72, C91, D82
1fbc1cc8a85b50c15742672033f51a2f57f86692
Metacognitive Beliefs About Procrastination : Development and Concurrent Validity of a Self-Report Questionnaire
This article describes the development of a questionnaire on metacognitive beliefs about procrastination. In Study 1 we performed a principal axis factor analysis that suggested a two-factor solution for the data obtained from the preliminary questionnaire. The factors identified were named positive and negative metacognitive beliefs about procrastination. The factor analysis reduced the questionnaire from 22 to 16 items, with each factor consisting of 8 items. In Study 2 we performed a confirmatory factor analysis that provided support for the two-factor solution suggested by the exploratory factor analysis. Both factors had adequate internal consistency. Concurrent validity was partially established through correlation analyses. These showed that positive metacognitive beliefs about procrastination were positively correlated with decisional procrastination, and that negative metacognitive beliefs about procrastination were positively correlated with both decisional and behavioral procrastination. The Metacognitive Beliefs About Procrastination Questionnaire may aid future research into procrastination and facilitate clinical assessment and case formulation.
672c4db75dd626d6b8f152ec8ddfe0171ffe5f8b
One-Class Kernel Spectral Regression for Outlier Detection
The paper introduces a new efficient nonlinear one-class classifier formulated as the optimisation of a Rayleigh quotient criterion. The method, operating in a reproducing kernel Hilbert subspace, minimises the scatter of the target distribution along an optimal projection direction while at the same time keeping projections of positive observations distant from the mean of the negative class. We provide a graph embedding view of the problem which can then be solved efficiently using the spectral regression approach. In this sense, unlike previous similar methods which often require costly eigen-computations of dense matrices, the proposed approach casts the problem under consideration into a regression framework which is computationally more efficient. In particular, it is shown that the dominant complexity of the proposed method is the complexity of computing the kernel matrix. Additional appealing characteristics of the proposed one-class classifier are: (1) the ability to be trained in an incremental fashion (allowing for application in streaming data scenarios while also reducing the computational complexity in a non-streaming operation mode); (2) being unsupervised, while providing the option of refining the solution using negative training examples, when available; and (3) the use of the kernel trick, which facilitates a nonlinear mapping of the data into a high-dimensional feature space to seek better solutions. Extensive experiments conducted on several datasets verify the merits of the proposed approach in comparison with other alternatives.
f05f698f418478575b8a1be34b5020e08f9fbba2
Very greedy crossover in a genetic algorithm for the traveling salesman problem
In the traveling salesman problem, we are given a set of cities and the distances between them, and we seek a shortest tour that visits each city exactly once and returns to the starting city. Many researchers have described genetic algorithms for this problem, and they have often focused on the crossover operator, which builds offspring tours by combining two parental tours. Very greedy crossover extends several of these operators; as it builds a tour, it always appends the shortest parental edge to a city not yet visited, if there is such an edge. A steady-state genetic algorithm using this operator, mutation by inversion, and rank-based probabilities for both selection and deletion shows good results on a suite of five test problems.
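The crossover itself is concrete enough to sketch. Below is a minimal Python rendering of the operator as described (always append the shortest parental edge to an unvisited city, if one exists); the fallback for the no-usable-edge case is an assumption, since the abstract does not specify it:

```python
import random

def very_greedy_crossover(parent1, parent2, dist):
    """Build an offspring tour: from the current city, take the shortest
    parental edge leading to an unvisited city; if no parental edge is
    usable, fall back to a random unvisited city (an assumption here)."""
    n = len(parent1)

    def neighbors(tour, city):
        i = tour.index(city)
        return {tour[(i - 1) % n], tour[(i + 1) % n]}

    current = parent1[0]
    tour, visited = [current], {current}
    while len(tour) < n:
        cands = (neighbors(parent1, current) | neighbors(parent2, current)) - visited
        if cands:
            nxt = min(cands, key=lambda c: dist[current][c])
        else:
            nxt = random.choice([c for c in parent1 if c not in visited])
        tour.append(nxt)
        visited.add(nxt)
        current = nxt
    return tour
```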
867e2293e9780b729705b4ba48d6b11e3778e999
Phishing detection based Associative Classification data mining
Website phishing is considered one of the crucial security challenges for the online community due to the massive numbers of online transactions performed on a daily basis. Website phishing can be described as mimicking a trusted website to obtain sensitive information from online users such as usernames and passwords. Black lists, white lists and the utilisation of search methods are examples of solutions to minimise the risk of this problem. One intelligent approach based on data mining, called Associative Classification (AC), is a potential solution that may effectively detect phishing websites with high accuracy. According to experimental studies, AC often extracts classifiers containing simple "If-Then" rules with a high degree of predictive accuracy. In this paper, we investigate the problem of website phishing using a developed AC method called Multi-label Classifier based Associative Classification (MCAC) to assess its applicability to the phishing problem. We also want to identify features that distinguish phishing websites from legitimate ones. In addition, we survey intelligent approaches used to handle the phishing problem. Experimental results using real data collected from different sources show that AC, particularly MCAC, detects phishing websites with higher accuracy than other intelligent algorithms. Further, MCAC generates new hidden knowledge (rules) that other algorithms are unable to find, and this has improved its classifiers' predictive performance.
ba1c6e772e72fd04fba3585cdd75e063286f4f6d
Assessing the severity of phishing attacks: A hybrid data mining approach
12d6cf6346f6d693b6dc3b88d176a8a7b192355c
Why phishing works
To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.
1b99fe2f4e680ebb7fe82ec7054d034c8ab8c79d
Decision strategies and susceptibility to phishing
Phishing emails are semantic attacks that con people into divulging sensitive information using techniques to make the user believe that information is being requested by a legitimate source. In order to develop tools that will be effective in combating these schemes, we first must know how and why people fall for them. This study reports a preliminary analysis of interviews with 20 non-expert computer users to reveal their strategies and understand their decisions when encountering possibly suspicious emails. One of the reasons that people may be vulnerable to phishing schemes is that awareness of the risks is not linked to perceived vulnerability or to useful strategies in identifying phishing emails. Rather, our data suggest that people can manage the risks that they are most familiar with, but do not appear to extrapolate that wariness to unfamiliar risks. We explore several strategies that people use, with varying degrees of success, in evaluating emails and in making sense of warnings offered by browsers attempting to help users navigate the web.
9458b89b323af84b4635dbcf7d18114f9af19c96
Wearable Real-Time Stereo Vision for the Visually Impaired
This paper presents the image processing, stereo-vision methodology, and sonification procedure of an image-sonification system for vision substitution. The hardware consists of a pair of sunglasses fitted with two mini cameras, a laptop computer, and stereo earphones. The image of the scene in front of the blind user is captured by the stereo cameras. The captured image is processed to enhance the important features in the scene. The image processing is designed to extract the objects from the image, and the stereo-vision method is applied to calculate the disparity, which is required to determine the distance between the blind user and the objects. The processed image is mapped onto stereo sound for the blind user's understanding of the scene in front. Experiments were conducted in an indoor environment, and the proposed methodology was found to be effective for object identification; the sound produced thus assists the visually impaired in collision-free navigation.
896e160b98d52d13a97caa664038e37e86075ee4
NIMA: Neural Image Assessment
Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by data sets, such as AVA and TID2013. Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being significantly simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks. Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing/enhancement algorithms in a photographic pipeline. All this is done without need for a “golden” reference image, consequently allowing for single-image, semantic- and perceptually-aware, no-reference quality assessment.
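A minimal sketch of the core idea, assuming the 1-10 score buckets of the AVA dataset: once the network outputs a distribution over buckets, the mean and spread of human opinion follow directly (architecture and training omitted):

```python
import numpy as np

def mean_opinion_score(prob):
    """Given a predicted distribution over score buckets 1..10 (as on AVA),
    return the mean score and its standard deviation."""
    scores = np.arange(1, len(prob) + 1)
    mean = float(np.sum(prob * scores))
    std = float(np.sqrt(np.sum(prob * (scores - mean) ** 2)))
    return mean, std
```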
8fd1910454feb9a28741992b87a271381edd1af8
Convolutional neural networks and multimodal fusion for text aided image classification
With the exponential growth of web meta-data, exploiting multimodal online sources via standard search engines has become a trend in visual recognition, as it effectively alleviates the shortage of training data. However, web meta-data such as text is usually not as cooperative as expected due to its unstructured nature. To address this problem, this paper investigates the numerical representation of web text data. We first adopt a convolutional neural network (CNN) for web text modeling on top of word vectors. Combined with a CNN for images, we present a multimodal fusion approach that maximizes the discriminative power of the visual and textual modalities at both the decision level and the feature level simultaneously. Experimental results show that the proposed framework achieves significant improvement in large-scale image classification on the Pascal VOC-2007 and VOC-2012 datasets.
a91c89c95dbb49b924434cd00dfcef7e635ea8cb
Wireless Measurement of RFID IC Impedance
Accurate knowledge of the input impedance of a radio-frequency identification (RFID) integrated circuit (IC) at its wake-up power is valuable as it enables the design of a performance-optimized tag for a specific IC. However, since the IC impedance is power dependent, few methods exist to measure it without advanced equipment. We propose and demonstrate a wireless method, based on electromagnetic simulation and threshold power measurement, applicable to fully assembled RFID tags, to determine the mounted IC's input impedance in the absorbing state, including any parasitics arising from the packaging and the antenna-IC connection. The proposed method can be extended to measure the IC's input impedance in the modulating state as well.
9422b8b7bec2ec3860902481ff2977211d65112f
Real-time High Performance Anomaly Detection over Data Streams: Grand Challenge
Real-time analytics over data streams are crucial for a wide range of use cases in industry and research. Today's sensor systems can produce high-throughput data streams that have to be analyzed in real time. One important analytic task is anomaly or outlier detection from the streaming data. In many industry applications, sensing devices produce a data stream that can be monitored to verify the correct operation of industrial devices and consequently avoid damage by triggering reactions in real time. While anomaly detection is a well-studied topic in data mining, real-time, high-performance anomaly detection from big data streams requires special study and a well-optimized implementation. This paper presents our implementation of a real-time anomaly detection system over data streams. We outline details of our two separate implementations using the Java and C++ programming languages, and provide technical details about the data processing pipelines. We report experimental results and describe performance tuning strategies.
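The paper's Java/C++ pipelines are not reproduced here; as a hedged illustration of the general task, a minimal streaming detector that flags readings deviating from a running mean by more than k running standard deviations (Welford's online update):

```python
class StreamingZScoreDetector:
    """Flags a reading as anomalous when it deviates from the running mean
    by more than `k` running standard deviations (Welford's update)."""
    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.k * std
```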
a70198f652f40ec18a7416ffb1fc858d1a203f84
MORE ON THE FIBONACCI SEQUENCE AND HESSENBERG MATRICES
Five new classes of Fibonacci-Hessenberg matrices are introduced. Further, we introduce the notion of two-dimensional Fibonacci arrays and show that three classes of previously known Fibonacci-Hessenberg matrices and their generalizations satisfy this property. Simple systems of linear equations are given whose solutions are Fibonacci fractions.
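A classical example of the phenomenon (not necessarily one of the paper's five new classes): the n-by-n tridiagonal Hessenberg matrix below satisfies the Fibonacci recurrence by cofactor expansion along the last row, so its determinant is F_{n+1}:

```latex
H_n =
\begin{pmatrix}
 1 &  1 &        &        \\
-1 &  1 & 1      &        \\
   & -1 & \ddots & \ddots \\
   &    & \ddots & 1
\end{pmatrix},
\qquad
\det H_n = \det H_{n-1} + \det H_{n-2},
\qquad
\det H_n = F_{n+1}.
```

For instance, det H_1 = 1, det H_2 = 2, det H_3 = 3, det H_4 = 5.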
ed464cf1f7ceff8e9bb1497c746c8abb510835a6
Municipal solid waste source-separated collection in China: A comparative analysis.
A pilot program focusing on municipal solid waste (MSW) source-separated collection was launched in eight major cities throughout China in 2000. Detailed investigations were carried out and a comprehensive system was constructed to evaluate the effects of the eight-year implementation in those cities. This paper provides an overview of the different methods of collection, transportation, and treatment of MSW in the eight cities, as well as a comparative analysis of MSW source-separated collection in China. Information about the quantity and composition of MSW shows that its characteristics are similar across cities: low calorific value, high moisture content and a high proportion of organic matter. Differences that exist among the eight cities in municipal solid waste management (MSWM) are presented in this paper. Only Beijing and Shanghai demonstrated relatively effective results in the implementation of MSW source-separated collection, while the six remaining cities performed poorly. Considering the current status of MSWM, source-separated collection should be a key priority, and a wider range of cities should participate in this program instead of merely the eight pilot cities. It is evident that an integrated MSWM system is urgently needed. Kitchen waste and recyclables should be separated at the source. The stakeholders involved play an important role in MSWM, so their responsibilities should be clearly identified. Improvements in legislation, coordination mechanisms and public education are problematic issues that need to be addressed.
8cfb316b3233d9b598265e3b3d40b8b064014d63
Video classification with Densely extracted HOG/HOF/MBH features: an evaluation of the accuracy/computational efficiency trade-off
The current state-of-the-art in video classification is based on Bag-of-Words using local visual descriptors. Most commonly these are histogram of oriented gradients (HOG), histogram of optical flow (HOF) and motion boundary histogram (MBH) descriptors. While such an approach is very powerful for classification, it is also computationally expensive. This paper addresses the problem of computational efficiency. Specifically: (1) we propose several speed-ups for densely sampled HOG, HOF and MBH descriptors and release Matlab code; (2) we investigate the trade-off between accuracy and computational efficiency of descriptors in terms of frame sampling rate and type of optical flow method; (3) we investigate the trade-off between accuracy and computational efficiency for computing the feature vocabulary, using and comparing most of the commonly adopted vector quantization techniques: k-means, hierarchical k-means, Random Forests, Fisher Vectors and VLAD.
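A minimal sketch of the Bag-of-Words pipeline the abstract builds on, using k-means as the vector quantizer (the other quantizers and the descriptor extraction are omitted; function names are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, k=4000):
    """Quantize local descriptors (e.g., stacked HOG/HOF/MBH vectors from
    many training videos) into k visual words."""
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(descriptors)

def bow_histogram(kmeans, video_descriptors):
    """Represent one video as a normalized histogram of visual-word counts."""
    words = kmeans.predict(video_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```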
e8b5c97a5d9e1a4ece710b2ab1ba93f659e6bb9c
A Faster Scrabble Move Generation Algorithm
Appel and Jacobson presented a fast algorithm for generating every possible move in a given position in the game of Scrabble using a DAWG, a finite automaton derived from the trie of a large lexicon. This paper presents a faster algorithm that uses a GADDAG, a finite automaton that avoids the non-deterministic prefix generation of the DAWG algorithm by encoding a bidirectional path starting from each letter of each word in the lexicon. For a typical lexicon, the GADDAG is nearly five times larger than the DAWG, but generates moves more than twice as fast. This time/space trade-off is justified not only by the decreasing cost of computer memory, but also by the extensive use of move-generation in the analysis of board positions used by Gordon in the probabilistic search for the most appropriate play in a given position within realistic time constraints.
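The GADDAG's key construction is compact enough to sketch: for every split of a word into a non-empty prefix x and suffix y, it stores REV(x), a separator, then y (a hedged illustration; Gordon's actual automaton then minimizes these paths into a DAG):

```python
def gaddag_paths(word, sep="@"):
    """For every split word = x + y with x non-empty, emit REV(x) + sep + y.
    Move generation can then start at any letter of a played word, walking
    leftwards through REV(x) and rightwards through y, which avoids the
    DAWG algorithm's non-deterministic prefix generation."""
    return [word[:i][::-1] + sep + word[i:] for i in range(1, len(word) + 1)]

# gaddag_paths("care") == ["c@are", "ac@re", "rac@e", "erac@"]
```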
6da978116b0a492decb8d2860540146d5aa7e170
FraudFind: Financial fraud detection by analyzing human behavior
Financial fraud commonly involves illegal practices in which anyone from senior managers to payroll employees may take part, making it a crime punishable by law. Many techniques have been developed to analyze, detect and prevent this behavior, the most important being the fraud triangle theory associated with the classic financial audit model. In order to perform this research, a survey of the related works in the existing literature was carried out, with the purpose of establishing our own framework. In this context, this paper presents FraudFind, a conceptual framework for identifying and outlining a group of people inside a banking organization who commit fraud, supported by the fraud triangle theory. FraudFind works under a continuous-audit approach, collecting information from agents installed on users' equipment. It is based on semantic techniques applied to phrases typed by the users under study, which are transferred to a repository for later analysis. This proposal contributes to the field of cybersecurity by helping to reduce cases of financial fraud.
42ca3deda9064f7bc93aa4cca783dbfb71a292d4
Social Brand Value and the Value Enhancing Role of Social Media Relationships for Brands
Due to the social media revolution and the emergence of communities, social networks, and user-generated content portals, prevalent branding concepts need to catch up with this reality. Given the importance of social ties, social interactions and social identity in the new media environment, there is a need to account for a relationship measure in marketing and branding. Based on the concept of social capital, we introduce the concept of social brand value, defined as the perceived value derived from exchange and interactions with other users of the brand within a community. In a qualitative study, marketing experts were interviewed; they highlighted the importance of social media activities but also indicated that they do not have a clear picture of what strategies should look like or how their success can be measured. A second, quantitative study was conducted which demonstrates the influence the social brand value construct has on consumers' brand evangelism and willingness to pay a price premium, and hence the value contribution of the social brand value for consumers.
71ac04e7af020c4fe5a609730ab73fb0ab8b2bfd
Information security incident management process
The modern requirements and best practices in the field of the Information Security (IS) Incident Management Process (ISIMP) are analyzed. The terms "IS event" and "IS incident", as used in ISIMP, are defined. An approach to ISIMP development is proposed, and the ISIMP processes are described according to this approach. As an example, the joint process "Vulnerabilities, IS events and incidents detection and notification" is examined in detail.
b22685ddab32febe76a5bcab358387a7c73d0f68
Directly Addressable Variable-Length Codes
We introduce a symbol reordering technique that implicitly synchronizes variable-length codes, such that it is possible to directly access the i-th codeword without need of any sampling method. The technique is practical and has many applications to the representation of ordered sets, sparse bitmaps, partial sums, and compressed data structures for suffix trees, arrays, and inverted indexes, to name just a few. We show experimentally that the technique offers a competitive alternative to other data structures that handle this problem.
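A simplified sketch of the reordering idea under the assumption of fixed b-bit chunks: the i-th value's chunks are spread across per-level streams, and access follows continuation bitmaps via rank queries, here emulated with plain prefix-sum lists rather than a succinct structure:

```python
class DirectlyAddressableCodes:
    """Reordered variable-byte coding: chunks of each codeword live at
    successive levels, so access(i) needs no sequential scan (a sketch;
    real implementations use succinct rank structures)."""
    def __init__(self, values, b=4):
        self.b, mask = b, (1 << b) - 1
        self.chunks, self.bits, self.ranks = [], [], []
        pending = list(enumerate(values))
        while pending:
            level_chunks, level_bits, nxt = [], [], []
            for idx, v in pending:
                level_chunks.append(v & mask)
                more = v >> b
                level_bits.append(1 if more else 0)
                if more:
                    nxt.append((idx, more))
            # prefix sums of the continuation bitmap emulate rank1()
            pref, total = [], 0
            for bit in level_bits:
                pref.append(total)
                total += bit
            self.chunks.append(level_chunks)
            self.bits.append(level_bits)
            self.ranks.append(pref)
            pending = nxt

    def access(self, i):
        value, shift, level = 0, 0, 0
        while True:
            value |= self.chunks[level][i] << shift
            if not self.bits[level][i]:
                return value
            i = self.ranks[level][i]   # position at the next level
            shift += self.b
            level += 1

# DirectlyAddressableCodes([5, 300]).access(1) == 300
```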
2579b2066d0fcbeda5498f5053f201b10a8e254b
Deconstructing the Ladder Network Architecture
The manual labeling of data is and will remain a costly endeavor. For this reason, semi-supervised learning remains a topic of practical importance. The recently proposed Ladder Network is one such approach that has proven to be very successful. In addition to the supervised objective, the Ladder Network also adds an unsupervised objective corresponding to the reconstruction costs of a stack of denoising autoencoders. Although the empirical results are impressive, the Ladder Network has many components intertwined, whose contributions are not obvious in such a complex architecture. In order to help elucidate and disentangle the different ingredients in the Ladder Network recipe, this paper presents an extensive experimental investigation of variants of the Ladder Network in which we replace or remove individual components to gain more insight into their relative importance. We find that all of the components are necessary for achieving optimal performance, but they do not contribute equally. For semi-supervised tasks, we conclude that the most important contribution is made by the lateral connection, followed by the application of noise, and finally the choice of what we refer to as the 'combinator function' in the decoder path. We also find that as the number of labeled training examples increases, the lateral connections and reconstruction criterion become less important, with most of the improvement in generalization being due to the injection of noise in each layer. Furthermore, we present a new type of combinator function that outperforms the original design in both fully- and semi-supervised tasks, reducing record test error rates on Permutation-Invariant MNIST to 0.57% for the supervised setting, and to 0.97% and 1.0% for semi-supervised settings with 1000 and 100 labeled examples respectively.
9992626e8e063c1b23e1920efd63ab4f008710ac
Using PMU Data to Increase Situational Awareness Final Project Report
9fe6002f53c4ca692c232d85a64c617ac3db3b18
Recent Advances in Indoor Localization: A Survey on Theoretical Approaches and Applications
The availability of location information has become a key factor in today's communications systems, allowing location-based services. In outdoor scenarios, the mobile terminal position is obtained with high accuracy thanks to the global positioning system (GPS) or to standalone cellular systems. However, the main problem of GPS and cellular systems resides in the indoor environment and in scenarios with deep shadowing effects where the satellite or cellular signals are broken. In this paper, we survey different technologies and methodologies for indoor and outdoor localization with an emphasis on indoor methodologies and concepts. Additionally, we discuss in this review different localization-based applications where accurate estimation of the location is critical. Finally, a comprehensive discussion of the challenges in terms of accuracy, cost, complexity, security, scalability, etc. is given. The aim of this survey is to provide a comprehensive overview of existing efforts as well as auspicious and anticipated dimensions for future work in indoor localization techniques and applications.
ce086e383e1ef18f60eedd2248fcefd8bf4a213d
Speed control and electrical braking of axial flux BLDC motor
Axial flux brushless direct current (AFBLDC) motors are becoming popular in many applications, including electric vehicles, because of their ability to meet demands for high power density, high efficiency, wide speed range, robustness, low cost and low maintenance. In this paper, an AFBLDC motor drive with a single-sided configuration having 24 stator poles and 32 permanent magnets on the rotor is proposed. It is driven by a six-pulse inverter fed from a single-phase AC supply through a controlled AC-DC converter. Speed control and braking methods are also proposed, based on pulse width modulation techniques. The overall scheme is simulated in the MATLAB environment and tested under different operating conditions. A prototype of the proposed AFBLDC motor drive was designed and fabricated. The control methods were implemented using a dsPIC33EP256MC202 digital signal controller (DSC). Tests were performed on this prototype to validate its performance at different speeds with and without braking mode. It is observed that the proposed scheme works effectively and can be used as a direct-driven wheel motor for electric vehicles.
db5baea8e4b2dbe725e587c023b60b5a1658afe1
Visual Gesture Character String Recognition by Classification-Based Segmentation with Stroke Deletion
The recognition of character strings in visual gestures has many potential applications, yet the segmentation of characters is a great challenge since the pen lift information is not available. In this paper, we propose a visual gesture character string recognition method using the classification-based segmentation strategy. In addition to the character classifier and character geometry models used for evaluating candidate segmentation-recognition paths, we introduce deletion geometry models for deleting stroke segments that are likely to be ligatures. To perform experiments, we built a Kinect-based fingertip trajectory capturing system to collect gesture string data. Experiments of digit string recognition show that the deletion geometry models improve the string recognition accuracy significantly. The string-level correct rate is over 80%.
225f7d72eacdd136b0ceb0a522e3a3930c5af9b8
Automatic Identification of Word Translations from Unrelated English and German Corpora
Algorithms for the alignment of words in translated texts are well established. However, only recently, new approaches have been proposed to identify word translations from non-parallel or even unrelated texts. This task is more difficult, because most statistical clues useful in the processing of parallel texts cannot be applied to non-parallel texts. For this reason, whereas for parallel texts in some studies up to 99% of the word alignments have been shown to be correct, the accuracy for non-parallel texts has been around 30% up to now. The current study, which is based on the assumption that there is a correlation between the patterns of word co-occurrences in corpora of different languages, makes a significant improvement to about 72% of word translations identified correctly.
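A hedged sketch of the underlying assumption, not the paper's exact algorithm: if co-occurrence patterns correlate across languages, a small seed dictionary lets us compare a source word's co-occurrence vector with those of target-language candidates; all names below are illustrative, and sentences are assumed to be token lists:

```python
import numpy as np

def cooc_vector(word, sents, seed_words):
    """Sentence-level co-occurrence counts of `word` with each seed word."""
    index = {w: i for i, w in enumerate(seed_words)}
    vec = np.zeros(len(seed_words))
    for sent in sents:
        if word in sent:
            for w in sent:
                if w in index:
                    vec[index[w]] += 1
    return vec

def translation_candidates(src_word, src_sents, tgt_vocab, tgt_sents, seed_dict):
    """Rank target words by cosine similarity between co-occurrence vectors,
    with dimensions aligned through the seed dictionary."""
    src_seeds = list(seed_dict.keys())
    tgt_seeds = [seed_dict[s] for s in src_seeds]

    def cosine(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

    v = cooc_vector(src_word, src_sents, src_seeds)
    scored = [(cosine(v, cooc_vector(t, tgt_sents, tgt_seeds)), t) for t in tgt_vocab]
    return sorted(scored, reverse=True)
```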
11a81c78412f6a8a70b9e450260fb30257126817
Clustering on Multi-Layer Graphs via Subspace Analysis on Grassmann Manifolds
Relationships between entities in datasets are often of multiple nature, like geographical distance, social relationships, or common interests among people in a social network, for example. This information can naturally be modeled by a set of weighted and undirected graphs that form a global multi-layer graph, where the common vertex set represents the entities and the edges on different layers capture the similarities of the entities in terms of the different modalities. In this paper, we address the problem of analyzing multi-layer graphs and propose methods for clustering the vertices by efficiently merging the information provided by the multiple modalities. To this end, we propose to combine the characteristics of individual graph layers using tools from subspace analysis on a Grassmann manifold. The resulting combination can then be viewed as a low-dimensional representation of the original data which preserves the most important information from diverse relationships between entities. As an illustrative application of our framework, we use our algorithm in clustering methods and test its performance on several synthetic and real world datasets where it is shown to be superior to baseline schemes and competitive to state-of-the-art techniques. Our generic framework further extends to numerous analysis and learning problems that involve different types of information on graphs.
f2fa6cc53919ab8f310cbc56de2f85bd9e07c9a6
Solar powered unmanned aerial vehicle for continuous flight: Conceptual overview and optimization
An aircraft that is capable of continuous flight offers a new level of autonomous capacity for unmanned aerial vehicles. We present an overview of the components and concepts of a small scale unmanned aircraft that is capable of sustaining powered flight without a theoretical time limit. We then propose metrics that quantify the robustness of continuous flight achieved and optimization criteria to maximize these metrics. Finally, the criteria are applied to a fabricated and flight tested small scale high efficiency aircraft prototype to determine the optimal battery and photovoltaic array mass for robust continuous flight.
1617124b134afed8b369f32640b56674caba5e4d
Aerial acoustic communications
This paper describes experiments in using audible sound as a means for wireless device communications. The direct application of standard modulation techniques to sound, without further improvements, results in sounds that are immediately perceived as digital communications and that are fairly aggressive and intrusive. We observe that some parameters of the modulation that have an impact in the data rate, the error probability and the computational overhead at the receiver also have a tremendous impact in the quality of the sound as perceived by humans. This paper focuses on how to vary those parameters in standard modulation techniques such as ASK, FSK and Spread-Spectrum to obtain communication systems in which the messages are musical and other familiar sounds, rather than modem sounds. A prototype called Digital Voices demonstrates the feasibility of this music-based communication technology. Our goal is to lay out the basis of sound design for aerial acoustic communications so that the presence of such communications, though noticeable, is not intrusive and can even be considered as part of musical compositions and sound tracks.
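As a hedged illustration of the design space the paper explores (the Digital Voices prototype itself is not reproduced here), a 4-FSK modulator whose symbol frequencies are notes of a pentatonic-style scale, with a raised-cosine envelope to soften symbol boundaries; all constants are assumptions:

```python
import numpy as np

# Map 2-bit symbols onto note frequencies (C4, D4, E4, G4) so the
# modem's output sounds melodic rather than harsh.
SYMBOL_FREQS = {0b00: 261.63, 0b01: 293.66, 0b10: 329.63, 0b11: 392.00}

def fsk_modulate(bits, rate=44100, symbol_dur=0.12):
    """4-FSK: each pair of bits selects one note (assumes len(bits) is even).
    Longer symbol durations lower the data rate but sound more musical."""
    t = np.linspace(0, symbol_dur, int(rate * symbol_dur), endpoint=False)
    symbols = [bits[i] << 1 | bits[i + 1] for i in range(0, len(bits) - 1, 2)]
    # A raised-cosine (Hann) envelope removes clicks at symbol boundaries.
    env = 0.5 * (1 - np.cos(2 * np.pi * np.arange(t.size) / t.size))
    return np.concatenate(
        [env * np.sin(2 * np.pi * SYMBOL_FREQS[s] * t) for s in symbols]
    )
```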
3979cf5a013063e98ad0caf2e7110c2686cf1640
Basic local alignment search tool.
A new approach to rapid sequence comparison, basic local alignment search tool (BLAST), directly approximates alignments that optimize a measure of local similarity, the maximal segment pair (MSP) score. Recent mathematical results on the stochastic properties of MSP scores allow an analysis of the performance of this method as well as the statistical significance of alignments it generates. The basic algorithm is simple and robust; it can be implemented in a number of ways and applied in a variety of contexts including straightforward DNA and protein sequence database searches, motif searches, gene identification searches, and in the analysis of multiple regions of similarity in long DNA sequences. In addition to its flexibility and tractability to mathematical analysis, BLAST is an order of magnitude faster than existing sequence comparison tools of comparable sensitivity.
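A minimal sketch of MSP-style scoring, assuming a simple match/mismatch scoring function: extend a seed match in both directions without gaps, keeping the best running score and stopping once the score drops a fixed amount below it (real BLAST adds word-list seeding and significance statistics):

```python
def extend_seed(seq1, seq2, i, j, k, score, drop=10):
    """Ungapped extension of a length-k seed match seq1[i:i+k] == seq2[j:j+k]:
    extend both ends, keep the best running score, stop when the running
    score falls `drop` below the best seen so far."""
    def side_score(step, x, y):
        best = total = 0
        while 0 <= x < len(seq1) and 0 <= y < len(seq2):
            total += score(seq1[x], seq2[y])
            if total > best:
                best = total
            if total < best - drop:
                break
            x += step
            y += step
        return best

    seed = sum(score(seq1[i + n], seq2[j + n]) for n in range(k))
    return seed + side_score(-1, i - 1, j - 1) + side_score(1, i + k, j + k)

# Example scoring for DNA: +5 for a match, -4 for a mismatch.
dna = lambda a, b: 5 if a == b else -4
```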
04d9d54fa90e0cb54dbd63f2f42688b4cd2f6f99
CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice.
The sensitivity of the commonly used progressive multiple sequence alignment method has been greatly improved for the alignment of divergent protein sequences. Firstly, individual weights are assigned to each sequence in a partial alignment in order to down-weight near-duplicate sequences and up-weight the most divergent ones. Secondly, amino acid substitution matrices are varied at different alignment stages according to the divergence of the sequences to be aligned. Thirdly, residue-specific gap penalties and locally reduced gap penalties in hydrophilic regions encourage new gaps in potential loop regions rather than regular secondary structure. Fourthly, positions in early alignments where gaps have been opened receive locally reduced gap penalties to encourage the opening up of new gaps at these positions. These modifications are incorporated into a new program, CLUSTAL W which is freely available.
78edc6c1f80cd2343b3b9d185453cac07a152663
Text-based adventures of the Golovin AI agent
The domain of text-based adventure games has recently been established as a new challenge: creating an agent that is both able to understand natural language and to act intelligently in text-described environments. In this paper, we present our approach to tackling the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suitable to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks such as fighting opponents, managing inventory, and navigating the game map. We validated the usefulness of these mechanisms by measuring the agent's performance on a set of 50 interactive fiction games. Finally, we show that our agent plays at a level comparable to the winner of last year's Text-Based Adventure AI Competition.
ffc5c53be8ec7e8e60d7a03f3e874bc1ca2b9f2d
One for All: Towards Language Independent Named Entity Linking
Entity linking (EL) is the task of disambiguating mentions in text by associating them with entries in a predefined database of mentions (persons, organizations, etc). Most previous EL research has focused mainly on one language, English, with less attention being paid to other languages, such as Spanish or Chinese. In this paper, we introduce LIEL, a Language Independent Entity Linking system, which provides an EL framework which, once trained on one language, works remarkably well on a number of different languages without change. LIEL makes a joint global prediction over the entire document, employing a discriminative reranking framework with many domain and language-independent feature functions. Experiments on numerous benchmark datasets, show that the proposed system, once trained on one language, English, outperforms several state-of-the-art systems in English (by 4 points) and the trained model also works very well on Spanish (14 points better than a competitor system), demonstrating the viability of the approach.
1a8fd4b2f127d02f70f1c94f330628be31d18681
An approach to fuzzy control of nonlinear systems: stability and design issues
c7e6b6b3b98e992c3fddd93e7a7c478612dacf94
Mining Professional's Data from LinkedIn
Social media has become a very popular communication tool among Internet users in recent years, and a large amount of unstructured data is available for analysis on the social web. The data available on these sites contains redundancies, as users are free to enter data according to their own knowledge and interests, and it needs to be normalized before any analysis. In this paper, LinkedIn data is extracted using the LinkedIn API and normalized by removing redundancies. Further, the data is also normalized according to the locations of LinkedIn connections using geo-coordinates provided by Microsoft Bing. Then, this normalized data set is clustered according to job titles, company names and geographic locations using Greedy, Hierarchical and K-Means clustering algorithms, and the clusters are visualized to give a better insight into them.
bf3416ea02ea0d9327a7886136d7d7d5a66cf491
What do online behavioral advertising privacy disclosures communicate to users?
Online Behavioral Advertising (OBA), the practice of tailoring ads based on an individual's online activities, has led to privacy concerns. In an attempt to mitigate these privacy concerns, the online advertising industry has proposed the use of OBA disclosures: icons, accompanying taglines, and landing pages intended to inform users about OBA and provide opt-out options. We conducted a 1,505-participant online study to investigate Internet users' perceptions of OBA disclosures. The disclosures failed to clearly notify participants about OBA and inform them about their choices. Half of the participants remembered the ads they saw but only 12% correctly remembered the disclosure taglines attached to ads. When shown the disclosures again, the majority mistakenly believed that ads would pop up if they clicked on disclosures, and more participants incorrectly thought that clicking the disclosures would let them purchase advertisements than correctly understood that they could then opt out of OBA. "AdChoices", the most commonly used tagline, was particularly ineffective at communicating notice and choice. A majority of participants mistakenly believed that opting out would stop all online tracking, not just tailored ads. We discuss challenges in crafting disclosures and provide suggestions for improvement.
a32e46aba17837384af88c8b74e8d7ef702c35f6
Discrete Wigner Function Derivation of the Aaronson-Gottesman Tableau Algorithm
The Gottesman–Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits with odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-d qudits that has the same time and space complexity as the Aaronson–Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd-d and the Aaronson–Gottesman algorithm for qubits are likely due only to the fact that the Weyl–Heisenberg group is not in SU(d) for d = 2 and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits.
386fd8503314f0fa289cf52244fc71f851f20770
LRAGE: Learning Latent Relationships With Adaptive Graph Embedding for Aerial Scene Classification
The performance of scene classification relies heavily on the spatial and structural features that are extracted from high spatial resolution remote-sensing images. Existing approaches, however, are limited in adequately exploiting latent relationships between scene images. Aiming to decrease the distances between intraclass images and increase the distances between interclass images, we propose a latent relationship learning framework that integrates an adaptive graph with the constraints of the feature space and label propagation for high-resolution aerial image classification. To describe the latent relationships among scene images in the framework, we construct an adaptive graph that is embedded into the constrained joint space for features and labels. To remove redundant information and improve the computational efficiency, subspace learning is introduced to assist in the latent relationship learning. To address out-of-sample data, linear regression is adopted to project the semisupervised classification results onto a linear classifier. Learning efficiency is improved by minimizing the objective function via the linearized alternating direction method with an adaptive penalty. We test our method on three widely used aerial scene image data sets. The experimental results demonstrate the superior performance of our method over the state-of-the-art algorithms in aerial scene image classification.
49b176947e06521e7c5d6966c7f94c78a1a975a8
Forming a story: the health benefits of narrative.
Writing about important personal experiences in an emotional way for as little as 15 minutes over the course of three days brings about improvements in mental and physical health. This finding has been replicated across age, gender, culture, social class, and personality type. Using a text-analysis computer program, it was discovered that those who benefit maximally from writing tend to use a high number of positive-emotion words, a moderate amount of negative-emotion words, and increase their use of cognitive words over the days of writing. These findings suggest that the formation of a narrative is critical and is an indicator of good mental and physical health. Ongoing studies suggest that writing serves the function of organizing complex emotional experiences. Implications for these findings for psychotherapy are briefly discussed.
d880d303ee0bfdbc80fc34df0978088cd15ce861
Video Anomaly Detection and Localization via Gaussian Mixture Fully Convolutional Variational Autoencoder
We present a novel end-to-end partially supervised deep learning approach for video anomaly detection and localization using only normal samples. The insight that motivates this study is that the normal samples can be associated with at least one Gaussian component of a Gaussian Mixture Model (GMM), while anomalies do not belong to any Gaussian component. The method is based on a Gaussian Mixture Variational Autoencoder, which can learn feature representations of the normal samples as a Gaussian Mixture Model trained using deep learning. A Fully Convolutional Network (FCN) that does not contain a fully-connected layer is employed for the encoder-decoder structure to preserve relative spatial coordinates between the input image and the output feature map. Based on the joint probabilities of each of the Gaussian mixture components, we introduce a sample-energy-based method to score the anomaly of image test patches. A two-stream network framework is employed to combine the appearance and motion anomalies, using RGB frames for the former and dynamic flow images for the latter. We test our approach on two popular benchmarks (UCSD Dataset and Avenue Dataset). The experimental results verify the superiority of our method compared to the state of the art.
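A minimal sketch of the sample-energy idea, assuming the standard GMM negative log-likelihood form (the paper's exact formulation may differ): a latent code that fits no mixture component well receives high energy and is scored anomalous:

```python
import numpy as np
from scipy.stats import multivariate_normal

def sample_energy(z, weights, means, covs):
    """Energy of a latent code z under a learned GMM:
    E(z) = -log sum_k pi_k N(z; mu_k, Sigma_k).
    High energy means z fits no Gaussian component well, i.e. likely anomaly."""
    likelihood = sum(
        w * multivariate_normal.pdf(z, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    )
    return -np.log(max(likelihood, 1e-300))  # clamp to avoid log(0)
```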
9b1e6362fcfd298cd382c0c24f282d86f174cac8
Text Classification from Positive and Unlabeled Data using Misclassified Data Correction
This paper addresses the problem of dealing with a collection of labeled training documents, especially annotating negative training documents, and presents a method of text classification from positive and unlabeled data. We applied an error detection and correction technique to the results of positive and negative documents classified by Support Vector Machines (SVM). The results using Reuters documents showed that the method was comparable to the current state-of-the-art biased-SVM method, as the F-score obtained by our method was 0.627 while that of biased-SVM was 0.614.
decb7e746acb87710c2a15585cd22133ffc2cc95
General Video Game AI: Competition, Challenges and Opportunities
The General Video Game AI framework and competition pose the problem of creating artificial intelligence that can play a wide, and in principle unlimited, range of games. Concretely, it tackles the problem of devising an algorithm that is able to play any game it is given, even if the game is not known a priori. This area of study can be seen as an approximation of General Artificial Intelligence, with very little room for game-dependent heuristics. This talk summarizes the motivation, infrastructure, results and future plans of General Video Game AI, stressing the findings and first conclusions drawn after two editions of our competition, presenting the tracks that will be held in 2016 and outlining our future plans.
418e0760a75b0797ee355d4ca4f6db83df664f0f
Piaget : Implications for Teaching
03e8abcc388cc41b590d04289deafbf6075fdadf
Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation
Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems. However, there are inevitably noises in the external information, severely reducing the benefit that the existing methods could receive from the incorporation. To tackle the problem, this study pays special attention to the discrimination of the noises during the incorporation. We argue that there exist two kinds of noise in this external information, i.e. global noise and local noise, which affect the translations for the whole sentence and for some specific words, respectively. Accordingly, we propose a general framework that learns to jointly discriminate both the global and local noises, so that the external information could be better leveraged. Our model is trained on the dataset derived from the original parallel corpus without any external labeled data or annotation. Experimental results in various real-world scenarios, language pairs, and neural architectures indicate that discriminating noises contributes to significant improvements in translation quality by being able to better incorporate the external information, even in very noisy conditions.
c34bd1038a798a08fff2112a1a8815cd32f74ca1
Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild
Emotion recognition in the wild is a very challenging task. In this paper, we investigate a variety of different multimodal features from video and audio to evaluate their discriminative ability for human emotion analysis. For each clip, we extract SIFT, LBP-TOP, PHOG, LPQ-TOP and audio features. We train different classifiers for each kind of feature on the dataset from the EmotiW 2014 Challenge, and we propose a novel hierarchical classifier fusion method for all the extracted features. Our final result on the test set is 47.17%, which is much better than the best baseline recognition rate of 33.7%.
9ada9b211cd11406a7d71707598b2a9466fcc8c9
Efficient and robust feature extraction and selection for traffic classification
Given the limitations of traditional classification methods based on port number and payload inspection, a large number of studies have focused on developing classification approaches that use Transport Layer Statistics (TLS) features and Machine Learning (ML) techniques. However, classifying Internet traffic data using these approaches is still a difficult task because (1) TLS features are not very robust for traffic classification because they cannot capture the complex non-linear characteristics of Internet traffic, and (2) the existing Feature Selection (FS) techniques cannot reliably provide optimal and stable features for ML algorithms. With the aim of addressing these problems, this paper presents a novel feature extraction and selection approach. First, multifractal features are extracted from traffic flows using a Wavelet Leaders Multifractal Formalism (WLMF) to depict the traffic flows; next, a Principal Component Analysis (PCA)-based FS method is applied on these multifractal features to remove the irrelevant and redundant features. Based on real traffic traces, the experimental results demonstrate significant improvement in the accuracy of Support Vector Machines (SVMs) compared to the TLS features studied in existing ML-based approaches. Furthermore, the proposed approach is suitable for real-time traffic classification because of its ability to classify traffic at the early stage of traffic transmission.
6b7370613bca4047addf8fba1e3a465c47cef4f3
Unsupervised Interpretable Pattern Discovery in Time Series Using Autoencoders
We study the use of feed-forward convolutional neural networks for the unsupervised problem of mining recurrent temporal patterns mixed in multivariate time series. Traditional convolutional autoencoders lack interpretability for two main reasons: the number of patterns corresponds to the manually-fixed number of convolution filters, and the patterns are often redundant and correlated. To recover clean patterns, we introduce different elements in the architecture, including an adaptive rectified linear unit function that improves pattern interpretability, and a group-lasso regularizer that helps automatically find the relevant number of patterns. We illustrate the necessity of these elements on synthetic data and real data in the context of activity mining in videos.
f5af95698fe16f17aeee452e7bf5463c0ce1b1c5
A Comparison of Controllers for Balancing Two Wheeled Inverted Pendulum Robot
One of the challenging tasks concerning two-wheeled inverted pendulum (TWIP) mobile robots is balancing the tilt to the upright position, owing to the system's inherent open-loop instability. This paper presents an experimental comparison between a model-based controller and non-model-based controllers in balancing the TWIP mobile robot. A Fuzzy Logic Controller (FLC), which is a non-model-based controller, a Linear Quadratic Regulator (LQR), which is a model-based controller, and the conventional Proportional Integral Derivative (PID) controller were implemented and compared on a real-time TWIP mobile robot. The FLC gave superior performance compared to LQR and PID in terms of speed response, but consumed more energy. Index Terms: Two-Wheeled Inverted Pendulum (TWIP), Fuzzy Logic Controller (FLC), Linear Quadratic Regulator (LQR), Euler-Lagrange equations.
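For reference, a minimal sketch of the model-based side of the comparison: computing a continuous-time LQR gain for a linearized tilt model with SciPy. The model matrices below are hypothetical placeholders, not the paper's identified TWIP parameters:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation for P,
    then K = R^{-1} B^T P, so that u = -K x stabilizes the upright pose."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical linearized tilt model, x = [tilt angle, tilt rate]:
# the 9.81/0.3 term stands in for g over an effective pendulum length.
A = np.array([[0.0, 1.0], [9.81 / 0.3, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))
```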
c872aef8c29a4717c04de39298ef7069967ae7a5
Cognitive enhancement.
Cognitive enhancement refers to the improvement of cognitive ability in normal healthy individuals. In this article, we focus on the use of pharmaceutical agents and brain stimulation for cognitive enhancement, reviewing the most common methods of pharmacologic and electronic cognitive enhancement, the mechanisms by which they are believed to work, the effectiveness of these methods, and their prevalence. We note the many gaps in our knowledge of these matters, including open questions about the size, reliability and nature of the enhancing effects, and we conclude with recommendations for further research.
89d09965626167360aea4e414f889b06074491da
Belt: An Unobtrusive Touch Input Device for Head-worn Displays
Belt is a novel unobtrusive input device for wearable displays that incorporates a touch surface encircling the user's hip. The wide input space is leveraged for a horizontal spatial mapping of quickly accessible information and applications. We discuss social implications and interaction capabilities for unobtrusive touch input and present our hardware implementation and a set of applications that benefit from the quick access time. In a qualitative user study with 14 participants we found that for short interactions (2-4 seconds), most of the surface area is considered as appropriate input space, while for longer interactions (up to 10 seconds), the front areas above the trouser pockets are preferred.
a3a7ba295543b637eac79db24436b96356944375
Redirected Walking in Place
This paper describes a method for allowing people to virtually move around a CAVE™ without ever having to turn and face the missing back wall. We describe the method and report a pilot study of 28 participants, half of whom moved through the virtual world using a hand-held controller, while the other half used the new technique, called 'Redirected Walking in Place' (RWP). The results show that the current instantiation of the RWP technique does not result in a lower frequency of looking towards the missing wall. However, the results also show that the sense of presence in the virtual environment is significantly and negatively correlated with the amount that the back wall is seen. There is evidence that RWP does reduce the chance of seeing the blank wall for some participants. The potential for an increased sense of presence through never having to face the blank wall, together with the results of this pilot study, shows that RWP has promise and merits further development.
036b5edb74849dd68ad44be84f951c139c3b1738
On temporal-spatial realism in the virtual reality environment
The Polhemus Isotrak is often used as an orientation and position tracking device in virtual reality environments. When it is used to dynamically determine the user's viewpoint and line of sight (e.g., in the case of a head-mounted display), the noise and delay in its measurement data cause temporal-spatial distortion, perceived by the user as jittering of images and lag between head movement and visual feedback. To tackle this problem, we first examined the major causes of the distortion, and found that the lag felt by the user is mainly due to the delay in orientation data, while the jittering of images is caused mostly by the noise in position data. Based on these observations, a predictive Kalman filter was designed to compensate for the delay in orientation data, and an anisotropic low-pass filter was devised to reduce the noise in position data. The effectiveness and limitations of both approaches were then studied, and the results were shown to be satisfactory.
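The delay-compensation idea can be sketched with a constant-velocity Kalman filter that extrapolates the orientation estimate across the measurement latency. All noise levels, the sampling rate, and the prediction horizon below are illustrative assumptions, not the paper's values:

```python
import numpy as np

dt, horizon = 0.01, 0.05            # 100 Hz tracker, ~50 ms latency (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [angle, angular_rate]
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-5, 1e-3])           # process noise (illustrative)
R = np.array([[1e-2]])              # measurement noise (illustrative)

x, P = np.zeros(2), np.eye(2)

def step(z):
    """One predict/update cycle, then extrapolate across the latency."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)             # update with measurement z
    P = (np.eye(2) - K @ H) @ P
    return x[0] + x[1] * horizon                    # predict ahead of the lag

for z in np.sin(np.arange(0, 1, dt)):               # toy orientation stream
    ahead = step(z)
```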
07d6e71722aac1ad7e51c48955ea5a04fcaedf35
Virtual Reality on a WIM: Interactive Worlds in Miniature
This paper explores a user interface technique that augments an immersive head-tracked display with a hand-held miniature copy of the virtual environment. We call this interface technique the Worlds in Miniature (WIM) metaphor. In addition to the first-person perspective offered by a virtual reality system, a World in Miniature offers a second dynamic viewport onto the virtual environment. Objects may be directly manipulated either through the immersive viewport or through the three-dimensional viewport offered by the WIM. In addition to describing object manipulation, this paper explores ways in which Worlds in Miniature can act as a single unifying metaphor for such application-independent interaction techniques as object selection, navigation, path planning, and visualization. The WIM metaphor offers multiple points of view and multiple scales at which the user can operate, without requiring explicit modes or commands. Informal user observation indicates that users adapt to the Worlds in Miniature metaphor quickly and that physical props are helpful in manipulating the WIM and other objects in the environment.
0a5ad27461c93fefd2665e550776417f416997d4
Recognizing Textual Entailment via Multi-task Knowledge Assisted LSTM
Recognizing Textual Entailment (RTE) plays an important role in NLP applications such as question answering and information retrieval. Most previous works either use classifiers with elaborately designed features and lexical similarity, or bring distant supervision and reasoning techniques into the RTE task. However, these approaches are hard to generalize due to the complexity of feature engineering, and are prone to cascading errors and data sparsity problems. To alleviate these problems, some works use LSTM-based recurrent neural networks with word-by-word attention to recognize textual entailment. Nevertheless, these works did not make full use of knowledge bases (KB) to help reasoning. In this paper, we propose a deep neural network architecture called Multi-task Knowledge Assisted LSTM (MKAL), which aims to conduct implicit inference with the assistance of a KB and uses predicate-to-predicate attention to detect the entailment between predicates. In addition, our model applies a multi-task architecture to further improve the performance. The experimental results show that our proposed method achieves a competitive result compared to previous work.
7bcd8c63eee548a4d269d6572af0d35a837aaea8
Leaf segmentation in plant phenotyping: a collation study
Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape characteristics, the presence of occlusions and variability in leaf shape and pose, as well as imaging conditions, render this problem challenging. The aim of this paper is to compare several leaf segmentation solutions on a unique and first-of-its-kind dataset containing images from typical phenotyping experiments. In particular, we report and discuss methods and findings of a collection of submissions for the first Leaf Segmentation Challenge of the Computer Vision Problems in Plant Phenotyping workshop in 2014. Four methods are presented: three segment leaves by processing the distance transform in an unsupervised fashion, and the other via optimal template selection and Chamfer matching. Overall, we find that although separating plant from background can be accomplished with satisfactory accuracy (>90% Dice score), individual leaf segmentation and counting remain challenging when leaves overlap. Additionally, accuracy is lower for younger leaves. We also find that variability in datasets does affect outcomes. Our findings motivate further investigation and the development of specialized algorithms for this particular application, and suggest that challenges of this form are ideally suited for advancing the state of the art. Data are publicly available (online at http://www.plant-phenotyping.org/datasets ) to support future challenges beyond segmentation within this application domain.
09f02eee625b7aa6ba7e6f31cfb56f6d4ddd0fdd
MAPS: A Multi Aspect Personalized POI Recommender System
The evolution of the World Wide Web (WWW) and smart-phone technologies have played a key role in the revolution of our daily life. Location-based social networks (LBSN) have emerged and facilitate users in sharing check-in information and multimedia contents. A Point of Interest (POI) recommendation system uses the check-in information to predict the most likely check-in locations. The different aspects of the check-in information, for instance, the geographical distance, the category, and the temporal popularity of a POI, as well as the temporal check-in trends and the social (friendship) information of a user, play a crucial role in an efficient recommendation. In this paper, we propose a fused recommendation model termed MAPS (Multi Aspect Personalized POI Recommender System), which is, to our knowledge, the first to fuse the categorical, temporal, social, and spatial aspects in a single model. The major contributions of this paper are: (i) it formulates the problem as a graph of location nodes with constraints on the category and distance aspects (i.e., an edge between two locations is constrained by a threshold distance and the category of the locations), (ii) it proposes a multi-aspect fused POI recommendation model, and (iii) it extensively evaluates the model on two real-world data sets.
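Contribution (i) can be illustrated with a toy version of the constrained location graph: an edge is added only when two POIs satisfy both the distance threshold and a category constraint. The sample POIs, the threshold, and the reading of the category constraint as category equality are all assumptions for illustration, not the paper's specification:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance between (lat, lon) pairs in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

pois = {  # hypothetical check-in locations
    "cafe_a": {"coord": (1.300, 103.85), "category": "food"},
    "cafe_b": {"coord": (1.302, 103.86), "category": "food"},
    "museum": {"coord": (1.290, 103.80), "category": "culture"},
}

MAX_KM = 2.0
edges = [
    (u, v)
    for u in pois for v in pois if u < v
    and pois[u]["category"] == pois[v]["category"]                    # category constraint
    and haversine_km(pois[u]["coord"], pois[v]["coord"]) <= MAX_KM    # distance constraint
]
print(edges)   # [('cafe_a', 'cafe_b')]
```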
764c2fb8d8a7972eb0d520db3db53c38668c0c87
A Double-Sided Parallel-Strip Line Push–Pull Oscillator
A novel double-sided parallel-strip line (DSPSL) push-pull oscillator using two identical sub-oscillators on the opposite sides of a dielectric substrate is proposed. The two sub-oscillators, sharing a common DSPSL resonator and common ground in the middle of the substrate, generate out-of-phase fundamental signals and in-phase second harmonics. At the common DSPSL output, the second harmonics are cancelled out while the fundamental signals are well combined. By this design, an additional combiner at the output, as required by the conventional push-pull circuits, is not needed, which greatly reduces the circuit size and simplifies the design procedures of the proposed push-pull oscillator.
f719dac4d748bc8b8f371ec14f72d96be34b2c28
Use of Non-conductive Film (NCF) with Nano-Sized Filler Particles for Solder Interconnect: Research and Development on NCF Material and Process Characterization
As three-dimensional Through-Silicon Via (3D-TSV) packaging emerges in the semiconductor market to integrate multiple functions in a system for further miniaturization, thermal compression bonding (TCB), which stacks multiple bare chips on top of each other using Cu pillar bumps with solder caps, has become an indispensable new packaging technology. The novel non-conductive film (NCF) described in this paper is an epoxy-type thermosetting material in film form with a low density of nano-sized silica filler particles (average particle size 100 nanometers). Advantages of this NCF material with nano-sized fillers include: transparency, so that the TCB bonder's image-recognition system can easily identify fiducial marks on the chip; the ability of the nano-filler to flow out with the NCF resin during thermal compression bonding, mitigating filler entrapment between solder joints, which is critical for reliable solder connections; and compatibility with fine-pitch applications with extremely narrow chip-to-chip and chip-to-substrate gaps, forming a void-free underfill. Instead of the previous process of die attach, fluxing, traditional oven mass reflow, flux cleaning, and capillary underfill (CUF), the current process uses NCF and TCB. The NCF is applied to a wafer in film form by lamination, and the wafer is then diced with the pre-laminated NCF into pieces. The chips are joined by TCB, with typical parameters of 250°C for 10 seconds at 80 N (solder cap on 15 μm diameter Cu pillars, 40 μm pitch, 1000 bumps). NCF heated by TCB quickly liquefies, lowering its viscosity within a few seconds. A further advantage of NCF is that a fluxing function is included in the material, which eliminates the separate flux-apply and flux-clean steps and thus simplifies the process and reduces cost. NCF can also control extrusion along the package's edge line, since NCF is a half-cured B-stage material with some mechanical rigidity during handling and lamination. The NCF thermal compression flip-chip bonding method is therefore a key solution for ensuring good reliability and mass productivity in next-generation packaging. The characterization of the NCF material and the importance of controlling viscosity and elastic modulus during thermal compression are analyzed and discussed in this paper, and process and reliability data on test vehicles are shown.
4e821539277add3f2583845864cc6741216f0328
Asymmetry-Aware Link Quality Services in Wireless Sensor Networks
Recent studies in wireless sensor networks (WSN) have found that irregular link quality is a common phenomenon. Irregular link quality, especially link asymmetry, has significant impacts on the design of WSN protocols, such as MAC protocols, neighborhood and topology discovery protocols, and routing protocols. In this paper, we propose asymmetry-aware link quality services, including the neighborhood link quality service (NLQS) and the link relay service (LRS), to provide timely link quality information about neighbors and to build a relay framework that alleviates the effects of link asymmetry. To demonstrate the proposed link quality services, we design and implement two example applications, the shortest hops routing tree (SHRT) and the best path reliability routing tree (BRRT), on the TinyOS platform. To evaluate the proposed link quality services, we conducted both static analysis and simulation through the TOSSIM simulator, in terms of four performance metrics. We found that the performance of the two example applications improved substantially: more than 40% of nodes identified more outbound neighbors, with the percentage of increased outbound neighbors between 14% and 100%. In SHRT, more than 15% of nodes reduced the hop count of the routing tree, with the reduction between 14% and 100%. In BRRT, more than 16% of nodes improved the path reliability of the routing tree, with the improvement between 2% and 50%.
6b99529d792427f037bf7b415128649943f757e4
Mortality in British vegetarians: review and preliminary results from EPIC-Oxford.
BACKGROUND Three prospective studies have examined the mortality of vegetarians in Britain. OBJECTIVE We describe these 3 studies and present preliminary results on mortality from the European Prospective Investigation into Cancer and Nutrition-Oxford (EPIC-Oxford). DESIGN The Health Food Shoppers Study and the Oxford Vegetarian Study were established in the 1970s and 1980s, respectively; each included about 11 000 subjects and used a short questionnaire on diet and lifestyle. EPIC-Oxford was established in the 1990s and includes about 56 000 subjects who completed detailed food frequency questionnaires. Mortality in all 3 studies was followed through the National Health Service Central Register. RESULTS Overall, the death rates of all the subjects in all 3 studies are much lower than average for the United Kingdom. Standardized mortality ratios (95% CIs) for all subjects were 59% (57%, 61%) in the Health Food Shoppers Study, 52% (49%, 56%) in the Oxford Vegetarian Study, and 39% (37%, 42%) in EPIC-Oxford. Comparing vegetarians with nonvegetarians within each cohort, the death rate ratios (DRRs), adjusted for age, sex and smoking, were 1.03 (0.95, 1.13) in the Health Food Shoppers Study, 1.01 (0.89, 1.14) in the Oxford Vegetarian Study, and 1.05 (0.86, 1.27) in EPIC-Oxford. DRRs for ischemic heart disease in vegetarians compared with nonvegetarians were 0.85 (0.71, 1.01) in the Health Food Shoppers Study, 0.86 (0.67, 1.12) in the Oxford Vegetarian Study, and 0.75 (0.41, 1.37) in EPIC-Oxford. CONCLUSIONS The mortality of both the vegetarians and the nonvegetarians in these studies is low compared with national rates. Within the studies, mortality for major causes of death was not significantly different between vegetarians and nonvegetarians, but the nonsignificant reduction in mortality from ischemic heart disease among vegetarians was compatible with the significant reduction previously reported in a pooled analysis of mortality in Western vegetarians.
56a2d82f1b89304a80fbeea91accba776a07e55a
Cloud-assisted industrial cyber-physical systems: An insight
The development of industrialization and information and communication technology (ICT) has deeply changed our way of life. In particular, with the emerging theory of "Industry 4.0", the integration of cloud technologies and industrial cyber-physical systems (ICPS) becomes increasingly important, as this will greatly improve the manufacturing chain and business services. In this paper, we first describe the development and character of ICPS. ICPS will inevitably play an important role in manufacturing, sales, and logistics; with the support of the cloud, ICPS development will impact value creation, business models, downstream services, and work organization. Then, we present a service-oriented ICPS model. With the support of the cloud infrastructure, platform, and service applications, ICPS will promote manufacturing efficiency, increase production quality, and enable a sustainable industrial system and more environmentally friendly businesses. Thirdly, we focus on some key enabling technologies, which are critical in supporting smart factories; these technologies will also help companies realize high quality, high output, and low cost. Finally, we discuss some challenges of ICPS implementation and future work.
c6240d0cb51e0fe7f838f5437463f5eeaf5563d0
Lessons Learned: The Complexity of Accurate Identification of in-Text Citations
The importance of citations is widely recognized by the scientific community. Citations are used in making a number of vital decisions, such as calculating the impact factor of journals, calculating the impact of a researcher (h-index), and ranking universities and research organizations. Furthermore, citation indexes, along with other criteria, employ citation counts to retrieve and rank relevant research papers. However, citing patterns and in-text citation frequency are not used for such important decisions. The identification of in-text citations in a scientific document is an important problem, but it is difficult due to the ambiguity between citation tags and content. This research focuses on in-text citation analysis and makes the following specific contributions: it provides a detailed in-text citation analysis of 16,000 citations from an online journal, reports different patterns of citation tags and their in-text citations, and highlights the problems (mathematical ambiguities, wrong allotments, commonality in content, and string variation) in identifying in-text citations in scientific documents. The accurate identification of in-text citations will help information retrieval systems, digital libraries, and citation indexes.
238188daf1ceb8000447c4321125a30ad45c55b8
Transimpedance Amplifier ( TIA ) Design for 400 Gb / s Optical Fiber Communications
Analog circuit/IC design for high-speed optical fiber communication is a fairly new research area in Dr. Ha's group. In the first project, sponsored by ETRI (Electronics and Telecommunications Research Institute), we started to design the building blocks of a receiver for next-generation 400 Gb/s optical fiber communication. In this thesis research, a transceiver architecture based on 4x100 Gb/s parallel communication is proposed. As part of the receiver, a transimpedance amplifier (TIA) for 100 Gb/s optical communication is designed, analyzed, and simulated; simulation results demonstrate the feasibility of the proposed architecture. Compound semiconductor technologies have traditionally dominated high-speed optical transceiver design because of their inherently high mobility and low noise, but they are power hungry and bulky, which makes them less attractive for highly integrated circuit design. CMOS technology, on the contrary, has always drawn attention because of its low cost, low power dissipation, and high level of integration, but its notorious parasitics and inferior noise performance make high-speed transceiver design very challenging. The emergence of nano-scale CMOS offers highly scaled feature-size transistors with transition frequencies exceeding 200 GHz, which can improve optical receiver performance significantly. Increasing bandwidth to meet the target data rate is the most challenging task in TIA design, especially in CMOS technology. Several CMOS TIA architectures have been published recently [6]-[11] for 40 Gb/s data rates, with bandwidths no greater than 40 GHz. In contrast to existing works, the goal of this research is to go a step further and design a single-channel stand-alone TIA for serial 100 Gb/s data rates with enhanced bandwidth and optimized transimpedance gain, input-referred noise, and group delay variation. To achieve wide bandwidth and low group delay variation, a differential TIA with an active feedback network is proposed; the design also combines a regulated cascode front end, peaking inductors, and capacitive degeneration to obtain a wide-band response. Simulation results show 70 GHz bandwidth, 42 dBΩ transimpedance gain, and 2.8 ps of group delay variation for the proposed architecture. The input-referred noise current density is 26 pA/√Hz, while the total power dissipation from a 1.2 V supply is 24 mW. The performance of the proposed TIA is compared with other existing TIAs and shows significant improvement in bandwidth and group delay variation over other existing TIA architectures.
b58a85e46d365e47ce937ccc09d60fbcd0fc22d4
Gadge me if you can: secure and efficient ad-hoc instruction-level randomization for x86 and ARM
Code reuse attacks such as return-oriented programming are one of the most powerful threats to contemporary software. ASLR was introduced to impede these attacks by dispersing shared libraries and the executable in memory. However, in practice its entropy is rather low and, more importantly, the leakage of a single address reveals the position of a whole library in memory. The recent mitigation literature has followed the route of randomization, applying it at different stages such as the source code or the executable binary; however, the code segments still stay in one block. In contrast to previous work, our randomization solution, called Xifer, (1) disperses all code (executable and libraries) across the whole address space, (2) re-randomizes the address space for each run, (3) is compatible with code signing, and (4) requires neither offline static analysis nor source code. Our prototype implementation supports the Linux ELF file format and covers both mainstream processor architectures, x86 and ARM. Our evaluation demonstrates that Xifer performs efficiently at load time and during run time (1.2% overhead).
8c0031cd1df734ac224c8c1daf3ce858140c99d5
Correlated Topic Models
Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets.
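The central modeling move is easy to demonstrate: draw a correlated Gaussian vector and push it through the softmax (logistic) map, so off-diagonal covariance makes some topics co-occur. A small numpy sketch with illustrative values (the topic labels and covariance entries are assumptions, not fitted quantities):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.zeros(3)                      # 3 topics: e.g. genetics, disease, astronomy
Sigma = np.array([[1.0, 0.8, -0.5],   # genetics and disease positively correlated
                  [0.8, 1.0, -0.5],
                  [-0.5, -0.5, 1.0]])

eta = rng.multivariate_normal(mu, Sigma, size=5)              # one draw per document
theta = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)  # logistic normal proportions
print(theta.round(3))   # rows sum to 1; correlated topics tend to rise together
```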
10d10df314c1b58f5c83629e73a35185876cd4e2
Multi-task Gaussian Process Prediction
In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a “free-form” covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
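Under the block design, the model's covariance is the Kronecker product of a free-form task matrix and an input kernel. A numpy sketch, assuming an RBF input kernel and a fixed 2-task covariance in place of the learned one (all values are illustrative):

```python
import numpy as np

def rbf(X, Y, ell=1.0):
    """Squared-exponential kernel between row vectors of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

X = np.linspace(0, 1, 4)[:, None]     # shared inputs (block design)
Kx = rbf(X, X)                        # n x n input covariance
Kf = np.array([[1.0, 0.7],            # free-form 2x2 task covariance
               [0.7, 1.5]])           # (learned in the paper; fixed here)

K = np.kron(Kf, Kx)                   # full (2n x 2n) multi-task covariance
noise = 1e-6 * np.eye(K.shape[0])
y = np.random.default_rng(1).multivariate_normal(np.zeros(K.shape[0]), K + noise)
print(y.reshape(2, -1))               # correlated samples for task 1 and task 2
```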
c3e2ad2da16f15d212817f833d7dec238a45154d
Recognizing Human Activities from Raw Accelerometer Data Using Deep Neural Networks
Activity recognition from wearable sensor data has been researched for many years. Previous works usually extracted features manually; these hand-designed features were then fed into classifiers as inputs. Because manually extracted features are chosen somewhat blindly, it is hard to select features suited to a specific classification task, and this heuristic approach does not generalize across application domains, since different domains require different features. There has also been work that used auto-encoders to learn features automatically and then fed them into a K-nearest neighbor classifier; however, such features are learned in an unsupervised manner without using label information, and thus might not be related to the specific classification task. In this paper, we recommend deep neural networks (DNNs) for activity recognition, which can automatically learn suitable features. DNNs overcome the blindness of hand-designed features and make use of the precious label information to improve activity recognition performance. We conducted experiments on three publicly available activity recognition datasets and compared deep neural networks with traditional methods, including those that extract features manually and auto-encoders followed by a K-nearest neighbor classifier. The results showed that deep neural networks could generalize across different application domains and achieved higher accuracy than the traditional methods.
730fb0508ddb4dbcbd009f326b0298bfdbe9da8c
Sketch Recognition by Ensemble Matching of Structured Features
Sketch recognition aims to automatically classify human hand sketches of objects into known categories. This has become an increasingly desirable capability due to recent advances in human-computer interaction on portable devices. The problem is nontrivial because of the sparse and abstract nature of hand drawings compared to photographic images of objects, compounded by a highly variable degree of detail in human sketches. To this end, we present a method for the representation and matching of sketches by exploiting not only local features but also global structures of sketches, through a star-graph-based ensemble matching strategy. Different local feature representations were evaluated using the star graph model to demonstrate the effectiveness of the ensemble matching of structured features. We further show that by encapsulating holistic structure matching and learned bag-of-features models into a single framework, notable recognition performance improvement over the state-of-the-art can be observed. Extensive comparative experiments were carried out using the currently largest sketch dataset, released by Eitz et al. [15], with over 20,000 sketches of 250 object categories generated by AMT (Amazon Mechanical Turk) crowd-sourcing.
5551e57e8d215519d8a671321d7a0d99e5ad53f0
Measuring complexity using FuzzyEn, ApEn, and SampEn.
This paper compares three related measures of complexity: ApEn, SampEn, and FuzzyEn. Since vector similarity in ApEn and SampEn is defined on the basis of the hard and sensitive boundary of the Heaviside function, these two families of statistics are highly sensitive to parameter selection and may be invalid for small parameter values. Importing the concept of fuzzy sets, we developed a new measure, FuzzyEn, where vector similarity is defined by a fuzzy similarity degree based on fuzzy membership functions and the vectors' shapes. The soft and continuous boundaries of fuzzy functions ensure the continuity as well as the validity of FuzzyEn at small parameters, and the finer detail captured by fuzzy functions makes FuzzyEn a more accurate entropy definition than ApEn and SampEn. In addition, defining similarity on the vectors' shapes, together with the exclusion of self-matches, gives FuzzyEn stronger relative consistency and less dependence on data length. Both theoretical analysis and experimental results show that FuzzyEn provides an improved evaluation of signal complexity and can be more conveniently and powerfully applied to short time series contaminated by noise.
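A compact sketch of FuzzyEn as described: baseline-removed embedding vectors, Chebyshev distances mapped through the exponential membership exp(-(d/r)^n), compared at dimensions m and m+1. The parameter values are common defaults rather than prescriptions from the paper, and the vector counts are slightly simplified:

```python
import numpy as np

def fuzzyen(x, m=2, r=0.2, n=2):
    """Fuzzy entropy of a 1-D series x (tolerance r is scaled by the series SD)."""
    x = np.asarray(x, float)
    r = r * x.std()

    def phi(dim):
        # Baseline-removed embedding vectors.
        N = len(x) - dim
        vecs = np.array([x[i:i + dim] for i in range(N)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)
        # Chebyshev distance between all pairs, self-matches excluded.
        d = np.abs(vecs[:, None, :] - vecs[None, :, :]).max(-1)
        sim = np.exp(-(d ** n) / (r ** n))   # fuzzy membership exp(-(d/r)^n)
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (N * (N - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
print(fuzzyen(rng.standard_normal(500)))                  # white noise: relatively high
print(fuzzyen(np.sin(np.linspace(0, 20 * np.pi, 500))))   # regular signal: lower
```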
4d276039b421dfcd9328f452c20a890d3ed2ac96
Cannabis, a complex plant: different compounds and different effects on individuals.
Cannabis is a complex plant, with major compounds such as delta-9-tetrahydrocannabinol and cannabidiol, which have opposing effects. The discovery of its compounds has led to the further discovery of an important neurotransmitter system called the endocannabinoid system. This system is widely distributed in the brain and in the body, and is considered to be responsible for numerous significant functions. There has been a recent and consistent worldwide increase in cannabis potency, with increasing associated health concerns. A number of epidemiological research projects have shown links between dose-related cannabis use and an increased risk of developing an enduring psychotic illness. However, it is also known that not everyone who uses cannabis is affected adversely in the same way. What makes someone more susceptible to its negative effects is not yet known; however, there are some emerging vulnerability factors, ranging from certain genes to personality characteristics. In this article we first provide an overview of the biochemical basis of cannabis research by examining the different effects of the two main compounds of the plant and the endocannabinoid system, and then go on to review available information on the possible factors explaining the variation of its effects across individuals.
144fea3fd43bd5ef6569768925425e5607afa1f0
Insertion, Deletion, or Substitution? Normalizing Text Messages without Pre-categorization nor Supervision
Most text message normalization approaches are based on supervised learning and rely on human-labeled training data. In addition, the nonstandard words are often categorized into different types and specific models are designed to tackle each type. In this paper, we propose a unified letter transformation approach that requires neither pre-categorization nor human supervision. Our approach models the generation process from dictionary words to nonstandard tokens under a sequence labeling framework, where each letter in the dictionary word can be retained, removed, or substituted by other letters/digits. To avoid the expensive and time-consuming hand-labeling process, we automatically collected a large set of noisy training pairs using a novel web-based approach and performed character-level alignment for model training. Experiments on both Twitter and SMS messages show that our system significantly outperformed the state-of-the-art deletion-based abbreviation system and the jazzy spell checker (absolute accuracy gains of 21.69% and 18.16% over the jazzy spell checker on the two test sets, respectively).
04d7b7851683809cab561d09b5c5c80bd5c33c80
What's in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference.
22f7c40bd3c188e678796d2f1ad9c19a745e83c7
Is imitation learning the route to humanoid robots?
This review investigates two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It is postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. Imitation learning focuses on three important issues: efficient motor learning, the connection between action and perception, and modular motor control in the form of movement primitives. It is reviewed here how research on representations of, and functional connections between, action and perception have contributed to our understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution has provided a hypothetical neural basis of imitation. Computational approaches to imitation learning are also described, initially from the perspective of traditional AI and robotics, but also from the perspective of neural network models and statistical-learning research. Parallels and differences between biological and computational approaches to imitation are highlighted and an overview of current projects that actually employ imitation learning for humanoid robots is given.
248040fa359a9f18527e28687822cf67d6adaf16
A survey of robot learning from demonstration
We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state-to-action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research.
37ca75f5f6664fc9ed835a53b48258ec92eb73cd
Learning Dependency-Based Compositional Semantics
Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to instead learn a semantic parser from question–answer pairs, where the logical form is modeled as a latent variable. We develop a new semantic formalism, dependency-based compositional semantics (DCS), and define a log-linear distribution over DCS logical forms. The model parameters are estimated using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, we show that our system obtains comparable accuracies to even state-of-the-art systems that do require annotated logical forms.
900cfd2af153772ffe0db3e60a6e2b9ec381e12f
Understanding physiological responses to stressors during physical activity
With advances in physiological sensors, we are able to understand people's physiological status and recognize stress in order to provide beneficial services. Despite the great potential of physiological stress recognition, some critical issues need to be addressed, such as the sensitivity and variability of physiology to many factors other than stress (e.g., physical activity). To address these issues, in this paper we focus on understanding physiological responses to both stressors and physical activity, and we perform stress recognition particularly in situations with multiple stimuli: physical activity and stressors. We construct stress models that correspond to individual situations and validate our stress modeling in the presence of physical activity. Analysis of our experiments provides an understanding of how physiological responses change with different stressors and how physical activity confounds stress recognition based on physiological responses. In both objective and subjective settings, the accuracy of stress recognition drops by more than 14% when physical activity is performed. However, by modularizing stress models with respect to physical activity, we can recognize stress with accuracies of 82% (objective stress) and 87% (subjective stress), more than a 5-10% improvement over approaches that do not take physical activity into account.
3572b462c94b5aba749f567628606c46fa124118
Identifying Learning Styles in Learning Management Systems by Using Indications from Students' Behaviour
Making students aware of their learning styles and presenting them with learning material that incorporates their individual learning styles has the potential to make learning easier for students and to increase their learning progress. This paper proposes an automatic approach for identifying learning styles with respect to the Felder-Silverman learning style model by inferring students' learning styles from their behaviour while they are learning in an online course. The approach was developed for learning management systems, which are commonly used in e-learning. In order to evaluate the proposed approach, a study with 127 students was performed, comparing the results of the automatic approach with those of a learning style questionnaire. The evaluation yielded good results and demonstrated that the proposed approach is suitable for identifying learning styles. By using the proposed approach, students' learning styles can be identified automatically and used to support students by considering their individual learning styles.
24acb0110e57de29f5be55f52887b3cd41d1bf12
Disentangling top-down vs. bottom-up and low-level vs. high-level influences on eye movements over time
Bottom-up and top-down, as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analysing their influence over time. For this purpose we develop a saliency model which is based on the internal representation of a recent early spatial vision model to measure the low-level bottom-up factor. To measure the influence of high-level bottom-up features, we use a recent DNN-based saliency model. To account for top-down influences, we evaluate the models on two large datasets with different tasks: first, a memorisation task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterised by a gradual broadening of the fixation density, and a steady state which is reached after roughly 10 fixations. Saccade target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search dataset, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that low-level bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later this high-level bottom-up control can be overruled by top-down influences.
89c44bb1af32a27e020a703e02bbe738158081cd
Workspace characterization for concentric tube continuum robots
Concentric tube robots exhibit complex workspaces due to the way their component tubes bend and twist as they interact with one another. This paper explores ways to compute and characterize their workspaces. We use Monte Carlo random samples of the robot's joint space and a discrete volumetric workspace representation, which can describe both reachability and redundancy. Experiments on two physical prototypes are provided to illustrate the proposed approach.
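The proposed recipe is largely independent of the kinematics: sample the joint space, evaluate forward kinematics, and bin tip positions into voxels whose hit counts double as a redundancy measure. The forward-kinematics function, joint limits, and voxel resolution below are stand-ins, not a concentric-tube mechanics model:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_kinematics(q):
    """Stand-in for the concentric-tube kinematics: joint vector -> tip (x, y, z)."""
    a, b = q
    return np.array([np.cos(a) * b, np.sin(a) * b, 0.5 * b])

n_samples, res = 100_000, 0.05
grid = {}
for _ in range(n_samples):
    q = rng.uniform([-np.pi, 0.2], [np.pi, 1.0])     # illustrative joint limits
    tip = forward_kinematics(q)
    voxel = tuple(np.floor(tip / res).astype(int))   # discretize the workspace
    grid[voxel] = grid.get(voxel, 0) + 1             # hit count = redundancy measure

print(f"{len(grid)} voxels reached; max redundancy {max(grid.values())} samples/voxel")
```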
fc73204706bf79a52ba7b65bbf2cf77fa5072799
The use of vacuum assisted closure (VAC™) in soft tissue injuries after high energy pelvic trauma
Application of vacuum-assisted closure (VAC™) to soft tissue defects after high-energy pelvic trauma is described in a retrospective study at a level one trauma center. Between 2002 and 2004, 13 patients were treated for severe soft tissue injuries in the pelvic region. All musculoskeletal injuries were treated with multiple irrigation and debridement procedures and broad-spectrum antibiotics. VAC™ was applied as temporary coverage for defects and for wound conditioning. The injuries included three patients with traumatic hemipelvectomies. Seven patients had pelvic ring fractures, with five Morel-Lavallée lesions and two open pelviperineal traumas. One patient suffered from an open iliac crest fracture and a Morel-Lavallée lesion. Two patients sustained near-complete pertrochanteric amputations of the lower limb. The average injury severity score was 34.1 ± 1.4. The application of VAC™ started on average 3.8 ± 0.4 days after trauma and was used for 15.5 ± 1.8 days. The dressing changes were performed on average every 3 days. One patient (8%) with a traumatic hemipelvectomy died in the course of treatment due to septic complications. High-energy trauma causing severe soft tissue injuries requires multiple operative debridements to prevent high morbidity and mortality rates. The application of VAC™ as temporary coverage of large tissue defects in the pelvic region supports wound conditioning and facilitates definitive wound closure.
71b090c082cd80ca82d5e8170cc08f13e2e85837
Evaluating the Impact of Social Selfishness on the Epidemic Routing in Delay Tolerant Networks
To cope with the uncertainty of transmission opportunities between mobile nodes, Delay Tolerant Network (DTN) routing exploits an opportunistic forwarding mechanism. This mechanism requires nodes to forward messages in a cooperative and altruistic way. However, in the real world, most nodes exhibit selfish behaviors, such as individual and social selfishness. In this paper, we investigate how social selfishness influences the performance of epidemic routing in DTNs. First, we model the message delivery process under social selfishness as a two-dimensional continuous-time Markov chain. Then, we obtain explicit expressions for the message delivery delay and delivery cost. Numerical results show that DTNs are quite robust to social selfishness: it increases the message delivery delay, but reduces the delivery cost even more.
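The flavor of the Markov-chain analysis can be reproduced with a small simulation in which carriers meet non-carriers at exponential rates and a socially selfish relay accepts a copy only with probability p. The population size, meeting rate, and single forwarding probability are illustrative simplifications of the paper's two-dimensional chain:

```python
import random

def delivery_delay(n=50, beta=0.05, p_social=0.5, trials=2000):
    """Mean epidemic delivery delay with probabilistic (selfish) forwarding."""
    total = 0.0
    for _ in range(trials):
        infected, t = 1, 0.0                 # source holds the message at t = 0
        while True:
            # Rate at which some carrier meets some non-carrier (incl. destination).
            rate = beta * infected * (n - infected)
            t += random.expovariate(rate)
            # The met non-carrier is the destination w.p. 1/(n - infected).
            if random.random() < 1.0 / (n - infected):
                break                        # delivered
            if random.random() < p_social:   # a selfish relay may refuse the copy
                infected += 1
        total += t
    return total / trials

print("p=1.0 (altruistic):", round(delivery_delay(p_social=1.0), 3))
print("p=0.5 (selfish):   ", round(delivery_delay(p_social=0.5), 3))
```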
0dcad1ae3bc99c5f03625255ac4261bc6cbfdf91
Crowdsourcing for book search evaluation: impact of hit design on comparative system ranking
The evaluation of information retrieval (IR) systems over special collections, such as large book repositories, is out of reach of traditional methods that rely upon editorial relevance judgments. Increasingly, the use of crowdsourcing to collect relevance labels has been regarded as a viable alternative that scales with modest costs. However, crowdsourcing suffers from undesirable worker practices and low quality contributions. In this paper we investigate the design and implementation of effective crowdsourcing tasks in the context of book search evaluation. We observe the impact of aspects of the Human Intelligence Task (HIT) design on the quality of relevance labels provided by the crowd. We assess the output in terms of label agreement with a gold standard data set and observe the effect of the crowdsourced relevance judgments on the resulting system rankings. This enables us to observe the effect of crowdsourcing on the entire IR evaluation process. Using the test set and experimental runs from the INEX 2010 Book Track, we find that varying the HIT design, and the pooling and document ordering strategies leads to considerable differences in agreement with the gold set labels. We then observe the impact of the crowdsourced relevance label sets on the relative system rankings using four IR performance metrics. System rankings based on MAP and Bpref remain less affected by different label sets while the Precision@10 and nDCG@10 lead to dramatically different system rankings, especially for labels acquired from HITs with weaker quality controls. Overall, we find that crowdsourcing can be an effective tool for the evaluation of IR systems, provided that care is taken when designing the HITs.
1f8b930d3a19f8b2ed37808d9e5c2344fad1942e
Information quality benchmarks: product and service performance
Information quality (IQ) is an inexact science in terms of assessment and benchmarks. Although various aspects of quality and information have been investigated [1, 4, 6, 7, 9, 12], there is still a critical need for a methodology that assesses how well organizations develop information products and deliver information services to consumers. Benchmarks developed from such a methodology can help compare information quality across organizations, and provide a baseline for assessing IQ improvements.
22f31560263e4723a7f16ae6313109b43e0944d3
Recoding, storage, rehearsal and grouping in verbal short-term memory: an fMRI study
Functional magnetic resonance imaging (fMRI) of healthy volunteers is used to localise the processes involved in verbal short-term memory (VSTM) for sequences of visual stimuli. Specifically, the brain areas underlying (i) recoding, (ii) storage, (iii) rehearsal and (iv) temporal grouping are investigated. Successive subtraction of images obtained from five tasks revealed a network of left-lateralised areas, including posterior temporal regions, supramarginal gyri, Broca's area and dorsolateral premotor cortex. The results are discussed in relation to neuropsychological distinctions between recoding and rehearsal, previous neuroimaging studies of storage and rehearsal, and, in particular, a recent connectionist model of VSTM that makes explicit assumptions about the temporal organisation of rehearsal. The functional modules of this model are tentatively mapped onto the brain in light of the imaging results. Our findings are consistent with the representation of verbal item information in left posterior temporal areas and short-term storage of phonological information in left supramarginal gyrus. They also suggest that left dorsolateral premotor cortex is involved in the maintenance of temporal order, possibly as the location of a timing signal used in the rhythmic organisation of rehearsal, whereas Broca's area supports the articulatory processes required for phonological recoding of visual stimuli.
d4a051278307269ce63a8822e1b08b84a5c543e4
Discourse Annotation of Non-native Spontaneous Spoken Responses Using the Rhetorical Structure Theory Framework
The availability of the Rhetorical Structure Theory (RST) Discourse Treebank has spurred substantial research into discourse analysis of written texts; however, limited research has been conducted to date on RST annotation and parsing of spoken language, in particular, non-native spontaneous speech. Considering that the measurement of discourse coherence is typically a key metric in human scoring rubrics for assessments of spoken language, we initiated a research effort to obtain RST annotations of a large number of non-native spoken responses from a standardized assessment of academic English proficiency. The resulting inter-annotator κ agreements on the three different levels of Span, Nuclearity, and Relation are 0.848, 0.766, and 0.653, respectively. Furthermore, a set of features was explored to evaluate the discourse structure of non-native spontaneous speech based on these annotations; the highest-performing feature showed a correlation of 0.612 with scores of discourse coherence provided by expert human raters.
79a630e45169a73d872f4c76b48a020569c41047
Evaluating C-RAN fronthaul functional splits in terms of network level energy and cost savings
The placement of the complete baseband processing in a centralized pool results in high data rate requirements and inflexibility of the fronthaul network, which challenges the energy and cost effectiveness of the cloud radio access network (C-RAN). Recently, a redesign of the C-RAN through functional splits in the baseband processing chain has been proposed to overcome these challenges. This paper evaluates, by mathematical and simulation methods, different splits with respect to network-level energy and cost efficiency, keeping in mind the expected quality of service. The proposed mathematical model quantifies the multiplexing gains and the trade-offs between centralization and decentralization concerning the cost of the pool, fronthaul network capacity, and resource utilization. The event-based simulation captures the influence of traffic load dynamics and traffic type variation on the design of an efficient fronthaul network. Based on the obtained results, we derive a principle for fronthaul dimensioning based on the traffic profile. This principle allows for an efficient radio access network with respect to multiplexing gains while achieving the expected users' quality of service.
0a3d4fe59e92e486e5d00aba157f3fdfdad0e0c5
Classes of Multiagent Q-learning Dynamics with epsilon-greedy Exploration
Q-learning in single-agent environments is known to converge in the limit given sufficient exploration. The same algorithm has been applied, with some success, in multiagent environments, where traditional analysis techniques break down. Using established dynamical systems methods, we derive and study an idealization of Q-learning in 2-player 2-action repeated general-sum games. In particular, we address the discontinuous case of ε-greedy exploration and use it as a proxy for value-based algorithms to highlight a contrast with existing results in policy search. Analogously to previous results for gradient ascent algorithms, we provide a complete catalog of the convergence behavior of the ε-greedy Q-learning algorithm by introducing new subclasses of these games. We identify two subclasses of Prisoner's Dilemma-like games where the application of Q-learning with ε-greedy exploration results in higher-than-Nash average payoffs for some initial conditions.
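The setting is easy to reproduce empirically: two ε-greedy Q-learners in a repeated Prisoner's Dilemma. The payoff matrix and learning parameters below are illustrative, and the paper studies an idealized continuous-time version of these dynamics rather than this finite-sample simulation:

```python
import random

# Row player's Prisoner's Dilemma payoffs; the game is symmetric.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}   # 0 = cooperate, 1 = defect

def egreedy(q, eps):
    """Pick a random action w.p. eps, otherwise the greedy one."""
    return random.randrange(2) if random.random() < eps else max((0, 1), key=q.__getitem__)

def run(steps=200_000, eps=0.1, alpha=0.01, gamma=0.9):
    q1, q2 = [0.0, 0.0], [0.0, 0.0]
    total1 = 0.0
    for _ in range(steps):
        a1, a2 = egreedy(q1, eps), egreedy(q2, eps)
        r1, r2 = PAYOFF[(a1, a2)], PAYOFF[(a2, a1)]
        # Stateless repeated game: bootstrap on the current greedy value.
        q1[a1] += alpha * (r1 + gamma * max(q1) - q1[a1])
        q2[a2] += alpha * (r2 + gamma * max(q2) - q2[a2])
        total1 += r1
    return total1 / steps, q1

avg, q = run()
print(f"average payoff {avg:.2f}; Q-values {q}")
print("Nash mutual defection would average 1.0 per step")
```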
b927e4a17a0bf5624c0438308093969957cd764e
Behaviour Analysis of Multilayer Perceptrons with Multiple Hidden Neurons and Hidden Layers
—The terms " Neural Network " (NN) and " Artificial Neural Network " (ANN) usually refer to a Multilayer Perceptron Network. It process the records one at a time, and "learn" by comparing their prediction of the record with the known actual record. The problem of model selection is considerably important for acquiring higher levels of generalization capability in supervised learning. This paper discussed behavioral analysis of different number of hidden layers and different number of hidden neurons. It's very difficult to select number of hidden layers and hidden neurons. There are different methods like Akaike's Information Criterion, Inverse test method and some traditional methods are used to find Neural Network architecture. What to do while neural network is not getting train or errors are not getting reduced. To reduce Neural Network errors, what we have to do with Neural Network architecture. These types of techniques are discussed and also discussed experiment and result. To solve different problems a neural network should be trained to perform correct classification..
5bfb9f011d5e5414d6d5463786bdcbaee7292737
Chemical crystal identification with deep learning machine vision
This study was carried out to test the ability of deep learning machine vision to identify microscopic objects and geometries found in chemical crystal structures. A database of 6994 light microscope images showing microscopic crystal details of selected chemical compounds, along with 180 images of an unknown chemical, was created to train and test the deep learning models, respectively. The models used were GoogLeNet (a 22-layer network) and VGG-16 (a 16-layer network), based on the Caffe framework (University of California, Berkeley, CA) of the DIGITS platform (NVIDIA Corporation, Santa Clara, CA). The two models were successfully trained on the images, achieving validation accuracies of 97.38% and 99.65%, respectively. Finally, both models correctly identified the unknown chemical sample with high probability scores of 93.34% (GoogLeNet) and 99.41% (VGG-16). The positive results found in this study can be further applied to other unknown-sample identification tasks using light microscopy coupled with deep learning machine vision.
0743af243b8912abfde5a75bcb9147d3734852b5
Opinion mining from student feedback data using supervised learning algorithms
This paper explores opinion mining using supervised learning algorithms to find the polarity of student feedback based on pre-defined features of teaching and learning. The study involves the application of a combination of machine learning and natural language processing techniques to student feedback data gathered from module evaluation survey results of Middle East College, Oman. In addition to providing a step-by-step explanation of the process of implementing opinion mining from student comments using the open-source data analytics tool RapidMiner, this paper also presents a comparative performance study of algorithms such as SVM, Naïve Bayes, K-Nearest Neighbor, and a Neural Network classifier. The data set extracted from the survey is subjected to preprocessing and then used to train the algorithms for binomial classification. The trained models can also predict the polarity of student comments based on extracted features such as examination and teaching. The results are compared to find the best performance with respect to various evaluation criteria for the different algorithms.
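The comparative study can be sketched with scikit-learn in place of RapidMiner; the toy comments below are invented stand-ins for the survey data, and the pipeline is a generic illustration rather than the paper's exact configuration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = [  # invented stand-ins for student feedback
    "the teaching was clear and engaging", "examination was too hard and unfair",
    "great examples in lectures", "feedback on assignments came too late",
    "module was well organised", "the exam did not match the syllabus",
] * 10
labels = [1, 0, 1, 0, 1, 0] * 10          # 1 = positive, 0 = negative

models = {
    "SVM": LinearSVC(),
    "Naive Bayes": MultinomialNB(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)   # text -> tf-idf -> classifier
    score = cross_val_score(pipe, comments, labels, cv=5).mean()
    print(f"{name}: {score:.2f}")
```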
c9aa18b67ffda7a867cf431ff0b382a60ac8998c
Physical experience enhances science learning.
Three laboratory experiments involving students' behavior and brain imaging and one randomized field experiment in a college physics class explored the importance of physical experience in science learning. We reasoned that students' understanding of science concepts such as torque and angular momentum is aided by activation of sensorimotor brain systems that add kinetic detail and meaning to students' thinking. We tested whether physical experience with angular momentum increases involvement of sensorimotor brain systems during students' subsequent reasoning and whether this involvement aids their understanding. The physical experience, a brief exposure to forces associated with angular momentum, significantly improved quiz scores. Moreover, improved performance was explained by activation of sensorimotor brain regions when students later reasoned about angular momentum. This finding specifies a mechanism underlying the value of physical experience in science education and leads the way for classroom practices in which experience with the physical world is an integral part of learning.
4c9bd91bd044980f5746d623315be5285cc799c9
Enhanced Sphere Tracing
In this paper we present several performance and quality enhancements to classical sphere tracing: First, we propose a safe, over-relaxation-based method for accelerating sphere tracing. Second, a method for dynamically preventing self-intersections upon converting signed distance bounds enables controlling precision and rendering performance. In addition, we present a method for significantly accelerating the sphere tracing intersection test for convex objects that are enclosed in convex bounding volumes. We also propose a screen-space metric for the retrieval of a good intersection point candidate, in case sphere tracing does not converge thus increasing rendering quality without sacrificing performance. Finally, discontinuity artifacts common in sphere tracing are reduced using a fixed-point iteration algorithm. We demonstrate complex scenes rendered in real-time with our method. The methods presented in this paper have more universal applicability beyond rendering procedurally generated scenes in real-time and can also be combined with path-tracing-based global illumination solutions.
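The over-relaxation acceleration with its safety fallback fits in a short loop: march by ω·d with ω > 1, and whenever consecutive unbounding spheres fail to overlap, step back and disable the relaxation. The SDF, ω, and tolerances below are illustrative choices, not values from the paper:

```python
def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance to an illustrative sphere."""
    return sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 - radius

def sphere_trace(origin, direction, omega=1.6, eps=1e-4, max_iter=256):
    """Over-relaxed sphere tracing; relaxation is disabled after an overshoot."""
    t, prev_radius, step = 0.0, 0.0, 0.0
    for _ in range(max_iter):
        p = tuple(o + t * u for o, u in zip(origin, direction))
        radius = abs(sdf_sphere(p))
        # Overshoot test: consecutive unbounding spheres must overlap.
        if omega > 1.0 and radius + prev_radius < step:
            step -= omega * step          # step back past the overshoot
            omega = 1.0                   # fall back to plain sphere tracing
        else:
            if radius < eps:
                return t                  # surface hit
            step = omega * radius
        prev_radius = radius
        t += step
    return None                           # no intersection found

print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # ~2.0, front of the sphere
```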
e8252ce73e330990b66441842f62f73c8cea56e4
Random Walks on Graphs: a Survey
Dedicated to the marvelous random walk of Paul Erdős through universities, continents, and mathematics. Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks, are described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling. 0. Introduction. Given a graph and a starting point, we select a neighbor of it at random, and move to this neighbor; then we select a neighbor of this point at random, and move to it, etc. The (random) sequence of points selected this way is a random walk on the graph. A random walk is a finite Markov chain that is time-reversible (see below). In fact, there is not much difference between the theory of random walks on graphs and the theory of finite Markov chains; every Markov chain can be viewed as a random walk on a directed graph, if we allow weighted edges. Similarly, time-reversible Markov chains can be viewed as random walks on undirected graphs, and symmetric Markov chains as random walks on regular symmetric graphs. In this paper we'll formulate the results in terms of random walks, and mostly restrict our attention to the undirected case. Random walks arise in many models in mathematics and physics. In fact, this is one of those notions that tend to pop up everywhere once you begin to look for them. For example, consider the shuffling of a deck of cards. Construct a graph whose nodes are all permutations of the deck, and two of them are adjacent if one arises from the other by one shuffle move (depending on how you shuffle). Then repeated shuffle moves correspond to a random walk on this graph (see Diaconis [20]). The Brownian motion of a dust particle is a random walk in the room. Models in statistical mechanics can be viewed as random walks on the set of states. The classical theory of random walks deals with random walks on simple, but infinite graphs, like grids, and studies their qualitative behaviour: does the random walk return to its starting point with probability one? Does it return infinitely often? For example, Pólya (1921) proved that …
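For illustration only (the survey itself is theoretical), one of the parameters discussed above, the cover time, can be estimated by direct simulation; the 4-cycle graph below is an arbitrary example.

```python
import random

def cover_time(adj, start, rng=random):
    """Walk until every node is visited; return the number of steps taken."""
    visited, node, steps = {start}, start, 0
    while len(visited) < len(adj):
        node = rng.choice(adj[node])  # move to a uniformly random neighbor
        visited.add(node)
        steps += 1
    return steps

# Undirected 4-cycle: 0 - 1 - 2 - 3 - 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
trials = [cover_time(adj, start=0) for _ in range(10_000)]
print("estimated mean cover time:", sum(trials) / len(trials))
```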
d15cfd75d77ba7ef8aaed6c584d3f743aa4080fa
An improved auto-tuning scheme for PID controllers.
An improved auto-tuning scheme is proposed for Ziegler-Nichols (ZN) tuned PID controllers (ZNPIDs), which usually produce excessively large overshoots, not tolerable in most situations, for high-order and nonlinear processes. To overcome this limitation, ZNPIDs are upgraded with easily interpretable heuristic rules through an online gain-modifying factor defined on the instantaneous process states. This study extends our earlier work [Mudi RK, Dey C, Lee TT. An improved auto-tuning scheme for PI controllers. ISA Trans 2008; 47: 45-52] to ZNPIDs, making the scheme more general and suitable for a wide range of processes. The proposed augmented ZNPID (AZNPID) is tested on various high-order linear and nonlinear dead-time processes, with improved performance over ZNPID, refined ZNPID (RZNPID), and other schemes reported in the literature. Stability issues are addressed for linear processes. Robust performance of AZNPID is observed when its tunable parameters and the process dead-time are varied. The proposed scheme is also implemented on a real-time servo-based position control system.
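A rough sketch of the general idea of augmenting a ZN-tuned PID with an online gain-modifying factor. The specific modifier below (damping the output as the error and its rate of change interact) is a hypothetical stand-in; the paper's actual heuristic rules on the instantaneous process states are not reproduced here.

```python
class AugmentedPID:
    """ZN-tuned PID with a hypothetical online gain-modifying factor."""

    def __init__(self, ku, tu, dt):
        # Classical Ziegler-Nichols PID settings from the ultimate gain ku
        # and the ultimate period tu.
        self.kp, self.ti, self.td = 0.6 * ku, 0.5 * tu, 0.125 * tu
        self.dt, self.integral, self.prev_error = dt, 0.0, 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Hypothetical modifier (NOT the paper's rule): shrink the control
        # action as |error * d(error)/dt| grows, to curb overshoot.
        alpha = 1.0 / (1.0 + abs(error * derivative))
        return alpha * self.kp * (error + self.integral / self.ti
                                  + self.td * derivative)

# Usage: pid = AugmentedPID(ku=2.0, tu=5.0, dt=0.01); u = pid.step(1.0, y)
```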
38b1eb892e51661cd0e3c9f6c38f1f7f8def1317
Vision: automated security validation of mobile apps at app markets
Smartphones and "app" markets are raising concerns about how third-party applications may misuse or improperly handle users' privacy-sensitive data. Fortunately, unlike in the PC world, we have a unique opportunity to improve the security of mobile applications thanks to the centralized nature of app distribution through popular app markets. Thorough validation of apps applied as part of the app market admission process has the potential to significantly enhance mobile device security. In this paper, we propose AppInspector, an automated security validation system that analyzes apps and generates reports of potential security and privacy violations. We describe our vision for making smartphone apps more secure through automated validation and outline key challenges such as detecting and analyzing security and privacy violations, ensuring thorough test coverage, and scaling to large numbers of apps.
2c68c7faa89b104b78e2850dbade5a81f0743874
A formal study of information retrieval heuristics
Empirical studies of information retrieval methods show that good retrieval performance is closely related to the use of various retrieval heuristics, such as TF-IDF weighting. One basic research question is thus: what exactly are these "necessary" heuristics that seem to cause good retrieval performance? In this paper, we present a formal study of retrieval heuristics. We formally define a set of basic desirable constraints that any reasonable retrieval function should satisfy, and check these constraints on a variety of representative retrieval functions. We find that none of these retrieval functions satisfies all the constraints unconditionally. Empirical results show that when a constraint is not satisfied, this often indicates non-optimality of the method, and when a constraint is satisfied only for a certain range of parameter values, performance tends to be poor when the parameter is out of that range. In general, we find that the empirical performance of a retrieval formula is tightly related to how well it satisfies these constraints. Thus the proposed constraints provide a good explanation of many empirical observations and make it possible to evaluate any existing or new retrieval formula analytically.
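As a small numeric illustration of constraint checking (not the paper's formal analysis), the sketch below verifies one intuitive constraint, that a document's score should grow with the raw frequency of a query term, on a common pivoted-length-normalized TF-IDF formula; the parameter values are arbitrary.

```python
import math

def tfidf_score(tf, df, n_docs, doc_len, avg_len, s=0.2):
    """Pivoted length-normalized TF-IDF score of one query term (tf >= 1)."""
    norm = 1 - s + s * doc_len / avg_len
    return (1 + math.log(1 + math.log(tf))) / norm * math.log((n_docs + 1) / df)

# Check: the score increases monotonically in term frequency.
n_docs, df, doc_len, avg_len = 1000, 50, 100, 120
scores = [tfidf_score(tf, df, n_docs, doc_len, avg_len) for tf in range(1, 6)]
assert all(a < b for a, b in zip(scores, scores[1:])), "constraint violated"
print([round(x, 3) for x in scores])
```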