Unnamed: 0 (int64, 0-41k) | title (string, length 4-274) | category (string, length 5-18) | summary (string, length 22-3.66k) | theme (8 classes) |
---|---|---|---|---|
40,810 | Flying Insect Classification with Inexpensive Sensors | cs.LG | The ability to use inexpensive, noninvasive sensors to accurately classify
flying insects would have significant implications for entomological research,
and allow for the development of many useful applications in vector control for
both medical and agricultural entomology. Given this, the last sixty years have
seen many research efforts on this task. To date, however, none of this
research has had a lasting impact. In this work, we explain this lack of
progress. We attribute the stagnation on this problem to several factors,
including the use of acoustic sensing devices, the over-reliance on the single
feature of wingbeat frequency, and the attempts to learn complex models with
relatively little data. In contrast, we show that pseudo-acoustic optical
sensors can produce vastly superior data, that we can exploit additional
features, both intrinsic and extrinsic to the insect's flight behavior, and
that a Bayesian classification approach allows us to efficiently learn
classification models that are very robust to over-fitting. We demonstrate our
findings with large scale experiments that dwarf all previous works combined,
as measured by the number of insects and the number of species considered. | computer science |
40,811 | Balancing Sparsity and Rank Constraints in Quadratic Basis Pursuit | cs.NA | We investigate the methods that simultaneously enforce sparsity and low-rank
structure in a matrix as often employed for sparse phase retrieval problems or
phase calibration problems in compressive sensing. We propose a new approach
for analyzing the trade-off between the sparsity and low-rank constraints in
these approaches which not only helps to provide guidelines to adjust the
weights between the aforementioned constraints, but also enables new simulation
strategies for evaluating performance. We then provide simulation results for
phase retrieval and phase calibration cases both to demonstrate the consistency
of the proposed method with other approaches and to evaluate the change of
performance with different weights for the sparsity and low-rank structure
constraints. | computer science |
40,812 | Simultaneous Perturbation Algorithms for Batch Off-Policy Search | math.OC | We propose novel policy search algorithms in the context of off-policy, batch
mode reinforcement learning (RL) with continuous state and action spaces. Given
a batch collection of trajectories, we perform off-line policy evaluation using
an algorithm similar to that by [Fonteneau et al., 2010]. Using this
Monte-Carlo like policy evaluator, we perform policy search in a class of
parameterized policies. We propose both first order policy gradient and second
order policy Newton algorithms. All our algorithms incorporate simultaneous
perturbation estimates for the gradient as well as the Hessian of the
cost-to-go vector, since the latter is unknown and only biased estimates are
available. We demonstrate their practicality on a simple 1-dimensional
continuous state space problem. | computer science |
40,813 | Forecasting Popularity of Videos using Social Media | cs.LG | This paper presents a systematic online prediction method (Social-Forecast)
that can accurately forecast the popularity of videos promoted by
social media. Social-Forecast explicitly considers the dynamically changing and
evolving propagation patterns of videos in social media when making popularity
forecasts, thereby being situation and context aware. Social-Forecast aims to
maximize the forecast reward, which is defined as a tradeoff between the
popularity prediction accuracy and the timeliness with which a prediction is
issued. The forecasting is performed online and requires no training phase or a
priori knowledge. We analytically bound the prediction performance loss of
Social-Forecast as compared to that obtained by an omniscient oracle and prove
that the bound is sublinear in the number of video arrivals, thereby
guaranteeing its short-term performance as well as its asymptotic convergence
to the optimal performance. In addition, we conduct extensive experiments using
real-world data traces collected from the videos shared in RenRen, one of the
largest online social networks in China. These experiments show that our
proposed method outperforms existing view-based approaches for popularity
prediction (which are not context-aware) by more than 30% in terms of
prediction rewards. | computer science |
40,814 | AIS-INMACA: A Novel Integrated MACA Based Clonal Classifier for Protein
Coding and Promoter Region Prediction | cs.CE | Most of the problems in bioinformatics are now the challenges in computing.
This paper aims at building a classifier based on Multiple Attractor Cellular
Automata (MACA) which uses fuzzy logic. It is strengthened with an artificial
Immune System Technique (AIS), Clonal algorithm for identifying a protein
coding and promoter region in a given DNA sequence. The proposed classifier,
named AIS-INMACA, introduces a novel concept of combining CA with an artificial
immune system to produce a better classifier which can address major problems
in bioinformatics. This will be the first integrated algorithm which can
predict both promoter and protein coding regions. To obtain good fitness rules
the basic concept of Clonal selection algorithm was used. The proposed
classifier can handle DNA sequences of lengths 54, 108, 162, 252, and 354. This
classifier gives the exact boundaries of both protein and promoter regions with
an average accuracy of 89.6%. This classifier was tested with 97,000 data
components which were taken from Fickett & Toung, MPromDb, and other sequences
from a renowned medical university. This proposed classifier can handle huge
data sets and can find protein and promoter regions even in mixed and
overlapped DNA sequences. This work also aims at identifying the logical
connections between the major problems in bioinformatics and tries to obtain a
common framework for addressing major problems in bioinformatics like protein
structure prediction, RNA structure prediction, predicting the splicing pattern
of any primary transcript and analysis of information content in DNA, RNA,
protein sequences and structure. This work will attract more researchers
towards the application of CA as a potential pattern classifier to many
important problems in bioinformatics. | computer science |
40,815 | DeepWalk: Online Learning of Social Representations | cs.SI | We present DeepWalk, a novel approach for learning latent representations of
vertices in a network. These latent representations encode social relations in
a continuous vector space, which is easily exploited by statistical models.
DeepWalk generalizes recent advancements in language modeling and unsupervised
feature learning (or deep learning) from sequences of words to graphs. DeepWalk
uses local information obtained from truncated random walks to learn latent
representations by treating walks as the equivalent of sentences. We
demonstrate DeepWalk's latent representations on several multi-label network
classification tasks for social networks such as BlogCatalog, Flickr, and
YouTube. Our results show that DeepWalk outperforms challenging baselines which
are allowed a global view of the network, especially in the presence of missing
information. DeepWalk's representations can provide $F_1$ scores up to 10%
higher than competing methods when labeled data is sparse. In some experiments,
DeepWalk's representations are able to outperform all baseline methods while
using 60% less training data. DeepWalk is also scalable. It is an online
learning algorithm which builds useful incremental results, and is trivially
parallelizable. These qualities make it suitable for a broad class of
real-world applications such as network classification and anomaly detection. | computer science |
40,816 | Comparison of Multi-agent and Single-agent Inverse Learning on a
Simulated Soccer Example | cs.LG | We compare the performance of Inverse Reinforcement Learning (IRL) with the
relatively new model of Multi-agent Inverse Reinforcement Learning (MIRL).
Before comparing the methods, we extend a published Bayesian IRL approach,
applicable only to the case where the reward is state dependent, to a general
one capable of tackling the case where the reward depends on both state and
action. Comparison between IRL and MIRL is made in the context of an abstract
soccer game, using both a game model in which the reward depends only on state
and one in which it depends on both state and action. Results suggest that the
IRL approach performs much worse than the MIRL approach. We speculate that the
underperformance of IRL is because it fails to capture equilibrium information
in the manner possible in MIRL. | computer science |
40,817 | Relevant Feature Selection Model Using Data Mining for Intrusion
Detection System | cs.CR | Network intrusions have become a significant threat in recent years as a
result of the increased reliance of critical systems on computer networks.
Intrusion detection system (IDS) has been widely deployed as a defense measure
for computer networks. Features extracted from network traffic can be used as
signs to detect anomalies. However, with the huge amount of network traffic,
the collected data contain irrelevant and redundant features that affect the
detection rate of the IDS, consume a high amount of system resources, and slow
down the training and testing process of the IDS. In this paper, a new
feature selection model is proposed; this model can effectively select the most
relevant features for intrusion detection. Our goal is to build a lightweight
intrusion detection system by using a reduced features set. Deleting irrelevant
and redundant features helps to build a faster training and testing process, to
have less resource consumption as well as to maintain high detection rates. The
effectiveness and the feasibility of our feature selection model were verified
by several experiments on KDD intrusion detection dataset. The experimental
results strongly showed that our model is not only able to yield high detection
rates but also to speed up the detection process. | computer science |
40,818 | Multi-label Ferns for Efficient Recognition of Musical Instruments in
Recordings | cs.LG | In this paper we introduce multi-label ferns, and apply this technique for
automatic classification of musical instruments in audio recordings. We compare
the performance of our proposed method to a set of binary random ferns, using
jazz recordings as input data. Our main result is obtaining much faster
classification and higher F-score. We also achieve substantial reduction of the
model size. | computer science |
40,819 | Privacy Tradeoffs in Predictive Analytics | cs.CR | Online services routinely mine user data to predict user preferences, make
recommendations, and place targeted ads. Recent research has demonstrated that
several private user attributes (such as political affiliation, sexual
orientation, and gender) can be inferred from such data. Can a
privacy-conscious user benefit from personalization while simultaneously
protecting her private attributes? We study this question in the context of a
rating prediction service based on matrix factorization. We construct a
protocol of interactions between the service and users that has remarkable
optimality properties: it is privacy-preserving, in that no inference algorithm
can succeed in inferring a user's private attribute with a probability better
than random guessing; it has maximal accuracy, in that no other
privacy-preserving protocol improves rating prediction; and, finally, it
involves a minimal disclosure, as the prediction accuracy strictly decreases
when the service reveals less information. We extensively evaluate our protocol
using several rating datasets, demonstrating that it successfully blocks the
inference of gender, age and political affiliation, while incurring less than
5% decrease in the accuracy of rating prediction. | computer science |
40,820 | Complexity of Equivalence and Learning for Multiplicity Tree Automata | cs.LG | We consider the complexity of equivalence and learning for multiplicity tree
automata, i.e., weighted tree automata over a field. We first show that the
equivalence problem is logspace equivalent to polynomial identity testing, the
complexity of which is a longstanding open problem. Secondly, we derive lower
bounds on the number of queries needed to learn multiplicity tree automata in
Angluin's exact learning model, over both arbitrary and fixed fields.
Habrard and Oncina (2006) give an exact learning algorithm for multiplicity
tree automata, in which the number of queries is proportional to the size of
the target automaton and the size of a largest counterexample, represented as a
tree, that is returned by the Teacher. However, the smallest
tree-counterexample may be exponential in the size of the target automaton.
Thus the above algorithm does not run in time polynomial in the size of the
target automaton, and has query complexity exponential in the lower bound.
Assuming a Teacher that returns minimal DAG representations of
counterexamples, we give a new exact learning algorithm whose query complexity
is quadratic in the target automaton size, almost matching the lower bound, and
improving the best previously-known algorithm by an exponential factor. | computer science |
40,821 | Application of Machine Learning Techniques in Aquaculture | cs.CE | In this paper we present applications of different machine learning
algorithms in aquaculture. Machine learning algorithms learn models from
historical data. In aquaculture historical data are obtained from farm
practices, yields, and environmental data sources. Associations between these
different variables can be obtained by applying machine learning algorithms to
historical data. | computer science |
40,822 | Learning with many experts: model selection and sparsity | stat.ME | Experts classifying data are often imprecise. Recently, several models have
been proposed to train classifiers using the noisy labels generated by these
experts. How to choose between these models? In such situations, the true
labels are unavailable. Thus, one cannot perform model selection using the
standard versions of methods such as empirical risk minimization and cross
validation. In order to allow model selection, we present a surrogate loss and
provide theoretical guarantees that assure its consistency. Next, we discuss
how this loss can be used to tune a penalization which introduces sparsity in
the parameters of a traditional class of models. Sparsity provides more
parsimonious models and can avoid overfitting. Nevertheless, it has seldom been
discussed in the context of noisy labels due to the difficulty in model
selection and, therefore, in choosing tuning parameters. We apply these
techniques to several sets of simulated and real data. | computer science |
40,823 | Efficient classification using parallel and scalable compressed model
and Its application on intrusion detection | cs.LG | In order to achieve high efficiency of classification in intrusion detection,
a compressed model is proposed in this paper which combines horizontal
compression with vertical compression. OneR is utilized as horizontal
compression for attribute reduction, and affinity propagation is employed as
vertical compression to select small representative exemplars from large
training data. To compress the larger volume of training data with
scalability, a MapReduce-based parallelization approach is then implemented and
evaluated for each step of the model compression process described above, on
which common but efficient classification methods can be
directly used. Experimental application study on two publicly available
datasets of intrusion detection, KDD99 and CMDC2012, demonstrates that the
classification using the proposed compressed model can effectively speed up the
detection procedure by up to 184 times, most importantly at the cost of a
minimal accuracy difference of less than 1% on average. | computer science |
40,824 | Machine Learning in Wireless Sensor Networks: Algorithms, Strategies,
and Applications | cs.NI | Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges. | computer science |
40,825 | Predicting Online Video Engagement Using Clickstreams | cs.LG | In the nascent days of e-content delivery, having a superior product was
enough to give companies an edge against the competition. With today's fiercely
competitive market, one needs to be multiple steps ahead, especially when it
comes to understanding consumers. Focusing on a large set of web portals owned
and managed by a private communications company, we propose methods by which
these sites' clickstream data can be used to provide a deep understanding of
their visitors, as well as their interests and preferences. We further expand
the use of this data to show that it can be effectively used to predict user
engagement to video streams. | computer science |
40,826 | Fast Distributed Coordinate Descent for Non-Strongly Convex Losses | math.OC | We propose an efficient distributed randomized coordinate descent method for
minimizing regularized non-strongly convex loss functions. The method attains
the optimal $O(1/k^2)$ convergence rate, where $k$ is the iteration counter.
The core of the work is the theoretical study of stepsize parameters. We have
implemented the method on Archer - the largest supercomputer in the UK - and
show that the method is capable of solving a (synthetic) LASSO optimization
problem with 50 billion variables. | computer science |
40,827 | Node Classification in Uncertain Graphs | cs.DB | In many real applications that use and analyze networked data, the links in
the network graph may be erroneous, or derived from probabilistic techniques.
In such cases, the node classification problem can be challenging, since the
unreliability of the links may affect the final results of the classification
process. If the information about link reliability is not used explicitly, the
classification accuracy in the underlying network may be affected adversely. In
this paper, we focus on situations that require the analysis of the uncertainty
that is present in the graph structure. We study the novel problem of node
classification in uncertain graphs, by treating uncertainty as a first-class
citizen. We propose two techniques based on a Bayes model and automatic
parameter selection, and show that the incorporation of uncertainty in the
classification process as a first-class citizen is beneficial. We
experimentally evaluate the proposed approach using different real data sets,
and study the behavior of the algorithms under different conditions. The
results demonstrate the effectiveness and efficiency of our approach. | computer science |
40,828 | Coupled Item-based Matrix Factorization | cs.LG | The essence of the challenges cold start and sparsity in Recommender Systems
(RS) is that the extant techniques, such as Collaborative Filtering (CF) and
Matrix Factorization (MF), mainly rely on the user-item rating matrix, which
sometimes is not informative enough for predicting recommendations. To solve
these challenges, the objective item attributes are incorporated as
complementary information. However, most of the existing methods for inferring
the relationships between items assume that the attributes are "independently
and identically distributed (iid)", which does not always hold in reality. In
fact, the attributes are more or less coupled with each other by some implicit
relationships. Therefore, in this paper we propose an attribute-based coupled
similarity measure to capture the implicit relationships between items. We then
integrate the implicit item coupling into MF to form the Coupled Item-based
Matrix Factorization (CIMF) model. Experimental results on two open data sets
demonstrate that CIMF outperforms the benchmark methods. | computer science |
40,829 | Automatic large-scale classification of bird sounds is strongly improved
by unsupervised feature learning | cs.SD | Automatic species classification of birds from their sound is a computational
tool of increasing importance in ecology, conservation monitoring and vocal
communication studies. To make classification useful in practice, it is crucial
to improve its accuracy while ensuring that it can run at big data scales. Many
approaches use acoustic measures based on spectrogram-type data, such as the
Mel-frequency cepstral coefficient (MFCC) features which represent a
manually-designed summary of spectral information. However, recent work in
machine learning has demonstrated that features learnt automatically from data
can often outperform manually-designed feature transforms. Feature learning can
be performed at large scale and "unsupervised", meaning it requires no manual
data labelling, yet it can improve performance on "supervised" tasks such as
classification. In this work we introduce a technique for feature learning from
large volumes of bird sound recordings, inspired by techniques that have proven
useful in other domains. We experimentally compare twelve different feature
representations derived from the Mel spectrum (of which six use this
technique), using four large and diverse databases of bird vocalisations, with
a random forest classifier. We demonstrate that MFCCs are of limited power in
this context, leading to worse performance than the raw Mel spectral data.
Conversely, we demonstrate that unsupervised feature learning provides a
substantial boost over MFCCs and Mel spectra without adding computational
complexity after the model has been trained. The boost is particularly notable
for single-label classification tasks at large scale. The spectro-temporal
activations learned through our procedure resemble spectro-temporal receptive
fields calculated from avian primary auditory forebrain. | computer science |
40,830 | Learning Nash Equilibria in Congestion Games | cs.LG | We study the repeated congestion game, in which multiple populations of
players share resources, and make, at each iteration, a decentralized decision
on which resources to utilize. We investigate the following question: given a
model of how individual players update their strategies, does the resulting
dynamics of strategy profiles converge to the set of Nash equilibria of the
one-shot game? We consider in particular a model in which players update their
strategies using algorithms with sublinear discounted regret. We show that the
resulting sequence of strategy profiles converges to the set of Nash equilibria
in the sense of Ces\`aro means. However, strong convergence is not guaranteed
in general. We show that strong convergence can be guaranteed for a class of
algorithms with a vanishing upper bound on discounted regret, and which satisfy
an additional condition. We call such algorithms AREP algorithms, for
Approximate REPlicator, as they can be interpreted as a discrete-time
approximation of the replicator equation, which models the continuous-time
evolution of population strategies, and which is known to converge for the
class of congestion games. In particular, we show that the discounted Hedge
algorithm belongs to the AREP class, which guarantees its strong convergence. | computer science |
40,831 | A RobustICA Based Algorithm for Blind Separation of Convolutive Mixtures | cs.LG | We propose a frequency domain method based on robust independent component
analysis (RICA) to address the multichannel Blind Source Separation (BSS)
problem of convolutive speech mixtures in highly reverberant environments. We
impose regularization processes to tackle the ill-conditioning problem of the
covariance matrix and to mitigate the performance degradation in the frequency
domain. We apply an algorithm to separate the source signals in adverse
conditions, i.e. high reverberation conditions when short observation signals
are available. Furthermore, we study the impact of several parameters on the
performance of separation, e.g. overlapping ratio and window type of the
frequency domain method. We also compare different techniques to solve the
frequency-domain permutation ambiguity. Through simulations and real world
experiments, we verify the superiority of the presented convolutive algorithm
among other BSS algorithms, including recursive regularized ICA (RR ICA) and
independent vector analysis (IVA). | computer science |
40,832 | GraphLab: A New Framework For Parallel Machine Learning | cs.LG | Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large scale real-world problems. | computer science |
40,833 | R-UCB: a Contextual Bandit Algorithm for Risk-Aware Recommender Systems | cs.IR | Mobile Context-Aware Recommender Systems can be naturally modelled as an
exploration/exploitation trade-off (exr/exp) problem, where the system has to
choose between maximizing its expected rewards dealing with its current
knowledge (exploitation) and learning more about the unknown user's preferences
to improve its knowledge (exploration). This problem has been addressed by the
reinforcement learning community, but existing approaches do not consider the risk level of the
current user's situation, where it may be dangerous to recommend items the user
may not desire in her current situation if the risk level is high. We introduce
in this paper an algorithm named R-UCB that considers the risk level of the
user's situation to adaptively balance between exr and exp. The detailed
analysis of the experimental results reveals several important discoveries in
the exr/exp behaviour. | computer science |
40,834 | Likely to stop? Predicting Stopout in Massive Open Online Courses | cs.CY | Understanding why students stop out will help in understanding how students
learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we
build accurate predictive models of MOOC student stopout. We document a
scalable, stopout prediction methodology, end to end, from raw source data to
model analysis. We attempted to predict stopout for the Fall 2012 offering of
6.002x. This involved the meticulous and crowd-sourced engineering of over 25
predictive features extracted for thousands of students, the creation of
temporal and non-temporal data representations for use in predictive modeling,
the derivation of over 10 thousand models with a variety of state-of-the-art
machine learning techniques and the analysis of feature importance by examining
over 70,000 models. We found that stopout prediction is a tractable problem.
Our models achieved an AUC (receiver operating characteristic
area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one
week in advance. Even with more difficult prediction problems, such as
predicting stopout at the end of the course with only one week's data, the
models attained AUCs of 0.7. | computer science |
40,835 | Inverse Reinforcement Learning with Multi-Relational Chains for
Robot-Centered Smart Home | cs.RO | In a robot-centered smart home, the robot observes the home states with its
own sensors, and then it can change certain object states according to an
operator's commands for remote operations, or imitate the operator's behaviors
in the house for autonomous operations. To model the robot's imitation of the
operator's behaviors in a dynamic indoor environment, we use multi-relational
chains to describe the changes of environment states, and apply inverse
reinforcement learning to encoding the operator's behaviors with a learned
reward function. We implement this approach with a mobile robot and conduct
five experiments with increasing training days, object numbers, and action
types. In addition, a baseline method that directly records the operator's
behaviors is implemented, and a comparison is made on the accuracy of home
state evaluation and the accuracy of robot action selection. The results show
that the proposed approach handles dynamic environment well, and guides the
robot's actions in the house more accurately. | computer science |
40,836 | Down-Sampling coupled to Elastic Kernel Machines for Efficient
Recognition of Isolated Gestures | cs.LG | In the field of gestural action recognition, many studies have focused on
dimensionality reduction along the spatial axis, to reduce both the variability
of gestural sequences expressed in the reduced space, and the computational
complexity of their processing. It is noticeable that very few of these methods
have explicitly addressed the dimensionality reduction along the time axis.
This is however a major issue with regard to the use of elastic distances
characterized by a quadratic complexity. To partially fill this apparent gap,
we present in this paper an approach based on temporal down-sampling combined
with elastic kernel machine learning. We experimentally show, on two data sets
that are widely referenced in the domain of human gesture recognition, and very
different in terms of quality of motion capture, that it is possible to
significantly reduce the number of skeleton frames while maintaining a good
recognition rate. The method proves to give satisfactory results at a level
currently reached by state-of-the-art methods on these data sets. The
computational complexity reduction makes this approach eligible for real-time
applications. | computer science |
40,837 | Computing Multi-Relational Sufficient Statistics for Large Databases | cs.LG | Databases contain information about which relationships do and do not hold
among entities. To make this information accessible for statistical analysis
requires computing sufficient statistics that combine information from
different database tables. Such statistics may involve any number of {\em
positive and negative} relationships. With a naive enumeration approach,
computing sufficient statistics for negative relationships is feasible only for
small databases. We solve this problem with a new dynamic programming algorithm
that performs a virtual join, where the requisite counts are computed without
materializing join tables. Contingency table algebra is a new extension of
relational algebra that facilitates the efficient implementation of this
M\"obius virtual join operation. The M\"obius Join scales to large datasets
(over 1M tuples) with complex schemas. Empirical evaluation with seven
benchmark datasets showed that information about the presence and absence of
links can be exploited in feature selection, association rule mining, and
Bayesian network learning. | computer science |
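The Möbius (inclusion-exclusion) step that makes negative-relationship counts cheap can be illustrated for two relationships; the function name and the counts are hypothetical:

```python
def negative_counts(n_total, n_r1, n_r2, n_r1_r2):
    """Inclusion-exclusion (the Mobius step): sufficient statistics that
    involve *absent* relationships follow from counts of present ones,
    so tables of non-linked entity pairs are never materialized."""
    return {
        ("+", "+"): n_r1_r2,
        ("+", "-"): n_r1 - n_r1_r2,
        ("-", "+"): n_r2 - n_r1_r2,
        ("-", "-"): n_total - n_r1 - n_r2 + n_r1_r2,
    }

# hypothetical counts: 1000 tuples, 300 with R1, 200 with R2, 80 with both
table = negative_counts(1000, 300, 200, 80)
```

The four cells form a complete contingency table over presence/absence of both relationships, computed from positive-relationship counts alone.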
40,838 | Recursive Total Least-Squares Algorithm Based on Inverse Power Method
and Dichotomous Coordinate-Descent Iterations | cs.SY | We develop a recursive total least-squares (RTLS) algorithm for
errors-in-variables system identification utilizing the inverse power method
and the dichotomous coordinate-descent (DCD) iterations. The proposed
algorithm, called DCD-RTLS, outperforms the previously-proposed RTLS
algorithms, which are based on the line-search method, with reduced
computational complexity. We perform a comprehensive analysis of the DCD-RTLS
algorithm and show that it is asymptotically unbiased as well as being stable
in the mean. We also find a lower bound for the forgetting factor that ensures
mean-square stability of the algorithm and calculate the theoretical
steady-state mean-square deviation (MSD). We verify the effectiveness of the
proposed algorithm and the accuracy of the predicted steady-state MSD via
simulations. | computer science |
40,839 | A Random Matrix Theoretical Approach to Early Event Detection in Smart
Grid | stat.ME | Power systems are developing very fast nowadays, both in size and in
complexity; this situation is a challenge for Early Event Detection (EED). This
paper proposes a data-driven unsupervised learning method to handle this
challenge. Specifically, the random matrix theories (RMTs) are introduced as
the statistical foundations for random matrix models (RMMs); based on the RMMs,
linear eigenvalue statistics (LESs) are defined via the test functions as the
system indicators. By comparing the values of the LES between the experimental
and the theoretical ones, the anomaly detection is conducted. Furthermore, we
develop a 3D power-map to visualize the LES; it provides a robust auxiliary
decision-making mechanism to the operators. In this sense, the proposed method
conducts EED with a pure statistical procedure, requiring no knowledge of
system topologies, unit operation/control models, etc. The LES, as a key
ingredient during this procedure, is a high dimensional indicator derived
directly from raw data. As an unsupervised learning indicator, the LES is much
more sensitive than the low dimensional indicators obtained from supervised
learning. With the statistical procedure, the proposed method is universal and
fast; moreover, it is robust against traditional EED challenges (such as error
accumulations, spurious correlations, and even bad data in core area). Case
studies, with both simulated data and real ones, validate the proposed method.
To manage large-scale distributed systems, data fusion is mentioned as another
data processing ingredient. | computer science |
40,840 | Twitter Hash Tag Recommendation | cs.IR | The rise in popularity of microblogging services like Twitter has led to
increased use of content annotation strategies like the hashtag. Hashtags
provide users with a tagging mechanism to help organize, group, and create
visibility for their posts. This is a simple idea but can be challenging for
the user in practice, which leads to infrequent usage. In this paper, we will
investigate various methods of recommending hashtags as new posts are created
to encourage more widespread adoption and usage. Hashtag recommendation comes
with numerous challenges including processing huge volumes of streaming data
and content which is small and noisy. We will investigate preprocessing methods
to reduce noise in the data and determine an effective method of hashtag
recommendation based on the popular classification algorithms. | computer science |
40,841 | Personalized Web Search | cs.IR | Personalization is important for search engines to improve user experience.
Most of the existing work does pure feature engineering, extracting a lot of
session-style features and then training a ranking model. Here we propose a novel
way to model both long term and short term user behavior using Multi-armed
bandit algorithm. Our algorithm can generalize session information across users
well, and as an explore-exploit style algorithm, it can generalize to new URLs
and new users well. Experiments show that our algorithm can improve performance
over the default ranking and outperforms several popular Multi-armed bandit
algorithms. | computer science |
40,842 | A Simple Expression for Mill's Ratio of the Student's $t$-Distribution | cs.LG | I show a simple expression for Mill's ratio of the Student's
$t$-distribution. I use it to prove Conjecture 1 in P. Auer, N. Cesa-Bianchi, and
P. Fischer. Finite-time analysis of the multiarmed bandit problem. Mach.
Learn., 47(2-3):235--256, May 2002. | computer science |
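Mill's ratio of a distribution is the tail probability divided by the density, R(x) = (1 - F(x)) / f(x). A rough numeric sketch for the Student's t follows (this is not the paper's closed-form expression; the quadrature bound and step count are ad hoc assumptions):

```python
import math

def t_pdf(x, nu):
    # Density of Student's t with nu degrees of freedom.
    c = math.exp(math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)) / math.sqrt(nu * math.pi)
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

def t_tail(x, nu, upper=200.0, n=20000):
    # Upper tail P(T > x) via composite Simpson's rule on [x, upper];
    # the mass beyond `upper` is negligible for nu >= 2 or so.
    h = (upper - x) / n
    s = t_pdf(x, nu) + t_pdf(upper, nu)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * t_pdf(x + i * h, nu)
    return s * h / 3

def mills_ratio(x, nu):
    return t_tail(x, nu) / t_pdf(x, nu)

r = mills_ratio(1.0, 3)   # roughly 0.95 for t with 3 degrees of freedom
```

A closed-form expression, as in the paper, replaces the numerical tail integral entirely.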
40,843 | Arrhythmia Detection using Mutual Information-Based Integration Method | cs.CE | The aim of this paper is to propose an application of mutual
information-based ensemble methods to the analysis and classification of heart
beats associated with different types of Arrhythmia. Models of multilayer
perceptrons, support vector machines, and radial basis function neural networks
were trained and tested using the MIT-BIH arrhythmia database. This research
brings a focus to an ensemble method that, to our knowledge, is a novel
application in the area of ECG Arrhythmia detection. The proposed classifier
ensemble method showed improved performance, relative to either majority voting
classifier integration or to individual classifier performance. The overall
ensemble accuracy was 98.25%. | computer science |
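Majority-voting integration, the baseline the proposed ensemble improves on, can be sketched as follows; the classifier outputs and beat labels are made up for illustration:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions: predictions[k][i] is classifier k's
    label for beat i; each beat gets the most common label among the voters."""
    n = len(predictions[0])
    fused = []
    for i in range(n):
        votes = [p[i] for p in predictions]
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# hypothetical outputs of the three model families on four heart beats
mlp = ["N", "V", "N", "A"]
svm = ["N", "V", "V", "A"]
rbf = ["N", "N", "N", "A"]
combined = majority_vote([mlp, svm, rbf])
```

The mutual information-based method in the abstract would weight the members instead of counting votes uniformly.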
40,844 | Equilibrated adaptive learning rates for non-convex optimization | cs.LG | Parameter-specific adaptive learning rate methods are computationally
efficient ways to reduce the ill-conditioning problems encountered when
training large deep networks. Following recent work that strongly suggests that
most of the critical points encountered when training such networks are saddle
points, we find how considering the presence of negative eigenvalues of the
Hessian could help us design better suited adaptive learning rate schemes. We
show that the popular Jacobi preconditioner has undesirable behavior in the
presence of both positive and negative curvature, and present theoretical and
empirical evidence that the so-called equilibration preconditioner is
comparatively better suited to non-convex problems. We introduce a novel
adaptive learning rate scheme, called ESGD, based on the equilibration
preconditioner. Our experiments show that ESGD performs as well or better than
RMSProp in terms of convergence speed, always clearly improving over plain
stochastic gradient descent. | computer science |
40,845 | On Sex, Evolution, and the Multiplicative Weights Update Algorithm | cs.LG | We consider a recent innovative theory by Chastain et al. on the role of sex
in evolution [PNAS'14]. In short, the theory suggests that the evolutionary
process of gene recombination implements the celebrated multiplicative weights
updates algorithm (MWUA). They prove that the population dynamics induced by
sexual reproduction can be precisely modeled by genes that use MWUA as their
learning strategy in a particular coordination game. The result holds in the
environments of \emph{weak selection}, under the assumption that the population
frequencies remain a product distribution.
We revisit the theory, eliminating both the requirement of weak selection and
any assumption on the distribution of the population. Removing the assumption
of product distributions is crucial, since as we show, this assumption is
inconsistent with the population dynamics. We show that the marginal allele
distributions induced by the population dynamics precisely match the marginals
induced by a multiplicative weights update algorithm in this general setting,
thereby affirming and substantially generalizing these earlier results.
We further revise the implications for convergence and utility or fitness
guarantees in coordination games. In contrast to the claim of Chastain et
al.[PNAS'14], we conclude that the sexual evolutionary dynamics does not entail
any property of the population distribution, beyond those already implied by
convergence. | computer science |
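The multiplicative weights update at the core of this theory can be sketched directly; the step size and the constant payoffs below are arbitrary illustrative choices:

```python
def mwua_step(weights, payoffs, eps=0.1):
    """One multiplicative weights update: entry i is scaled by
    (1 + eps) ** payoff_i, then the vector is renormalized to a distribution."""
    new = [w * (1 + eps) ** p for w, p in zip(weights, payoffs)]
    total = sum(new)
    return [w / total for w in new]

# two competing alleles with constant (hypothetical) payoffs
freqs = [0.5, 0.5]
for _ in range(50):
    freqs = mwua_step(freqs, [1.0, 0.5])
```

The higher-payoff allele's frequency grows toward 1 while the frequencies always remain a probability distribution, mirroring the marginal allele dynamics discussed in the abstract.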
40,846 | Dengue disease prediction using weka data mining tool | cs.CY | Dengue is a life-threatening disease prevalent in several developed as well
as developing countries like India. In this paper we discuss various data
mining approaches that have been utilized for dengue disease
prediction. Data mining is a well-known technique used by health organizations
for classification of diseases such as dengue, diabetes and cancer in
bioinformatics research. In the proposed approach we have used WEKA with 10
cross validation to evaluate data and compare results. Weka has an extensive
collection of different machine learning and data mining algorithms. In this
paper we have firstly classified the dengue data set and then compared the
different data mining techniques in weka through Explorer, knowledge flow and
Experimenter interfaces. Furthermore, in order to validate our approach, we
used a dengue dataset with 108 instances, of which Weka used 99 rows and 18
attributes, to predict the disease and to compare the accuracy of the
different classification algorithms and find the best performer. The
main objective of this paper is to classify the data, assist users in
extracting useful information from it, and easily identify a suitable
algorithm for building an accurate predictive model. From the findings of this
paper it can be concluded that Na\"ive Bayes and J48 are the best-performing
algorithms in terms of classification accuracy, as they achieved maximum
accuracy = 100% with 99 correctly classified instances, maximum ROC = 1, and
the least mean absolute error, while taking minimum time to build the model
through the Explorer and Knowledge Flow interfaces. | computer science |
40,847 | Optimizing Text Quantifiers for Multivariate Loss Functions | cs.LG | We address the problem of \emph{quantification}, a supervised learning task
whose goal is, given a class, to estimate the relative frequency (or
\emph{prevalence}) of the class in a dataset of unlabelled items.
Quantification has several applications in data and text mining, such as
estimating the prevalence of positive reviews in a set of reviews of a given
product, or estimating the prevalence of a given support issue in a dataset of
transcripts of phone calls to tech support. So far, quantification has been
addressed by learning a general-purpose classifier, counting the unlabelled
items which have been assigned the class, and tuning the obtained counts
according to some heuristics. In this paper we depart from the tradition of
using general-purpose classifiers, and use instead a supervised learning model
for \emph{structured prediction}, capable of generating classifiers directly
optimized for the (multivariate and non-linear) function used for evaluating
quantification accuracy. The experiments that we have run on 5500 binary
high-dimensional datasets (averaging more than 14,000 documents each) show that
this method is more accurate, more stable, and more efficient than existing,
state-of-the-art quantification methods. | computer science |
40,848 | NeuroSVM: A Graphical User Interface for Identification of Liver
Patients | cs.LG | Diagnosis of liver infection at a preliminary stage is important for better
treatment. In today's scenario, devices like sensors are used for detection of
infections. Accurate classification techniques are required for automatic
identification of disease samples. In this context, this study utilizes data
mining approaches for classification of liver patients from healthy
individuals. Four algorithms (Naive Bayes, Bagging, Random forest and SVM) were
implemented for classification using R platform. Further to improve the
accuracy of classification a hybrid NeuroSVM model was developed using SVM and
feed-forward artificial neural network (ANN). The hybrid model was tested for
its performance using statistical parameters like root mean square error (RMSE)
and mean absolute percentage error (MAPE). The model resulted in a prediction
accuracy of 98.83%. The results suggested that development of hybrid model
improved the accuracy of prediction. To serve the medical community in
predicting liver disease among patients, a graphical user interface (GUI)
has been developed using R. The GUI is deployed as a package in local
repository of R platform for users to perform prediction. | computer science |
40,849 | Adaptive system optimization using random directions stochastic
approximation | math.OC | We present novel algorithms for simulation optimization using random
directions stochastic approximation (RDSA). These include first-order
(gradient) as well as second-order (Newton) schemes. We incorporate both
continuous-valued as well as discrete-valued perturbations into both our
algorithms. The former are chosen to be independent and identically distributed
(i.i.d.) symmetric, uniformly distributed random variables (r.v.), while the
latter are i.i.d., asymmetric, Bernoulli r.v.s. Our Newton algorithm, with a
novel Hessian estimation scheme, requires N-dimensional perturbations and three
loss measurements per iteration, whereas the simultaneous perturbation Newton
search algorithm of [1] requires 2N-dimensional perturbations and four loss
measurements per iteration. We prove the unbiasedness of both gradient and
Hessian estimates and asymptotic (strong) convergence for both first-order and
second-order schemes. We also provide asymptotic normality results, which in
particular establish that the asymmetric Bernoulli variant of Newton RDSA
method is better than 2SPSA of [1]. Numerical experiments are used to validate
the theoretical results. | computer science |
40,850 | Scale-Free Algorithms for Online Linear Optimization | cs.LG | We design algorithms for online linear optimization that have optimal regret
and at the same time do not need to know any upper or lower bounds on the norm
of the loss vectors. We achieve adaptiveness to norms of loss vectors by scale
invariance, i.e., our algorithms make exactly the same decisions if the
sequence of loss vectors is multiplied by any positive constant. Our algorithms
work for any decision set, bounded or unbounded. For unbounded decisions sets,
these are the first truly adaptive algorithms for online linear optimization. | computer science |
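Scale invariance means the algorithm's decisions are unchanged when every loss vector is multiplied by a positive constant. A toy one-dimensional rule with this property (this is not the paper's algorithm, just a demonstration of the invariance itself):

```python
def play(losses):
    """Decisions of a toy scale-invariant rule on the interval [-1, 1]:
    move against the sign of the running sum of past loss gradients.
    Signs are unchanged when every loss is scaled by a positive constant."""
    decisions, s = [], 0.0
    for g in losses:
        decisions.append(0.0 if s == 0 else -s / abs(s))
        s += g
    return decisions

losses = [0.3, -1.2, 0.7, 0.7, -0.1]
scaled = [10.0 * g for g in losses]
```

Running `play` on `losses` and on `scaled` produces identical decision sequences, which is exactly the invariance the abstract requires of its algorithms.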
40,852 | A provably convergent alternating minimization method for mean field
inference | cs.LG | Mean-Field is an efficient way to approximate a posterior distribution in
complex graphical models and constitutes the most popular class of Bayesian
variational approximation methods. In most applications, the mean field
distribution parameters are computed using an alternating coordinate
minimization. However, the convergence properties of this algorithm remain
unclear. In this paper, we show how, by adding an appropriate penalization
term, we can guarantee convergence to a critical point, while keeping a
closed-form update at each step. A convergence rate estimate can also be derived based
on recent results in non-convex optimization. | computer science |
40,853 | A Data Mining framework to model Consumer Indebtedness with
Psychological Factors | cs.LG | Modelling Consumer Indebtedness has proven to be a problem of complex nature.
In this work we utilise Data Mining techniques and methods to explore the
multifaceted aspect of Consumer Indebtedness by examining the contribution of
Psychological Factors, like Impulsivity, to the analysis of Consumer Debt. Our
results confirm the beneficial impact of Psychological Factors in modelling
Consumer Indebtedness and suggest a new approach in analysing Consumer Debt,
that would take into consideration more Psychological characteristics of
consumers and adopt techniques and practices from Data Mining. | computer science |
40,854 | The fundamental nature of the log loss function | cs.LG | The standard loss functions used in the literature on probabilistic
prediction are the log loss function, the Brier loss function, and the
spherical loss function; however, any computable proper loss function can be
used for comparison of prediction algorithms. This note shows that the log loss
function is most selective in that any prediction algorithm that is optimal for
a given data sequence (in the sense of the algorithmic theory of randomness)
under the log loss function will be optimal under any computable proper mixable
loss function; on the other hand, there is a data sequence and a prediction
algorithm that is optimal for that sequence under either of the two other
standard loss functions but not under the log loss function. | computer science |
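The three standard loss functions for a two-class probabilistic prediction can be written down directly; the two-class forms below are one common convention (Brier summed over both class probabilities, spherical score negated so that all three are losses):

```python
import math

def log_loss(p, y):
    """p is the predicted probability of class 1; y is the outcome in {0, 1}."""
    return -math.log(p if y == 1 else 1.0 - p)

def brier_loss(p, y):
    # Two-class Brier loss, summed over both class probabilities.
    return (y - p) ** 2 + ((1 - y) - (1 - p)) ** 2

def spherical_loss(p, y):
    # Negated spherical score, so smaller is better like the other two.
    norm = math.sqrt(p * p + (1 - p) ** 2)
    return -(p if y == 1 else 1.0 - p) / norm
```

Only the log loss is unbounded: as p approaches 0 with y = 1 it diverges, while the Brier loss stays below 2, one concrete sense in which the log loss penalizes confident mistakes more selectively.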
40,855 | Bandit Convex Optimization: sqrt{T} Regret in One Dimension | cs.LG | We analyze the minimax regret of the adversarial bandit convex optimization
problem. Focusing on the one-dimensional case, we prove that the minimax regret
is $\widetilde\Theta(\sqrt{T})$ and partially resolve a decade-old open
problem. Our analysis is non-constructive, as we do not present a concrete
algorithm that attains this regret rate. Instead, we use minimax duality to
reduce the problem to a Bayesian setting, where the convex loss functions are
drawn from a worst-case distribution, and then we solve the Bayesian version of
the problem with a variant of Thompson Sampling. Our analysis features a novel
use of convexity, formalized as a "local-to-global" property of convex
functions, that may be of independent interest. | computer science |
40,856 | Efficient Geometric-based Computation of the String Subsequence Kernel | cs.LG | Kernel methods are powerful tools in machine learning. They have to be
computationally efficient. In this paper, we present a novel Geometric-based
approach to compute efficiently the string subsequence kernel (SSK). Our main
idea is that the SSK computation reduces to range query problem. We started by
the construction of a match list $L(s,t)=\{(i,j):s_{i}=t_{j}\}$ where $s$ and
$t$ are the strings to be compared; such match list contains only the required
data that contribute to the result. To compute efficiently the SSK, we extended
the layered range tree data structure to a layered range sum tree, a
range-aggregation data structure. The whole process takes $O(p|L|\log|L|)$
time and $O(|L|\log|L|)$ space, where $|L|$ is the size of the match list and
$p$ is the length of the SSK. We present empirical evaluations of our approach
against the dynamic and the sparse programming approaches both on synthetically
generated data and on newswire article data. These experiments show the
efficiency of our approach for large alphabet sizes, except for very short
strings. Moreover, compared to the sparse dynamic approach, the proposed
approach clearly outperforms it for long strings. | computer science |
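The match list from the abstract is straightforward to construct; only these |L| index pairs can contribute to the SSK, which is what the range-tree machinery then exploits. A direct sketch (the example strings are arbitrary):

```python
def match_list(s, t):
    """L(s, t) = {(i, j) : s[i] == t[j]}, the only index pairs that can
    contribute to the string subsequence kernel."""
    return [(i, j)
            for i, ci in enumerate(s)
            for j, cj in enumerate(t)
            if ci == cj]

L = match_list("gatta", "cata")   # 6 matching pairs out of 20 possible
```

For strings with few common characters, |L| is far below |s| * |t|, which is where the O(p|L| log |L|) bound pays off.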
40,857 | An Online Convex Optimization Approach to Blackwell's Approachability | cs.GT | The notion of approachability in repeated games with vector payoffs was
introduced by Blackwell in the 1950s, along with geometric conditions for
approachability and corresponding strategies that rely on computing {\em
steering directions} as projections from the current average payoff vector to
the (convex) target set. Recently, Abernethy, Bartlett and Hazan (2011) proposed
a class of approachability algorithms that rely on the no-regret properties of
Online Linear Programming for computing a suitable sequence of steering
directions. This is first carried out for target sets that are convex cones,
and then generalized to any convex set by embedding it in a higher-dimensional
convex cone. In this paper we present a more direct formulation that relies on
the support function of the set, along with suitable Online Convex Optimization
algorithms, which leads to a general class of approachability algorithms. We
further show that Blackwell's original algorithm and its convergence follow as
a special case. | computer science |
40,858 | Personalising Mobile Advertising Based on Users Installed Apps | cs.CY | Mobile advertising is a billion pound industry that is rapidly expanding. The
success of an advert is measured based on how users interact with it. In this
paper we investigate whether the application of unsupervised learning and
association rule mining could be used to enable personalised targeting of
mobile adverts with the aim of increasing the interaction rate. Over May and
June 2014 we recorded advert interactions such as tapping the advert or
watching the whole advert video along with the set of apps a user has installed
at the time of the interaction. Based on the apps that the users have installed
we applied k-means clustering to profile the users into one of ten classes. Due
to the large number of apps considered, we implemented dimension reduction to
reduce the app feature space by mapping the apps to their iTunes category and
clustered users based on the percentage of their apps that correspond to each
iTunes app category. The clustering was externally validated by investigating
differences between the way the ten profiles interact with the various adverts
genres (lifestyle, finance and entertainment adverts). In addition association
rule mining was performed to find whether the time of the day that the advert
is served and the number of apps a user has installed makes certain profiles
more likely to interact with the advert genres. The results showed there were
clear differences in the way the profiles interact with the different advert
genres and the results of this paper suggest that mobile advert targeting would
improve the frequency that users interact with an advert. | computer science |
40,859 | Normalization based K means Clustering Algorithm | cs.LG | K-means is an effective clustering technique used to separate similar data
into groups based on initial centroids of clusters. In this paper,
a Normalization-based K-means clustering algorithm (N-K means) is proposed.
The proposed N-K means algorithm applies normalization to the available data
prior to clustering, and also calculates initial centroids based on weights.
Experimental results demonstrate the improvement of the proposed N-K means
clustering algorithm over the existing K-means algorithm in terms of
complexity and overall performance. | computer science |
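The normalization step applied prior to clustering can be sketched with min-max scaling, one common choice (the abstract does not specify which normalization is used, so this is an assumption, and the data values are made up):

```python
def min_max_normalize(data):
    """Scale each feature to [0, 1] so that no single attribute dominates
    the Euclidean distances used by k-means."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in data
    ]

raw = [[170, 65000], [180, 72000], [160, 58000]]   # wildly different scales
norm = min_max_normalize(raw)
```

After scaling, both features contribute comparably to the distance computations, rather than the large-valued feature swamping the other.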
40,860 | Scalable Iterative Algorithm for Robust Subspace Clustering | cs.DS | Subspace clustering (SC) is a popular method for dimensionality reduction of
high-dimensional data, where it generalizes Principal Component Analysis (PCA).
Recently, several methods have been proposed to enhance the robustness of PCA
and SC, while most of them are computationally very expensive, in particular,
for high dimensional large-scale data. In this paper, we develop much faster
iterative algorithms for SC, incorporating robustness using a {\em non-squared}
$\ell_2$-norm objective. The known implementations for optimizing the objective
would be costly due to the alternating optimization of two separate objectives:
optimal cluster-membership assignment and robust subspace selection, while
substituting one process with a faster surrogate can cause failure to
converge. To address the issue, we use a simplified procedure requiring only
efficient matrix-vector multiplications for the subspace update instead of
solving an expensive eigenvector problem at each iteration, in addition to
releasing nested robust PCA loops. We prove that the proposed algorithm monotonically
converges to a local minimum with approximation guarantees, e.g., it achieves
2-approximation for the robust PCA objective. In our experiments, the proposed
algorithm is shown to converge an order of magnitude faster than known
algorithms optimizing the same objective, and outperforms prior subspace
clustering methods in accuracy and running time on the MNIST dataset. | computer science |
40,861 | Financial Market Prediction | cs.CE | Given financial data from popular sites like Yahoo and the London Exchange,
the presented paper attempts to model and predict stocks that can be considered
"good investments". Stocks are characterized by 125 features ranging from gross
domestic product to EBITDA, and are labeled by discrepancies between stock and
market price returns. An artificial neural network (Self-Organizing Map) is
fitted to train on more than a million data points to predict "good
investments" given testing stocks from 2013 and after. | computer science |
40,862 | Scalable Nuclear-norm Minimization by Subspace Pursuit Proximal
Riemannian Gradient | cs.LG | Nuclear-norm regularization plays a vital role in many learning tasks, such
as low-rank matrix recovery (MR), and low-rank representation (LRR). Solving
this problem directly can be computationally expensive due to the unknown rank
of variables or large-rank singular value decompositions (SVDs). To address
this, we propose a proximal Riemannian gradient (PRG) scheme which can
efficiently solve trace-norm regularized problems defined on the real-algebraic
variety $\mathcal{M}_{\le r}$ of real matrices of rank at most $r$. Based on PRG, we further
present a simple and novel subspace pursuit (SP) paradigm for general
trace-norm regularized problems without the explicit rank constraint $\mathcal{M}_{\le r}$.
The proposed paradigm is very scalable by avoiding large-rank SVDs. Empirical
studies on several tasks, such as matrix completion and LRR based subspace
clustering, demonstrate the superiority of the proposed paradigms over existing
methods. | computer science |
40,863 | Efficient Learning of Linear Separators under Bounded Noise | cs.LG | We study the learnability of linear separators in $\Re^d$ in the presence of
bounded (a.k.a Massart) noise. This is a realistic generalization of the random
classification noise model, where the adversary can flip each example $x$ with
probability $\eta(x) \leq \eta$. We provide the first polynomial time algorithm
that can learn linear separators to arbitrarily small excess error in this
noise model under the uniform distribution over the unit ball in $\Re^d$, for
some constant value of $\eta$. While widely studied in the statistical learning
theory community in the context of getting faster convergence rates,
computationally efficient algorithms in this model had remained elusive. Our
work provides the first evidence that one can indeed design algorithms
achieving arbitrarily small excess error in polynomial time under this
realistic noise model and thus opens up a new and exciting line of research.
We additionally provide lower bounds showing that popular algorithms such as
hinge loss minimization and averaging cannot lead to arbitrarily small excess
error under Massart noise, even under the uniform distribution. Our work
instead, makes use of a margin based technique developed in the context of
active learning. As a result, our algorithm is also an active learning
algorithm with label complexity that is only logarithmic in the desired excess
error $\epsilon$. | computer science |
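Bounded (Massart) noise lets the adversary flip each label with an x-dependent probability capped at $\eta$. A sketch of sampling from such a model (the separator, dimension, and per-point flip probabilities are illustrative assumptions):

```python
import random

def massart_sample(n, eta_max=0.2, d=3, seed=0):
    """Draw Gaussian points, label them by a fixed linear separator w, then
    flip each label with an x-dependent probability eta(x) <= eta_max."""
    rng = random.Random(seed)
    w = [1.0] + [0.0] * (d - 1)          # assumed true separator for the sketch
    data = []
    for _ in range(n):
        x = [rng.gauss(0, 1) for _ in range(d)]
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
        eta_x = eta_max * rng.random()   # adversary's flip probability, capped
        if rng.random() < eta_x:
            y = -y
        data.append((x, y))
    return data

sample = massart_sample(1000)
```

Because every flip probability is bounded by eta_max, the clean separator still agrees with the large majority of the sampled labels.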
40,864 | On Graduated Optimization for Stochastic Non-Convex Problems | cs.LG | The graduated optimization approach, also known as the continuation method,
is a popular heuristic to solving non-convex problems that has received renewed
interest over the last decade. Despite its popularity, very little is known in
terms of theoretical convergence analysis. In this paper we describe a new
first-order algorithm based on graduated optimization and analyze its
performance. We characterize a parameterized family of non-convex functions
for which this algorithm provably converges to a global optimum. In particular,
we prove that the algorithm converges to an {\epsilon}-approximate solution
within O(1/\epsilon^2) gradient-based steps. We extend our algorithm and
analysis to the setting of stochastic non-convex optimization with noisy
gradient feedback, attaining the same convergence rate. Additionally, we
discuss the setting of zero-order optimization, and devise a variant of our
algorithm which converges at a rate of O(d^2/\epsilon^4). | computer science |
40,865 | More General Queries and Less Generalization Error in Adaptive Data
Analysis | cs.LG | Adaptivity is an important feature of data analysis---typically the choice of
questions asked about a dataset depends on previous interactions with the same
dataset. However, generalization error is typically bounded in a non-adaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the
formal study of this problem, and gave the first upper and lower bounds on the
achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathcal{P}$ and a
set of $n$ independent samples $x$ is drawn from $\mathcal{P}$. We seek an
algorithm that, given $x$ as input, "accurately" answers a sequence of
adaptively chosen "queries" about the unknown distribution $\mathcal{P}$. How
many samples $n$ must we draw from the distribution, as a function of the type
of queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions towards resolving this question:
*We give upper bounds on the number of samples $n$ that are needed to answer
statistical queries that improve over the bounds of Dwork et al.
*We prove the first upper bounds on the number of samples required to answer
more general families of queries. These include arbitrary low-sensitivity
queries and the important class of convex risk minimization queries.
As in Dwork et al., our algorithms are based on a connection between
differential privacy and generalization error, but we feel that our analysis is
simpler and more modular, which may be useful for studying these questions in
the future. | computer science |
40,866 | Energy Sharing for Multiple Sensor Nodes with Finite Buffers | cs.NI | We consider the problem of finding optimal energy sharing policies that
maximize the network performance of a system comprising of multiple sensor
nodes and a single energy harvesting (EH) source. Sensor nodes periodically
sense the random field and generate data, which is stored in the corresponding
data queues. The EH source harnesses energy from ambient energy sources and the
generated energy is stored in an energy buffer. Sensor nodes receive energy for
data transmission from the EH source. The EH source has to efficiently share
the stored energy among the nodes in order to minimize the long-run average
delay in data transmission. We formulate the problem of energy sharing between
the nodes in the framework of average cost infinite-horizon Markov decision
processes (MDPs). We develop efficient energy sharing algorithms, namely
Q-learning algorithm with exploration mechanisms based on the $\epsilon$-greedy
method as well as upper confidence bound (UCB). We extend these algorithms by
incorporating state and action space aggregation to tackle state-action space
explosion in the MDP. We also develop a cross entropy based method that
incorporates policy parameterization in order to find near optimal energy
sharing policies. Through simulations, we show that our algorithms yield energy
sharing policies that outperform the heuristic greedy method. | computer science |
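The epsilon-greedy exploration used in the Q-learning variant can be sketched in a few lines; the Q-values below are hypothetical:

```python
import random

def epsilon_greedy(q_values, eps=0.1, rng=random):
    """With probability eps pick a random action (explore); otherwise pick
    the action with the largest estimated value (exploit)."""
    if rng.random() < eps:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# with eps = 0 the rule is purely greedy: action 1 has the largest Q-value
best = epsilon_greedy([1.0, 5.0, 2.0], eps=0.0)
```

The UCB alternative mentioned in the abstract replaces the random exploration branch with an optimism bonus added to each Q-value.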
40,867 | Rank Subspace Learning for Compact Hash Codes | cs.LG | The era of Big Data has spawned unprecedented interests in developing hashing
algorithms for efficient storage and fast nearest neighbor search. Most
existing work learns hash functions that are numeric quantizations of feature
values in projected feature space. In this work, we propose a novel hash
learning framework that encodes features' rank orders instead of numeric values
in a number of optimal low-dimensional ranking subspaces. We formulate the
ranking subspace learning problem as the optimization of a piece-wise linear
convex-concave function and present two versions of our algorithm: one with
independent optimization of each hash bit and the other exploiting a sequential
learning framework. Our work is a generalization of the Winner-Take-All (WTA)
hash family and naturally enjoys all the numeric stability benefits of rank
correlation measures while being optimized to achieve high precision at very
short code length. We compare with several state-of-the-art hashing algorithms
in both supervised and unsupervised domains, showing superior performance in a
number of data sets. | computer science |
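The Winner-Take-All (WTA) hash family that this work generalizes can be sketched in a few lines; the sketch below (with invented parameter names) shows why such codes depend only on the rank order of feature values, not their magnitudes.

```python
import random

def wta_hash(x, n_codes=16, k=4, seed=0):
    """Winner-Take-All hash: each output code is the position of the
    largest value among k randomly chosen (and ordered) dimensions,
    so the code depends only on the rank order of feature values."""
    rng = random.Random(seed)
    codes = []
    for _ in range(n_codes):
        idx = rng.sample(range(len(x)), k)   # random ordered subset of dims
        vals = [x[i] for i in idx]
        codes.append(vals.index(max(vals)))  # winner position in [0, k)
    return codes
```

Because only the argmax position is recorded, any strictly increasing transform of the features leaves the codes unchanged, which is the numeric-stability property of rank correlation measures the abstract refers to.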
40,868 | Costing Generated Runtime Execution Plans for Large-Scale Machine
Learning Programs | cs.DC | Declarative large-scale machine learning (ML) aims at the specification of ML
algorithms in a high-level language and automatic generation of hybrid runtime
execution plans ranging from single node, in-memory computations to distributed
computations on MapReduce (MR) or similar frameworks like Spark. The
compilation of large-scale ML programs exhibits many opportunities for
automatic optimization. Advanced cost-based optimization techniques
require---as a fundamental precondition---an accurate cost model for evaluating
the impact of optimization decisions. In this paper, we share insights into a
simple and robust yet accurate technique for costing alternative runtime
execution plans of ML programs. Our cost model relies on generating and costing
runtime plans in order to automatically reflect all successive optimization
phases. Costing runtime plans also captures control flow structures such as
loops and branches, and a variety of cost factors like IO, latency, and
computation costs. Finally, we linearize all these cost factors into a single
measure of expected execution time. Within SystemML, this cost model is
leveraged by several advanced optimizers like resource optimization and global
data flow optimization. We share our lessons learned in order to provide
foundations for the optimization of ML programs. | computer science |
40,869 | On Lower and Upper Bounds for Smooth and Strongly Convex Optimization
Problems | math.OC | We develop a novel framework to study smooth and strongly convex optimization
algorithms, both deterministic and stochastic. Focusing on quadratic functions
we are able to examine optimization algorithms as a recursive application of
linear operators. This, in turn, reveals a powerful connection between a class
of optimization algorithms and the analytic theory of polynomials whereby new
lower and upper bounds are derived. Whereas existing lower bounds for this
setting are only valid when the dimensionality scales with the number of
iterations, our lower bound holds in the natural regime where the
dimensionality is fixed. Lastly, expressing it as an optimal solution for the
corresponding optimization problem over polynomials, as formulated by our
framework, we present a novel systematic derivation of Nesterov's well-known
Accelerated Gradient Descent method. This rather natural interpretation of AGD
contrasts with earlier ones which lacked a simple, yet solid, motivation. | computer science |
40,870 | Analysis of Spectrum Occupancy Using Machine Learning Algorithms | cs.NI | In this paper, we analyze the spectrum occupancy using different machine
learning techniques. Both supervised techniques (naive Bayesian classifier
(NBC), decision trees (DT), support vector machine (SVM), linear regression
(LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to
find the best technique with the highest classification accuracy (CA). A
detailed comparison of the supervised and unsupervised algorithms in terms of
the computational time and classification accuracy is performed. The classified
occupancy status is further utilized to evaluate the probability of secondary
user outage for the future time slots, which can be used by system designers to
define spectrum allocation and spectrum sharing policies. Numerical results
show that SVM is the best algorithm among all the supervised and unsupervised
classifiers. Based on this, we propose a new SVM algorithm by combining it
with the firefly algorithm (FFA), which is shown to outperform all other
algorithms. | computer science |
40,871 | RankMap: A Platform-Aware Framework for Distributed Learning from Dense
Datasets | cs.DC | This paper introduces RankMap, a platform-aware end-to-end framework for
efficient execution of a broad class of iterative learning algorithms for
massive and dense datasets. Our framework exploits data structure to factorize
it into an ensemble of lower rank subspaces. The factorization creates sparse
low-dimensional representations of the data, a property which is leveraged to
devise effective mapping and scheduling of iterative learning algorithms on the
distributed computing machines. We provide two APIs, one matrix-based and one
graph-based, which facilitate automated adoption of the framework for
performing several contemporary learning applications. To demonstrate the
utility of RankMap, we solve sparse recovery and power iteration problems on
various real-world datasets with up to 1.8 billion non-zeros. Our evaluations
are performed on Amazon EC2 and IBM iDataPlex servers using up to 244 cores.
The results demonstrate up to two orders of magnitude improvements in memory
usage, execution speed, and bandwidth compared with the best reported prior
work, while achieving the same level of learning accuracy. | computer science |
40,872 | Average Distance Queries through Weighted Samples in Graphs and Metric
Spaces: High Scalability with Tight Statistical Guarantees | cs.SI | The average distance from a node to all other nodes in a graph, or from a
query point in a metric space to a set of points, is a fundamental quantity in
data analysis. The inverse of the average distance, known as the (classic)
closeness centrality of a node, is a popular importance measure in the study of
social networks. We develop novel structural insights on the sparsifiability of
the distance relation via weighted sampling. Based on that, we present highly
practical algorithms with strong statistical guarantees for fundamental
problems. We show that the average distance (and hence the centrality) for all
nodes in a graph can be estimated using $O(\epsilon^{-2})$ single-source
distance computations. For a set $V$ of $n$ points in a metric space, we show
that after preprocessing which uses $O(n)$ distance computations we can compute
a weighted sample $S\subset V$ of size $O(\epsilon^{-2})$ such that the average
distance from any query point $v$ to $V$ can be estimated from the distances
from $v$ to $S$. Finally, we show that for a set of points $V$ in a metric
space, we can estimate the average pairwise distance using $O(n+\epsilon^{-2})$
distance computations. The estimate is based on a weighted sample of
$O(\epsilon^{-2})$ pairs of points, which is computed using $O(n)$ distance
computations. Our estimates are unbiased with normalized mean square error
(NRMSE) of at most $\epsilon$. Increasing the sample size by a $O(\log n)$
factor ensures that the probability that the relative error exceeds $\epsilon$
is polynomially small. | computer science |
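A plain uniform-sampling version of the pairwise-distance estimator can illustrate the $O(\epsilon^{-2})$-sample idea; note this is a simplification invented here for illustration, not the paper's weighted-sampling scheme with its tight NRMSE guarantees.

```python
import math
import random

def avg_pairwise_distance(points, n_samples=4000, seed=0):
    """Monte-Carlo estimate of the average pairwise distance from
    uniformly sampled distinct pairs. A uniform-sampling stand-in for
    the paper's weighted scheme, shown only to illustrate the
    O(eps^-2)-sample estimator idea."""
    rng = random.Random(seed)
    n = len(points)
    total = 0.0
    for _ in range(n_samples):
        i, j = rng.sample(range(n), 2)       # uniform distinct pair
        total += math.dist(points[i], points[j])
    return total / n_samples
```

Each sampled pair is an unbiased draw of a pairwise distance, so the sample mean is an unbiased estimate whose relative error shrinks as $O(1/\sqrt{\text{samples}})$.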
40,873 | Sparse plus low-rank autoregressive identification in neuroimaging time
series | cs.LG | This paper considers the problem of identifying multivariate autoregressive
(AR) sparse plus low-rank graphical models. Based on the corresponding problem
formulation recently presented, we use the alternating direction method of
multipliers (ADMM) to efficiently solve it and scale it to sizes encountered in
neuroimaging applications. We apply this decomposition on synthetic and real
neuroimaging datasets with a specific focus on the information encoded in the
low-rank structure of our model. In particular, we illustrate that this
information captures the spatio-temporal structure of the original data,
generalizing classical component analysis approaches. | computer science |
40,874 | Comparison of Bayesian predictive methods for model selection | stat.ME | The goal of this paper is to compare several widely used Bayesian model
selection methods in practical model selection problems, highlight their
differences and give recommendations about the preferred approaches. We focus
on the variable subset selection for regression and classification and perform
several numerical experiments using both simulated and real world data. The
results show that the optimization of a utility estimate such as the
cross-validation (CV) score is liable to finding overfitted models due to
relatively high variance in the utility estimates when the data is scarce. This
can also lead to substantial selection induced bias and optimism in the
performance evaluation for the selected model. From a predictive viewpoint,
best results are obtained by accounting for model uncertainty by forming the
full encompassing model, such as the Bayesian model averaging solution over the
candidate models. If the encompassing model is too complex, it can be robustly
simplified by the projection method, in which the information of the full model
is projected onto the submodels. This approach is substantially less prone to
overfitting than selection based on CV-score. Overall, the projection method
also appears to outperform the maximum a posteriori model and the selection of
the most probable variables. The study also demonstrates that the model
selection can greatly benefit from using cross-validation outside the searching
process both for guiding the model size selection and assessing the predictive
performance of the finally selected model. | computer science |
40,875 | Founding Digital Currency on Imprecise Commodity | cs.CY | Current digital currency schemes provide instantaneous exchange on precise
commodity, in which "precise" means a buyer can possibly verify the function of
the commodity without error. However, imprecise commodities, e.g. statistical
data, with error existing are abundant in digital world. Existing digital
currency schemes do not offer a mechanism to help the buyer for payment
decision on precision of commodity, which may lead the buyer to a dilemma
between having to buy and being unconfident. In this paper, we design a
currency scheme, IDCS, for imprecise digital commodities. IDCS completes a trade
in three stages of handshake between a buyer and providers. We present an IDCS
prototype implementation that assigns weights to the trustworthiness of the
providers, and calculates a confidence level for the buyer to decide the
quality of an imprecise commodity. In experiments, we characterize the
performance of IDCS prototype under varying impact factors. | computer science |
40,876 | Learning Definite Horn Formulas from Closure Queries | cs.LG | A definite Horn theory is a set of n-dimensional Boolean vectors whose
characteristic function is expressible as a definite Horn formula, that is, as
a conjunction of definite Horn clauses. The class of definite Horn theories is
known to be learnable under different query learning settings, such as learning
from membership and equivalence queries or learning from entailment. We propose
yet a different type of query: the closure query. Closure queries are a natural
extension of membership queries and also a variant, appropriate in the context
of definite Horn formulas, of the so-called correction queries. We present an
algorithm that learns conjunctions of definite Horn clauses in polynomial time,
using closure and equivalence queries, and show how it relates to the canonical
Guigues-Duquenne basis for implicational systems. We also show how the
different query models mentioned relate to each other by either showing
full-fledged reductions by means of query simulation (where possible), or by
showing their connections in the context of particular algorithms that use them
for learning definite Horn formulas. | computer science |
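The closure operator that a closure query evaluates can be computed by naive forward chaining over the definite Horn clauses; a minimal sketch, with a clause representation invented here for illustration:

```python
def closure(atoms, clauses):
    """Forward-chaining closure of a set of atoms under definite Horn
    clauses, given as (body, head) pairs: repeatedly fire any clause
    whose whole body is already in the set. A closure query returns
    exactly this closed set. The representation is illustrative."""
    closed = set(atoms)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in closed and set(body) <= closed:
                closed.add(head)
                changed = True
    return closed
```

For example, with clauses a→b, bc→d, and d→e, the closure of {a} adds only b, while the closure of {a, c} cascades through d and e.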
40,877 | Extracting Implicit Social Relation for Social Recommendation Techniques
in User Rating Prediction | cs.SI | Recommendation plays an increasingly important role in our daily lives.
Recommender systems automatically suggest items to users that might be
interesting for them. Recent studies illustrate that incorporating social trust
in Matrix Factorization methods demonstrably improves accuracy of rating
prediction. Such approaches mainly use the trust scores explicitly expressed by
users. However, it is often challenging to have users provide explicit trust
scores of each other. There exist quite a few works, which propose Trust
Metrics to compute and predict trust scores between users based on their
interactions. In this paper, first we present how social relation can be
extracted from users' ratings to items by describing Hellinger distance between
users in recommender systems. Then, we propose to incorporate the predicted
trust scores into social matrix factorization models. By analyzing social
relation extraction from three well-known real-world datasets, in which both
trust and recommendation data are available, we conclude that using the implicit
social relation in social recommendation techniques has almost the same
performance compared to the actual trust scores explicitly expressed by users.
Hence, we build our method, called Hell-TrustSVD, on top of the
state-of-the-art social recommendation technique to incorporate both the
extracted implicit social relations and ratings given by users on the
prediction of items for an active user. To the best of our knowledge, this is
the first work to extend TrustSVD with extracted social trust information. The
experimental results support the idea that employing implicit trust in matrix
factorization, whenever explicit trust is not available, can perform much better
than the state-of-the-art approaches in user rating prediction. | computer science |
40,878 | Understanding and Optimizing the Performance of Distributed Machine
Learning Applications on Apache Spark | cs.DC | In this paper we explore the performance limits of Apache Spark for machine
learning applications. We begin by analyzing the characteristics of a
state-of-the-art distributed machine learning algorithm implemented in Spark
and compare it to an equivalent reference implementation using the high
performance computing framework MPI. We identify critical bottlenecks of the
Spark framework and carefully study their implications on the performance of
the algorithm. In order to improve Spark performance we then propose a number
of practical techniques to alleviate some of its overheads. However, optimizing
computational efficiency and framework related overheads is not the only key to
performance -- we demonstrate that in order to get the best performance out of
any implementation it is necessary to carefully tune the algorithm to the
respective trade-off between computation time and communication latency. The
optimal trade-off depends on both the properties of the distributed algorithm
as well as infrastructure and framework-related characteristics. Finally, we
apply these technical and algorithmic optimizations to three different
distributed linear machine learning algorithms that have been implemented in
Spark. We present results using five large datasets and demonstrate that by
using the proposed optimizations, we can achieve a reduction in the performance
difference between Spark and MPI from 20x to 2x. | computer science |
40,879 | Bridging Medical Data Inference to Achilles Tendon Rupture
Rehabilitation | cs.LG | Imputing incomplete medical tests and predicting patient outcomes are crucial
for guiding the decision making for therapy, such as after an Achilles Tendon
Rupture (ATR). We formulate the problem of data imputation and prediction for
ATR relevant medical measurements into a recommender system framework. By
applying MatchBox, which is a collaborative filtering approach, on a real
dataset collected from 374 ATR patients, we aim at offering personalized
medical data imputation and prediction. In this work, we show the feasibility
of this approach and discuss potential research directions by conducting
initial qualitative evaluations. | computer science |
40,880 | Interactive Prior Elicitation of Feature Similarities for Small Sample
Size Prediction | cs.LG | Regression under the "small $n$, large $p$" conditions, of small sample size
$n$ and large number of features $p$ in the learning data set, is a recurring
setting in which learning from data is difficult. With prior knowledge about
relationships of the features, $p$ can effectively be reduced, but explicating
such prior knowledge is difficult for experts. In this paper we introduce a new
method for eliciting expert prior knowledge about the similarity of the roles
of features in the prediction task. The key idea is to use an interactive
multidimensional-scaling (MDS) type scatterplot display of the features to
elicit the similarity relationships, and then use the elicited relationships in
the prior distribution of prediction parameters. Specifically, for learning to
predict a target variable with Bayesian linear regression, the feature
relationships are used to construct a Gaussian prior with a full covariance
matrix for the regression coefficients. Evaluation of our method in experiments
with simulated and real users on text data confirm that prior elicitation of
feature similarities improves prediction accuracy. Furthermore, elicitation
with an interactive scatterplot display outperforms straightforward elicitation
where the users choose feature pairs from a feature list. | computer science |
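How elicited feature similarities enter the model can be sketched for a two-feature toy case: the off-diagonal of the prior covariance couples the regression coefficients, pulling similar features toward similar weights. Everything below (data, covariance values, helper names) is invented for the sketch, not the paper's construction.

```python
def _inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def posterior_mean(X, y, prior_cov, noise_var=1.0):
    """Posterior mean of Bayesian linear regression with prior
    w ~ N(0, prior_cov): m = (X'X/s^2 + S0^-1)^-1 X'y/s^2.
    Restricted to two features; the off-diagonal of prior_cov plays the
    role of the elicited feature similarity."""
    xtx = [[sum(r[i] * r[j] for r in X) / noise_var for j in range(2)]
           for i in range(2)]
    s0inv = _inv2(prior_cov)
    A = [[xtx[i][j] + s0inv[i][j] for j in range(2)] for i in range(2)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) / noise_var for i in range(2)]
    Ainv = _inv2(A)
    return [Ainv[i][0] * xty[0] + Ainv[i][1] * xty[1] for i in range(2)]
```

With a nearly flat prior the posterior mean recovers the least-squares fit; with a strongly correlated prior the two coefficients are shrunk toward each other, which is the intended effect of the elicited similarity.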
40,881 | Clipper: A Low-Latency Online Prediction Serving System | cs.DC | Machine learning is being deployed in a growing number of applications which
demand real-time, accurate, and robust predictions under heavy query load.
However, most machine learning frameworks and systems only address model
training and not deployment.
In this paper, we introduce Clipper, a general-purpose low-latency prediction
serving system. Interposing between end-user applications and a wide range of
machine learning frameworks, Clipper introduces a modular architecture to
simplify model deployment across frameworks and applications. Furthermore, by
introducing caching, batching, and adaptive model selection techniques, Clipper
reduces prediction latency and improves prediction throughput, accuracy, and
robustness without modifying the underlying machine learning frameworks. We
evaluate Clipper on four common machine learning benchmark datasets and
demonstrate its ability to meet the latency, accuracy, and throughput demands
of online serving applications. Finally, we compare Clipper to the TensorFlow
Serving system and demonstrate that we are able to achieve comparable
throughput and latency while enabling model composition and online learning to
improve accuracy and render more robust predictions. | computer science |
40,882 | An Empirical Study of ADMM for Nonconvex Problems | math.OC | The alternating direction method of multipliers (ADMM) is a common
optimization tool for solving constrained and non-differentiable problems. We
provide an empirical study of the practical performance of ADMM on several
nonconvex applications, including l0 regularized linear regression, l0
regularized image denoising, phase retrieval, and eigenvector computation. Our
experiments suggest that ADMM performs well on a broad class of non-convex
problems. Moreover, recently proposed adaptive ADMM methods, which
automatically tune penalty parameters as the method runs, can improve algorithm
efficiency and solution quality compared to ADMM with a non-tuned penalty. | computer science |
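A minimal sketch of ADMM on one of the listed nonconvex problems, $\ell_0$-regularized denoising ($\min_x \tfrac12\|x-b\|^2 + \lambda\|x\|_0$): the $x$-step is a closed-form quadratic and the $z$-step is the $\ell_0$ prox, i.e. hard thresholding. The signal, penalty, and parameter values are illustrative, not from the paper's experiments.

```python
def admm_l0_denoise(b, lam=0.5, rho=1.0, iters=100):
    """ADMM for min_x 0.5||x - b||^2 + lam*||x||_0 via the splitting
    x = z: the x-step is a closed-form quadratic, the z-step is the
    prox of (lam/rho)*||.||_0, i.e. hard thresholding at sqrt(2*lam/rho)."""
    n = len(b)
    x, z, u = list(b), list(b), [0.0] * n
    t = (2.0 * lam / rho) ** 0.5          # hard-threshold level
    for _ in range(iters):
        # x-update: argmin 0.5(x-b)^2 + (rho/2)(x - z + u)^2
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: elementwise hard threshold of x + u
        z = [x[i] + u[i] if abs(x[i] + u[i]) > t else 0.0 for i in range(n)]
        # scaled dual update
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z

z = admm_l0_denoise([3.0, 0.05, -2.0, 0.01])
```

On this toy signal the iterates keep the two large entries essentially unchanged and zero out the two small ones, matching the abstract's observation that ADMM behaves well on such nonconvex instances.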
40,883 | Joint Bayesian Gaussian discriminant analysis for speaker verification | cs.SD | State-of-the-art i-vector based speaker verification relies on variants of
Probabilistic Linear Discriminant Analysis (PLDA) for discriminant analysis. We
are mainly motivated by the recent work of the joint Bayesian (JB) method,
which is originally proposed for discriminant analysis in face verification. We
apply JB to speaker verification and make three contributions beyond the
original JB. 1) In contrast to the EM iterations with approximated statistics
in the original JB, the EM iterations with exact statistics are employed and
give better performance. 2) We propose to do simultaneous diagonalization (SD)
of the within-class and between-class covariance matrices to achieve efficient
testing, which has broader application scope than the SVD-based efficient
testing method in the original JB. 3) We scrutinize similarities and
differences between various Gaussian PLDAs and JB, complementing the previous
analysis of comparing JB only with Prince-Elder PLDA. Extensive experiments are
conducted on NIST SRE10 core condition 5, empirically validating the
superiority of JB with faster convergence rate and 9-13% EER reduction compared
with state-of-the-art PLDA. | computer science |
40,884 | Corporate Disruption in the Science of Machine Learning | cs.CY | This MSc dissertation considers the effects of the current corporate interest
on researchers in the field of machine learning. Situated within the field's
cyclical history of academic, public and corporate interest, this dissertation
investigates how current researchers view recent developments and negotiate
their own research practices within an environment of increased commercial
interest and funding. The original research consists of in-depth interviews
with 12 machine learning researchers working in both academia and industry.
Building on theory from science, technology and society studies, this
dissertation problematizes the traditional narratives of the neoliberalization
of academic research by allowing the researchers themselves to discuss how
their career choices, working environments and interactions with others in the
field have been affected by the reinvigorated corporate interest of recent
years. | computer science |
40,885 | TF.Learn: TensorFlow's High-level Module for Distributed Machine
Learning | cs.DC | TF.Learn is a high-level Python module for distributed machine learning
inside TensorFlow. It provides an easy-to-use Scikit-learn style interface to
simplify the process of creating, configuring, training, evaluating, and
experimenting with a machine learning model. TF.Learn integrates a wide range of
state-of-the-art machine learning algorithms built on top of TensorFlow's low-level
APIs for small to large-scale supervised and unsupervised problems. This module
focuses on bringing machine learning to non-specialists using a general-purpose
high-level language as well as researchers who want to implement, benchmark,
and compare their new methods in a structured environment. Emphasis is put on
ease of use, performance, documentation, and API consistency. | computer science |
40,886 | Identification of Cancer Patient Subgroups via Smoothed Shortest Path
Graph Kernel | cs.CE | Characterizing patient somatic mutations through next-generation sequencing
technologies opens up possibilities for refining cancer subtypes. However,
catalogues of mutations reveal that only a small fraction of the genes are
altered frequently in patients. On the other hand different genomic alterations
may perturb the same pathways. We propose a novel clustering procedure that
quantifies the similarities of patients from their mutational profile on
pathways via a novel graph kernel. We represent each KEGG pathway as an
undirected graph. For each patient the vertex labels are assigned based on her
altered genes. Smoothed shortest path graph kernel (smSPK) evaluates each pair
of patients by comparing their vertex labeled pathway graphs. Our clustering
procedure involves two steps: the smSPK kernel matrices derived for each pathway
are input to the kernel k-means algorithm and each pathway is evaluated
individually. In the next step, only those pathways that are successful are
combined into a single kernel that is input to kernel k-means to stratify patients.
Evaluating the procedure on simulated data showed that smSPK clusters patients
with up to 88\% accuracy. Finally, to identify ovarian cancer patient subgroups, we
apply our methodology to The Cancer Genome Atlas ovarian data, which involves 481
patients. The identified subgroups are evaluated through survival analysis.
Grouping patients into four clusters yields patient groups that are
significantly different in their survival times ($p$-value $\le 0.005$). | computer science |
40,887 | Feature Learning for Chord Recognition: The Deep Chroma Extractor | cs.SD | We explore frame-level audio feature learning for chord recognition using
artificial neural networks. We present the argument that chroma vectors
potentially hold enough information to model harmonic content of audio for
chord recognition, but that standard chroma extractors compute too noisy
features. This leads us to propose a learned chroma feature extractor based on
artificial neural networks. It is trained to compute chroma features that
encode harmonic information important for chord recognition, while being robust
to irrelevant interferences. We achieve this by feeding the network an audio
spectrum with context instead of a single frame as input. This way, the network
can learn to selectively compensate noise and resolve harmonic ambiguities.
We compare the resulting features to hand-crafted ones by using a simple
linear frame-wise classifier for chord recognition on various data sets. The
results show that the learned feature extractor produces superior chroma
vectors for chord recognition. | computer science |
40,888 | A Fully Convolutional Deep Auditory Model for Musical Chord Recognition | cs.LG | Chord recognition systems depend on robust feature extraction pipelines.
While these pipelines are traditionally hand-crafted, recent advances in
end-to-end machine learning have begun to inspire researchers to explore
data-driven methods for such tasks. In this paper, we present a chord
recognition system that uses a fully convolutional deep auditory model for
feature extraction. The extracted features are processed by a Conditional
Random Field that decodes the final chord sequence. Both processing stages are
trained automatically and do not require expert knowledge for optimising
parameters. We show that the learned auditory system extracts musically
interpretable features, and that the proposed chord recognition system achieves
results on par or better than state-of-the-art algorithms. | computer science |
40,889 | On the Potential of Simple Framewise Approaches to Piano Transcription | cs.SD | In an attempt at exploring the limitations of simple approaches to the task
of piano transcription (as usually defined in MIR), we conduct an in-depth
analysis of neural network-based framewise transcription. We systematically
compare different popular input representations for transcription systems to
determine the ones most suitable for use with neural networks. Exploiting
recent advances in training techniques and new regularizers, and taking into
account hyper-parameter tuning, we show that it is possible, by simple
bottom-up frame-wise processing, to obtain a piano transcriber that outperforms
the current published state of the art on the publicly available MAPS dataset
-- without any complex post-processing steps. Thus, we propose this simple
approach as a new baseline for this dataset, for future transcription research
to build on and improve. | computer science |
40,890 | Neural networks based EEG-Speech Models | cs.SD | In this paper, we propose an end-to-end neural network (NN) based EEG-speech
(NES) modeling framework, in which three network structures are developed to
map imagined EEG signals to phonemes. The proposed NES models incorporate a
language model based EEG feature extraction layer, an acoustic feature mapping
layer, and a restricted Boltzmann machine (RBM) based the feature learning
layer. The NES models can jointly realize the representation of multichannel
EEG signals and the projection of acoustic speech signals. Among three proposed
NES models, two augmented networks utilize spoken EEG signals as either bias or
gate information to strengthen the feature learning and translation of imagined
EEG signals. Experimental results show that all three proposed NES models
outperform the baseline support vector machine (SVM) method on EEG-speech
classification. With respect to binary classification, our approach achieves
comparable results relative to the deep belief network approach. | computer science |
40,891 | VAST: The Virtual Acoustic Space Traveler Dataset | cs.SD | This paper introduces a new paradigm for sound source localization referred
to as virtual acoustic space traveling (VAST) and presents a first dataset
designed for this purpose. Existing sound source localization methods are
either based on an approximate physical model (physics-driven) or on a
specific-purpose calibration set (data-driven). With VAST, the idea is to learn
a mapping from audio features to desired audio properties using a massive
dataset of simulated room impulse responses. This virtual dataset is designed
to be maximally representative of the potential audio scenes that the
considered system may be evolving in, while remaining reasonably compact. We
show that virtually-learned mappings on this dataset generalize to real data,
overcoming some intrinsic limitations of traditional binaural sound
localization methods based on time differences of arrival. | computer science |
40,892 | Challenging Personalized Video Recommendation | cs.IR | The online videos are generated at an unprecedented speed in recent years. As
a result, how to generate personalized recommendation from the large volume of
videos becomes more and more challenging. In this paper, we propose to extract
the non-textual contents from the videos themselves to enhance the personalized
video recommendation. The change of the content types makes us study three
issues in this paper. The first issue is what non-textual contents are helpful.
Considering the users are attracted by the videos in different aspects,
multiple audio and visual features are extracted, encoded and transformed to
represent the video contents in the recommender system for the first time. The
second issue is how to use the non-textual contents to generate accurate
personalized recommendation. We reproduce the existing methods and find that
they do not perform well with the non-textual contents due to the mismatch
between the features and the learning methods. To address this problem, we
propose a new method in this paper. Our experiments show that the proposed
method is more accurate whether the video content features are non-textual or
textual. | computer science |
40,893 | Classification and Learning-to-rank Approaches for Cross-Device Matching
at CIKM Cup 2016 | cs.IR | In this paper, we propose two methods for tackling the problem of
cross-device matching for online advertising at CIKM Cup 2016. The first method
considers the matching problem as a binary classification task and solves it by
utilizing ensemble learning techniques. The second method defines the matching
problem as a ranking task and effectively solves it using learning-to-rank
algorithms. The results show that the proposed methods obtain promising
results, in which the ranking-based method outperforms the classification-based
method for the task. | computer science |
40,894 | Distributed Dictionary Learning | math.OC | The paper studies distributed Dictionary Learning (DL) problems where the
learning task is distributed over a multi-agent network with time-varying
(nonsymmetric) connectivity. This formulation is relevant, for instance, in
big-data scenarios where massive amounts of data are collected/stored in
different spatial locations and it is infeasible to aggregate and/or process
all the data in a fusion center, due to resource limitations, communication
overhead or privacy considerations. We develop a general distributed
algorithmic framework for the (nonconvex) DL problem and establish its
asymptotic convergence. The new method hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a gradient tracking mechanism
instrumental to locally estimate the missing global information; and ii) a
consensus step, as a mechanism to distribute the computations among the agents.
To the best of our knowledge, this is the first distributed algorithm with
provable convergence for the DL problem and, more generally, for bi-convex
optimization problems over (time-varying) directed graphs. | computer science |
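The gradient-tracking mechanism in (i) can be illustrated on a toy scalar consensus problem; all names, parameters, and the quadratic objective here are illustrative assumptions, not the paper's DL algorithm:

```python
import numpy as np

# Toy sketch of gradient tracking over a ring network: 5 agents cooperatively
# minimize sum_i (x - a_i)^2 / 2, whose global optimum is mean(a).
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = len(a)

# Doubly stochastic mixing matrix for a ring graph (each agent averages
# itself with its two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

grad = lambda x: x - a      # local gradients, evaluated element-wise
x = np.zeros(n)             # local estimates, one per agent
y = grad(x)                 # gradient trackers, initialized to local gradients
alpha = 0.1                 # step size (illustrative choice)

for _ in range(1000):
    x_new = W @ x - alpha * y              # consensus step + descent step
    y = W @ y + grad(x_new) - grad(x)      # track the network-average gradient
    x = x_new

print(x)  # all agents converge to mean(a) = 3.0
```

The tracker `y` lets each agent estimate the missing global (average) gradient using only neighbor communication, which is the role it plays in the paper's SCA framework.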
40,895 | On Coreset Constructions for the Fuzzy $K$-Means Problem | cs.LG | The fuzzy $K$-means problem is a generalization of the $K$-means problem to
soft clusterings. Although popular in practice, the first
$(1+\epsilon)$-approximation algorithms for this problem have been proposed
only recently. In this paper, we pursue the analysis of the fuzzy $K$-means
problem further by making use of coresets. A coreset is an efficient
representative of the input data set that preserves certain properties of the
input, such as the quality of a solution.
In this paper, we propose a novel proof technique for coresets, which makes
use of the notion of weak coresets. Following this approach, we are able to
show that small coresets for the fuzzy $K$-means problem exist. More precisely,
we show that coresets with size poly-logarithmic in the number of input data
points can be constructed in linear time. With these coresets, we are able to
improve the asymptotic runtime of one of the best approximation algorithms for
the fuzzy $K$-means problem. | computer science |
40,896 | Logic-based Clustering and Learning for Time-Series Data | cs.LG | To effectively analyze and design cyber-physical systems (CPS), designers
today have to combat the data deluge problem, i.e., the burden of processing
intractably large amounts of data produced by complex models and experiments.
In this work, we utilize monotonic Parametric Signal Temporal Logic (PSTL) to
design features for unsupervised classification of time series data. This
enables using off-the-shelf machine learning tools to automatically cluster
similar traces with respect to a given PSTL formula. We demonstrate how this
technique produces interpretable formulas that are amenable to analysis and
understanding using a few representative examples. We illustrate this with case
studies related to automotive engine testing, highway traffic analysis, and
auto-grading massive open online courses. | computer science |
40,897 | ASAP: Asynchronous Approximate Data-Parallel Computation | cs.DC | Emerging workloads, such as graph processing and machine learning, are
approximate because of the scale of data involved and the stochastic nature of
the underlying algorithms. These algorithms are often distributed over multiple
machines using bulk-synchronous processing (BSP) or other synchronous
processing paradigms such as map-reduce. However, data parallel processing
primitives such as repeated barrier and reduce operations introduce high
synchronization overheads. Hence, many existing data-processing platforms use
asynchrony and staleness to improve data-parallel job performance. Often, these
systems simply change the synchronous communication to asynchronous between the
worker nodes in the cluster. This improves the throughput of data processing
but results in poor accuracy of the final output since different workers may
progress at different speeds and process inconsistent intermediate outputs.
In this paper, we present ASAP, a model that provides asynchronous and
approximate processing semantics for data-parallel computation. ASAP provides
fine-grained worker synchronization using NOTIFY-ACK semantics that allows
independent workers to run asynchronously. ASAP also provides stochastic reduce
that provides approximate but guaranteed convergence to the same result as an
aggregated all-reduce. In our results, we show that ASAP can reduce
synchronization costs and provides 2-10X speedups in convergence and up to 10X
savings in network costs for distributed machine learning applications and
provides strong convergence guarantees. | computer science |
40,898 | Sequence-to-point learning with neural networks for nonintrusive load
monitoring | stat.AP | Energy disaggregation (a.k.a. nonintrusive load monitoring, NILM), a
single-channel blind source separation problem, aims to decompose the mains
signal, which records the whole-house electricity consumption, into appliance-wise
readings. This problem is difficult because it is inherently unidentifiable.
Recent approaches have shown that the identifiability problem could be reduced
by introducing domain knowledge into the model. Deep neural networks have been
shown to be a promising approach for these problems, but sliding windows are
necessary to handle the long sequences which arise in signal processing
problems, which raises issues about how to combine predictions from different
sliding windows. In this paper, we propose sequence-to-point learning, where
the input is a window of the mains and the output is a single point of the
target appliance. We use convolutional neural networks to train the model.
Interestingly, we systematically show that the convolutional neural networks
can inherently learn the signatures of the target appliances, which are
automatically added into the model to reduce the identifiability problem. We
applied the proposed neural network approaches to real-world household energy
data, and show that the methods achieve state-of-the-art performance, improving
two standard error measures by 84% and 92%. | computer science |
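The sequence-to-point setup described above (a window of the mains in, a single appliance reading at the window midpoint out) can be sketched as a data-preparation step; the CNN itself is omitted, and the toy signals and window size are illustrative assumptions:

```python
import numpy as np

def seq2point_pairs(mains, appliance, window=5):
    """Build sequence-to-point training pairs: each input is a window of the
    mains signal, each target is the appliance reading at the window midpoint."""
    pad = window // 2
    padded = np.pad(mains, pad, mode="edge")   # pad ends so every point gets a window
    X = np.stack([padded[i:i + window] for i in range(len(mains))])
    y = appliance                              # one target point per window midpoint
    return X, y

mains = np.arange(10, dtype=float)             # toy mains signal
appliance = mains * 0.5                        # toy appliance signal
X, y = seq2point_pairs(mains, appliance, window=5)
print(X.shape, y.shape)  # (10, 5) (10,)
```

Each row of `X` would be fed to the convolutional network, which regresses the single scalar in `y`, avoiding the overlapping-window combination problem of sequence-to-sequence models.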
40,899 | The on-line shortest path problem under partial monitoring | cs.LG | The on-line shortest path problem is considered under various models of
partial monitoring. Given a weighted directed acyclic graph whose edge weights
can change in an arbitrary (adversarial) way, a decision maker has to choose in
each round of a game a path between two distinguished vertices such that the
loss of the chosen path (defined as the sum of the weights of its composing
edges) be as small as possible. In a setting generalizing the multi-armed
bandit problem, after choosing a path, the decision maker learns only the
weights of those edges that belong to the chosen path. For this problem, an
algorithm is given whose average cumulative loss in n rounds exceeds that of
the best path, matched off-line to the entire sequence of the edge weights, by
a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on
the number of edges of the graph. The algorithm can be implemented with linear
complexity in the number of rounds n and in the number of edges. An extension
to the so-called label efficient setting is also given, in which the decision
maker is informed about the weights of the edges corresponding to the chosen
path at a total of m << n time instances. Another extension is shown where the
decision maker competes against a time-varying path, a generalization of the
problem of tracking the best expert. A version of the multi-armed bandit
setting for shortest path is also discussed where the decision maker learns
only the total weight of the chosen path but not the weights of the individual
edges on the path. Applications to routing in packet switched networks along
with simulation results are also presented. | computer science |
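The bandit feedback model underlying this setting can be sketched with a generic EXP3 learner; this is not the paper's path-structured algorithm, and the two "paths", their rewards, and the parameters are illustrative assumptions:

```python
import math
import random

def exp3(reward_fns, horizon, gamma=0.1, seed=0):
    """Generic EXP3 for adversarial bandit feedback: at each round only the
    chosen arm's reward (in [0, 1]) is observed."""
    rng = random.Random(seed)
    k = len(reward_fns)
    w = [1.0] * k
    total = 0.0
    for t in range(horizon):
        s = sum(w)
        p = [(1 - gamma) * wi / s + gamma / k for wi in w]  # mix in exploration
        i = rng.choices(range(k), weights=p)[0]             # sample an arm
        r = reward_fns[i](t)                                # observe only this arm
        total += r
        w[i] *= math.exp(gamma * (r / p[i]) / k)            # importance-weighted update
    return total

# two "paths": one consistently good, one consistently bad (toy rewards)
total = exp3([lambda t: 0.9, lambda t: 0.1], horizon=2000)
print(total / 2000)  # average reward, close to the best arm's 0.9 minus exploration cost
```

The paper's contribution is, in effect, doing this over exponentially many paths while keeping the complexity polynomial in the number of edges.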
40,900 | A Note on the Inapproximability of Correlation Clustering | cs.LG | We consider inapproximability of the correlation clustering problem defined
as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+"
(similar) or "-" (dissimilar), correlation clustering seeks to partition the
vertices into clusters so that the number of pairs correctly (resp.
incorrectly) classified with respect to the labels is maximized (resp.
minimized). The two complementary problems are called MaxAgree and MinDisagree,
respectively, and have been studied on complete graphs, where every edge is
labeled, and general graphs, where some edge might not have been labeled.
Natural edge-weighted versions of both problems have been studied as well. Let
S-MaxAgree denote the weighted problem where all weights are taken from a set
S. We show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$
essentially belongs to the same hardness class in the following sense: if there
is a polynomial time algorithm that approximates S-MaxAgree within a factor of
$\lambda = O(\log{|V|})$ with high probability, then for any choice of S',
S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda
+ \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high
probability. A similar statement also holds for $S$-MinDisagree. This result
implies that it is hard (assuming $NP \neq RP$) to approximate unweighted
MaxAgree within a factor of $80/79-\epsilon$, improving upon a previously known
factor of $116/115-\epsilon$ by Charikar et al. \cite{Chari05}. | computer science |
40,901 | A Tutorial on Spectral Clustering | cs.DS | In recent years, spectral clustering has become one of the most popular
modern clustering algorithms. It is simple to implement, can be solved
efficiently by standard linear algebra software, and very often outperforms
traditional clustering algorithms such as the k-means algorithm. At first
glance spectral clustering appears slightly mysterious, and it is not obvious
why it works at all and what it really does. The goal of this tutorial
is to give some intuition on those questions. We describe different graph
Laplacians and their basic properties, present the most common spectral
clustering algorithms, and derive those algorithms from scratch by several
different approaches. Advantages and disadvantages of the different spectral
clustering algorithms are discussed. | computer science |
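A minimal sketch of the unnormalized spectral clustering recipe the tutorial derives: build a graph Laplacian, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and split by sign. The toy graph and two-cluster setting are illustrative choices:

```python
import numpy as np

# Toy similarity graph with two communities: nodes 0-2 and 3-5 are densely
# connected internally and joined by a single bridge edge (2, 3).
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # unnormalized graph Laplacian

# eigh returns eigenvalues in ascending order; the eigenvector of the
# second-smallest eigenvalue separates the two communities by sign.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)
print(labels)  # groups {0, 1, 2} and {3, 4, 5} (label names may be swapped)
```

For more than two clusters, one would stack the first k eigenvectors as rows and run k-means on them, as in the standard algorithms the tutorial presents.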
40,902 | Faster Rates for training Max-Margin Markov Networks | cs.LG | Structured output prediction is an important machine learning problem both in
theory and practice, and the max-margin Markov network (\mcn) is an effective
approach. All state-of-the-art algorithms for optimizing \mcn\ objectives take
at least $O(1/\epsilon)$ number of iterations to find an $\epsilon$ accurate
solution. Recent results in structured optimization suggest that faster rates
are possible by exploiting the structure of the objective function. Towards
this end \citet{Nesterov05} proposed an excessive gap reduction technique based
on Euclidean projections which converges in $O(1/\sqrt{\epsilon})$ iterations
on strongly convex functions. Unfortunately when applied to \mcn s, this
approach does not admit graphical model factorization which, as in many
existing algorithms, is crucial for keeping the cost per iteration tractable.
In this paper, we present a new excessive gap reduction technique based on
Bregman projections which admits graphical model factorization naturally, and
converges in $O(1/\sqrt{\epsilon})$ iterations. Compared with existing
algorithms, the convergence rate of our method has better dependence on
$\epsilon$ and other parameters of the problem, and can be easily kernelized. | computer science |
40,903 | Evaluation of E-Learners Behaviour using Different Fuzzy Clustering
Models: A Comparative Study | cs.CY | This paper introduces an evaluation methodology for e-learners'
behaviour that provides feedback to the decision makers in an e-learning
system. The learner's profile plays a crucial role in the evaluation process to
improve the performance of the e-learning process. The work focuses on
clustering the e-learners, based on their behaviour, into specific categories
that represent the learners' profiles. The learner classes are named regular,
workers, casual, bad, and absent. The work may answer the question of how to
turn bad students back into regular ones. Different fuzzy clustering
techniques, namely fuzzy c-means and kernelized fuzzy c-means, are used to find
the learners' categories and predict their profiles. The paper presents the
main phases of data description, preparation, feature selection, and experiment
design using different fuzzy clustering models. Analysis of the obtained
results and comparison with the real-world behaviour of those learners showed a
match of 78%. Fuzzy clustering reflects the learners' behaviour better than
crisp clustering, and a comparison between FCM and KFCM showed that KFCM is
much better than FCM at predicting the learners' behaviour. | computer science |
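A bare-bones sketch of the fuzzy c-means (FCM) updates compared in this study; the toy 2-D data and all parameters are illustrative assumptions, not the paper's e-learner dataset:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns soft memberships U (n x c) and centers V (c x d).
    m > 1 is the fuzzifier controlling how soft the memberships are."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)           # each row is a distribution over clusters
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                    # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = 1.0 / d ** (2 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)                    # standard FCM membership update
    return U, V

# two well-separated toy blobs
X = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10])
U, V = fuzzy_c_means(X)
print(U.round(2))
```

The kernelized variant (KFCM) replaces the Euclidean distance `d` with a kernel-induced distance, which is what the paper finds predicts learner behaviour better.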
40,904 | A Survey of Naïve Bayes Machine Learning approach in Text Document
Classification | cs.LG | Text document classification aims at associating one or more predefined
categories with a document, based on the likelihood suggested by a training set
of labeled documents. Many machine learning algorithms play a vital role in
training such systems, among which Na\"ive Bayes has some intriguing
properties: it is simple, easy to implement, and achieves good accuracy on
large datasets in spite of its na\"ive independence assumption. Given the
importance of the Na\"ive Bayes machine learning approach, this study surveys
its use for text document classification and the available statistical event
models. The survey also discusses and compares various feature selection
methods, along with the metrics related to text document classification. | computer science |
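A minimal multinomial Na\"ive Bayes with Laplace smoothing, the kind of classifier this survey covers; the toy corpus and labels are purely illustrative:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace (add-one) smoothing over word counts."""
    class_words = defaultdict(list)
    for doc, y in zip(docs, labels):
        class_words[y].extend(doc.split())
    vocab = {w for doc in docs for w in doc.split()}
    priors = {y: math.log(labels.count(y) / len(labels)) for y in class_words}
    likelihood = {}
    for y, words in class_words.items():
        counts = Counter(words)
        total = len(words) + len(vocab)          # add-one smoothing denominator
        likelihood[y] = {w: math.log((counts[w] + 1) / total) for w in vocab}
    return priors, likelihood

def predict(doc, priors, likelihood):
    # naive independence: log-prior plus summed per-word log-likelihoods
    scores = {y: priors[y] + sum(likelihood[y].get(w, 0.0) for w in doc.split())
              for y in priors}
    return max(scores, key=scores.get)

docs = ["cheap viagra offer", "meeting agenda today", "cheap offer now", "project meeting notes"]
labels = ["spam", "ham", "spam", "ham"]
priors, likelihood = train_nb(docs, labels)
print(predict("cheap offer", priors, likelihood))  # spam
```

This is the multinomial event model; the survey also covers the Bernoulli event model, which scores word presence/absence rather than counts.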
40,905 | Near-Optimal Evasion of Convex-Inducing Classifiers | cs.LG | Classifiers are often used to detect miscreant activities. We study how an
adversary can efficiently query a classifier to elicit information that allows
the adversary to evade detection at near-minimal cost. We generalize results of
Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that
construct undetected instances of near-minimal cost using only polynomially
many queries in the dimension of the space and without reverse engineering the
decision boundary. | computer science |
40,906 | Unbeatable Imitation | cs.GT | We show that for many classes of symmetric two-player games, the simple
decision rule "imitate-the-best" can hardly be beaten by any other decision
rule. We provide necessary and sufficient conditions for imitation to be
unbeatable and show that it can only be beaten by much in games that are of the
rock-scissors-paper variety. Thus, in many interesting examples, like 2x2
games, Cournot duopoly, price competition, rent seeking, public goods games,
common pool resource games, minimum effort coordination games, arms race,
search, bargaining, etc., imitation cannot be beaten by much even by a very
clever opponent. | computer science |
40,907 | Spoken Language Identification Using Hybrid Feature Extraction Methods | cs.SD | This paper introduces and motivates the use of hybrid robust feature
extraction techniques for a spoken language identification (LID) system. The
speech recognizers use a parametric form of a signal to get the most important
distinguishable features of speech signal for recognition task. In this paper
Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction
coefficients (PLP) along with two hybrid features are used for language
Identification. Two hybrid features, Bark Frequency Cepstral Coefficients
(BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) were
obtained from combination of MFCC and PLP. Two different classifiers, Vector
Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model
(GMM), were used for classification. The experiments show a better
identification rate using hybrid feature extraction techniques compared to
conventional feature extraction methods. BFCC has shown better performance than
MFCC with both classifiers, and RPLP together with GMM has shown the best
identification performance among all feature extraction techniques. | computer science |
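The VQ classifier above relies on dynamic time warping to compare feature sequences of different lengths; a minimal sketch of the classic DTW distance on 1-D sequences (the actual MFCC/PLP feature extraction is omitted):

```python
import numpy as np

def dtw(x, y):
    """Classic dynamic time warping distance between two 1-D sequences,
    via the standard cumulative-cost dynamic program."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three predecessor alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [1, 2, 3, 4]
b = [1, 1, 2, 3, 4]      # same shape, stretched in time
c = [4, 3, 2, 1]         # reversed shape
print(dtw(a, b), dtw(a, c))  # 0.0 8.0
```

In an LID system, `x` and `y` would be frame-wise feature vectors and `cost` a vector distance, but the warping recursion is identical.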
40,908 | Wavelet-Based Mel-Frequency Cepstral Coefficients for Speaker
Identification using Hidden Markov Models | cs.SD | To improve the performance of speaker identification systems, an effective
and robust method is proposed to extract speech features, capable of operating
in noisy environment. Based on the time-frequency multi-resolution property of
wavelet transform, the input speech signal is decomposed into various frequency
channels. For capturing the characteristic of the signal, the Mel-Frequency
Cepstral Coefficients (MFCCs) of the wavelet channels are calculated. Hidden
Markov Models (HMMs) were used for the recognition stage as they give better
recognition for the speaker's features than Dynamic Time Warping (DTW).
Comparison of the proposed approach with the MFCCs conventional feature
extraction method shows that the proposed method not only effectively reduces
the influence of noise, but also improves recognition. A recognition rate of
99.3% was obtained using the proposed feature extraction technique compared to
98.7% using the MFCCs. When the test patterns were corrupted by additive white
Gaussian noise with 20 dB S/N ratio, the recognition rate was 97.3% using the
proposed method compared to 93.3% using the MFCCs. | computer science |
40,909 | Additive Non-negative Matrix Factorization for Missing Data | cs.NA | Non-negative matrix factorization (NMF) has previously been shown to be a
useful decomposition for multivariate data. We interpret the factorization in a
new way and use it to generate missing attributes from test data. We provide a
joint optimization scheme for the missing attributes as well as the NMF
factors. We prove the monotonic convergence of our algorithms. We present
classification results for cases with missing attributes. | computer science |
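A sketch of NMF restricted to observed entries via masked multiplicative updates; this illustrates the missing-attribute idea, not necessarily the paper's exact joint optimization scheme, and the toy data, rank, and iteration count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((6, 2)) @ rng.random((2, 4))   # rank-2 non-negative data matrix
M = rng.random(V.shape) > 0.2                 # binary mask: ~80% of entries observed
W = rng.random((6, 2)) + 0.1                  # non-negative factors, rank 2
H = rng.random((2, 4)) + 0.1

for _ in range(1000):
    # multiplicative updates for the Frobenius objective on observed entries only;
    # the mask zeroes out missing entries in both numerator and denominator
    H *= (W.T @ (M * V)) / (W.T @ (M * (W @ H)) + 1e-9)
    W *= ((M * V) @ H.T) / ((M * (W @ H)) @ H.T + 1e-9)

err = np.abs(M * (V - W @ H)).mean()
print(err)  # small residual on the observed entries
```

Once `W` and `H` are fitted, the product `W @ H` supplies estimates for the masked (missing) attributes, which is how the factorization "generates" missing data at test time.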
40,910 | Online Algorithms for the Multi-Armed Bandit Problem with Markovian
Rewards | math.OC | We consider the classical multi-armed bandit problem with Markovian rewards.
When played, an arm changes its state in a Markovian fashion, while it remains
frozen when not played. The player receives a state-dependent reward each time
it plays an arm. The number of states and the state transition probabilities of
an arm are unknown to the player. The player's objective is to maximize its
long-term total reward by learning the best arm over time. We show that under
certain conditions on the state transition probabilities of the arms, a sample
mean based index policy achieves logarithmic regret uniformly over the total
number of trials. The result shows that sample mean based index policies can be
applied to learning problems under the rested Markovian bandit model without
loss of optimality in the order. Moreover, a comparison between Anantharam's
index policy and UCB shows that, by choosing a small exploration parameter, UCB
can have a smaller regret than Anantharam's index policy. | computer science |
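The abstract compares sample-mean index policies with UCB; a minimal sketch of the standard UCB1 index on i.i.d. Bernoulli arms (the rested Markovian setting of the paper would need state-dependent reward estimates; the arm means and horizon here are illustrative):

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """UCB1 on Bernoulli arms: pull each arm once, then always pull the arm
    maximizing sample mean + sqrt(2 ln t / n_pulls)."""
    rng = random.Random(seed)
    k = len(means)
    counts = [1] * k
    sums = [float(rng.random() < m) for m in means]   # one initial pull per arm
    for t in range(k + 1, horizon + 1):
        idx = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(k)]
        i = max(range(k), key=idx.__getitem__)        # arm with the largest index
        sums[i] += float(rng.random() < means[i])
        counts[i] += 1
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=5000)
print(counts)  # the 0.8 arm gets the vast majority of pulls
```

The confidence term shrinks as an arm is pulled more, so suboptimal arms are sampled only logarithmically often, matching the logarithmic-regret regime discussed above.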