title | abstract |
---|---|
frank-wolfe algorithm for the exact sparse problem | in this paper, we study the properties of the frank-wolfe algorithm to solve the exact sparse reconstruction problem. we prove that when the dictionary is quasi-incoherent, at each iteration, the frank-wolfe algorithm picks up an atom indexed by the support. we also prove that when the dictionary is quasi-incoherent, there exists an iteration beyond which the algorithm converges exponentially fast. |
computational solutions for bayesian inference in mixture models | this chapter surveys the most standard monte carlo methods available for simulating from a posterior distribution associated with a mixture and conducts some experiments about the robustness of the gibbs sampler in high dimensional gaussian settings. this is a chapter prepared for the forthcoming 'handbook of mixture analysis'. |
comparing spike and slab priors for bayesian variable selection | an important task in building regression models is to decide which regressors should be included in the final model. in a bayesian approach, variable selection can be performed using mixture priors with a spike and a slab component for the effects subject to selection. as the spike is concentrated at zero, variable selection is based on the probability of assigning the corresponding regression effect to the slab component. these posterior inclusion probabilities can be determined by mcmc sampling. in this paper we compare the mcmc implementations for several spike and slab priors with regard to posterior inclusion probabilities and their sampling efficiency for simulated data. further, we investigate posterior inclusion probabilities analytically for different slabs in two simple settings. application of variable selection with spike and slab priors is illustrated on a data set of psychiatric patients where the goal is to identify covariates affecting metabolism. |
limit theorems for filtered long-range dependent random fields | this article investigates general scaling settings and limit distributions of functionals of filtered random fields. the filters are defined by the convolution of non-random kernels with functions of gaussian random fields. the case of long-range dependent fields and increasing observation windows is studied. the obtained limit random processes are non-gaussian. most known results on this topic give asymptotic processes that always exhibit non-negative auto-correlation structures and have the self-similar parameter $h\in(\frac{1}{2},1)$. in this work we also obtain convergence for the case $h\in(0,\frac{1}{2})$ and show how the hurst parameter $h$ can depend on the shape of the observation windows. various examples are presented. |
a new time-varying model for forecasting long-memory series | in this work we propose a new class of long-memory models with time-varying fractional parameter. in particular, the dynamics of the long-memory coefficient, $d$, is specified through a stochastic recurrence equation driven by the score of the predictive likelihood, as suggested by creal et al. (2013) and harvey (2013). we demonstrate the validity of the proposed model by a monte carlo experiment and an application to two real time series. |
evaluating the squared-exponential covariance function in gaussian processes with integral observations | this paper deals with the evaluation of double line integrals of the squared exponential covariance function. we propose a new approach in which the double integral is reduced to a single integral using the error function. this single integral is then computed with efficiently implemented numerical techniques. the performance is compared against existing state of the art methods and the results show superior properties in numerical robustness and accuracy per computation time. |
a novel variational autoencoder with applications to generative modelling, classification, and ordinal regression | we develop a novel probabilistic generative model based on the variational autoencoder approach. notable aspects of our architecture are: a novel way of specifying the latent variables' prior, and the introduction of an ordinality-enforcing unit. we describe how to do supervised, unsupervised and semi-supervised learning, and nominal and ordinal classification, with the model. we analyze the generative properties of the approach, and the classification effectiveness under nominal and ordinal classification, using two benchmark datasets. our results show that our model can achieve results comparable to relevant baselines in both of the classification tasks. |
non-parametric clustering over user features and latent behavioral functions with dual-view mixture models | we present a dual-view mixture model to cluster users based on their features and latent behavioral functions. every component of the mixture model represents a probability density over a feature view for observed user attributes and a behavior view for latent behavioral functions that are indirectly observed through user actions or behaviors. our task is to infer the groups of users as well as their latent behavioral functions. we also propose a non-parametric version based on a dirichlet process to automatically infer the number of clusters. we test the properties and performance of the model on a synthetic dataset that represents the participation of users in the threads of an online forum. experiments show that dual-view models outperform single-view ones when one of the views lacks information. |
deep learning approach in automatic iceberg - ship detection with sar remote sensing data | deep learning is gaining traction with the geophysics community for understanding subsurface structures, such as detecting faults or salt bodies in seismic data. this study describes using a deep learning method for iceberg or ship recognition with synthetic aperture radar (sar) data. drifting icebergs pose a potential threat to offshore activities around the arctic, including both ship navigation and oil rigs. advances in satellite imagery using weather-independent cross-polarized radar have enabled us to monitor and delineate icebergs and ships; however, a human component is still needed to classify the images. here we present transfer learning with a convolutional neural network (cnn) designed to work with limited training data and features, and demonstrate its effectiveness on this problem. a key aspect of the approach is data augmentation and stacking of multiple outputs, which resulted in a significant boost in accuracy (logarithmic score of 0.1463). this algorithm has been tested through participation in the statoil/c-core kaggle competition. |
perturbation analysis of learning algorithms: a unifying perspective on generation of adversarial examples | despite the tremendous success of deep neural networks in various learning problems, it has been observed that adding an intentionally designed adversarial perturbation to inputs of these architectures leads to erroneous classification with high confidence in the prediction. in this work, we propose a general framework based on the perturbation analysis of learning algorithms which consists of convex programming and is able to recover many current adversarial attacks as special cases. the framework can be used to propose novel attacks against learning algorithms for classification and regression tasks under various new constraints with closed form solutions in many instances. in particular we derive new attacks against classification algorithms which are shown to achieve comparable performances to notable existing attacks. the framework is then used to generate adversarial perturbations for regression tasks which include single pixel and single subset attacks. by applying this method to autoencoding and image colorization tasks, it is shown that adversarial perturbations can effectively perturb the output of regression tasks as well. |
distill-net: application-specific distillation of deep convolutional neural networks for resource-constrained iot platforms | many internet-of-things (iot) applications demand fast and accurate understanding of a few key events in their surrounding environment. deep convolutional neural networks (cnns) have emerged as an effective approach to understand speech, images, and similar high dimensional data types. algorithmic performance of modern cnns, however, fundamentally relies on learning class-agnostic hierarchical features that only exist in comprehensive training datasets with many classes. as a result, fast inference using cnns trained on such datasets is prohibitive for most resource-constrained iot platforms. to bridge this gap, we present a principled and practical methodology for distilling a complex modern cnn that is trained to effectively recognize many different classes of input data into an application-dependent essential core that not only recognizes the few classes of interest to the application accurately, but also runs efficiently on platforms with limited resources. experimental results confirm that our approach strikes a favorable balance between classification accuracy (application constraint), inference efficiency (platform constraint), and productive development of new applications (business constraint). |
twins: two weighted inconsistency-reduced networks for partial domain adaptation | the task of unsupervised domain adaptation is proposed to transfer the knowledge of a label-rich domain (source domain) to a label-scarce domain (target domain). matching feature distributions between different domains is a widely applied method for the aforementioned task. however, the method does not perform well when classes in the two domains are not identical. specifically, when the classes of the target correspond to a subset of those of the source, target samples can be incorrectly aligned with the classes that exist only in the source. this problem setting is termed partial domain adaptation (pda). in this study, we propose a novel method called two weighted inconsistency-reduced networks (twins) for pda. we utilize two classification networks to estimate the ratio of the target samples in each class, with which a classification loss is weighted to adapt to the classes present in the target domain. furthermore, to extract discriminative features for the target, we propose to minimize the divergence between domains measured by the classifiers' inconsistency on target samples. we empirically demonstrate that reducing the inconsistency between two networks is effective for pda and that our method outperforms other existing methods by a large margin on several datasets. |
a residual for outlier identification in zero adjusted regression models | zero adjusted regression models are used to fit variables that are discrete at zero and continuous at some interval of the positive real numbers. diagnostic analysis in these models is usually performed using the randomized quantile residual, which is useful for checking the overall adequacy of a zero adjusted regression model. however, it may fail to identify some outliers. in this work, we introduce a residual for outlier identification in zero adjusted regression models. monte carlo simulation studies and an application suggest that the residual introduced here has good properties and detects outliers that are not identified by the randomized quantile residual. |
an improved deep belief network model for road safety analyses | crash prediction is a critical component of road safety analyses. a widely adopted approach to crash prediction is the application of regression-based techniques. the underlying calibration process is often time-consuming, requiring significant domain knowledge and expertise, and cannot be easily automated. this paper introduces a new machine learning (ml) based approach as an alternative to the traditional techniques. the proposed ml model is called regularized deep belief network, which is a deep neural network with two training steps: it is first trained using an unsupervised learning algorithm and then fine-tuned by initializing a bayesian neural network with the trained weights from the first step. the resulting model is expected to have improved prediction power and a reduced need for time-consuming human intervention. in this paper, we attempt to demonstrate the potential of this new model for crash prediction through two case studies, including a collision data set from an 800 km stretch of highway 401 and other highways in ontario, canada. our intention is to show the performance of this ml approach in comparison to various traditional models including the negative binomial (nb) model, kernel regression (kr), and bayesian neural network (bayesian nn). we also attempt to address other related issues such as the effect of training data size and training parameters. |
kriging riemannian data via random domain decompositions | data taking value on a riemannian manifold and observed over a complex spatial domain are becoming more frequent in applications, e.g. in environmental sciences and in geoscience. the analysis of these data needs to rely on local models to account for the non-stationarity of the generating random process, the non-linearity of the manifold and the complex topology of the domain. in this paper, we propose to use a random domain decomposition approach to estimate an ensemble of local models and then to aggregate the predictions of the local models through fr\'{e}chet averaging. the algorithm is introduced in complete generality and is valid for data belonging to any smooth riemannian manifold, but it is then described in detail for the case of the manifold of positive definite matrices, the hypersphere and the cholesky manifold. the predictive performance of the method is explored via simulation studies for covariance matrices and correlation matrices, where the cholesky manifold geometry is used. finally, the method is illustrated on an environmental dataset observed over the chesapeake bay (usa). |
domain adaptation for reinforcement learning on the atari | deep reinforcement learning agents have recently been successful across a variety of discrete and continuous control tasks; however, they can be slow to train and require a large number of interactions with the environment to learn a suitable policy. this is borne out by the fact that a reinforcement learning agent has no prior knowledge of the world, no pre-existing data to depend on and so must devote considerable time to exploration. transfer learning can alleviate some of the problems by leveraging learning done on some source task to help learning on some target task. our work presents an algorithm for initialising the hidden feature representation of the target task. we propose a domain adaptation method to transfer state representations and demonstrate transfer across domains, tasks and action spaces. we utilise adversarial domain adaptation ideas combined with an adversarial autoencoder architecture. we align our new policies' representation space with a pre-trained source policy, taking target task data generated from a random policy. we demonstrate that this initialisation step provides significant improvement when learning a new reinforcement learning task, which highlights the wide applicability of adversarial adaptation methods; even as the task and label/action space also changes. |
wavelet screaming: a novel approach to analyzing gwas data | we present an alternative method for genome-wide association studies (gwas) that is more powerful than the regular gwas method for locus detection. the regular gwas method suffers from a substantial multiple-testing burden because of the millions of single nucleotide polymorphisms (snps) being tested simultaneously. furthermore, it does not consider the functional genetic effect on the response variable; i.e., it ignores more complex joint effects of nearby snps within a region. our proposed method screens the entire genome for associations using a sequential sliding-window approach based on wavelets. a sequence of snps represents a genetic signal, and for every screened region, we transform the genetic signal into the wavelet space. we then estimate the proportion of wavelet coefficients associated with the phenotype at different scales. the significance of a region is assessed via simulations, taking advantage of a recent result on bayes factor distributions. our new approach reduces the number of independent tests to be performed. moreover, we show via simulations that the wavelet screaming method provides a substantial gain in power compared to the classic gwas modeling when faced with more complex signals than just single-snp associations. to demonstrate feasibility, we re-analyze data from the large norwegian harvest cohort. keywords: bayes factors, gwas, snp, multiple testing, polygenic |
anisotropic functional deconvolution with long-memory noise: the case of a multi-parameter fractional wiener sheet | we look into the minimax results for the anisotropic two-dimensional functional deconvolution model with the two-parameter fractional gaussian noise. we derive the lower bounds for the $l^p$-risk, $1 \leq p < \infty$, and taking advantage of the riesz poly-potential, we apply a wavelet-vaguelette expansion to de-correlate the anisotropic fractional gaussian noise. we construct an adaptive wavelet hard-thresholding estimator that attains asymptotically quasi-optimal convergence rates in a wide range of besov balls. such convergence rates depend on a delicate balance between the parameters of the besov balls, the degree of ill-posedness of the convolution operator and the parameters of the fractional gaussian noise. a limited simulation study confirms the theoretical claims of the paper. the proposed approach is extended to the general $r$-dimensional case, with $r > 2$, and the corresponding convergence rates do not suffer from the curse of dimensionality. |
a factorial mixture prior for compositional deep generative models | we assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. this paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. the prior treats a latent vector as belonging to a cartesian product of subspaces, each of which is quantized separately with a gaussian mixture model. some mixture components can be set to represent properties as observed random variables whenever labeled properties are present. through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated. |
solving the empirical bayes normal means problem with correlated noise | the normal means problem plays a fundamental role in many areas of modern high-dimensional statistics, both in theory and practice. and the empirical bayes (eb) approach to solving this problem has been shown to be highly effective, again both in theory and practice. however, almost all eb treatments of the normal means problem assume that the observations are independent. in practice correlations are ubiquitous in real-world applications, and these correlations can grossly distort eb estimates. here, exploiting theory from schwartzman (2010), we develop new eb methods for solving the normal means problem that take account of unknown correlations among observations. we provide practical software implementations of these methods, and illustrate them in the context of large-scale multiple testing problems and false discovery rate (fdr) control. in realistic numerical experiments our methods compare favorably with other commonly-used multiple testing methods. |
estimating the fundamental frequency using modified newton-raphson algorithm | in this paper, we propose a modified newton-raphson algorithm to estimate the frequency parameter in the fundamental frequency model in the presence of an additive stationary error. the proposed estimator is super efficient in nature in the sense that its asymptotic variance is less than the asymptotic variance of the least squares estimator. with a proper step factor modification, the proposed modified newton-raphson algorithm produces an estimator with the rate $o_p(n^{-\frac{3}{2}})$, the same rate as the least squares estimator. numerical experiments are performed for different sample sizes, different error variances and different models. for illustrative purposes, two real data sets are analyzed using the fundamental frequency model, and the estimators are obtained using the proposed algorithm. it is observed that the model and the proposed algorithm work quite well in both cases. |
hybrid estimation for ergodic diffusion processes based on noisy discrete observations | we consider parametric estimation for ergodic diffusion processes with noisy sampled data based on the hybrid method, that is, the multi-step estimation with the initial bayes type estimators. in order to select proper initial values for optimisation of the quasi likelihood function of ergodic diffusion processes with noisy observations, we construct the initial bayes type estimator based on the local means of the noisy observations. the asymptotic properties of the initial bayes type estimators and the hybrid multi-step estimators with the initial bayes type estimators are shown, and a concrete example and the simulation results are given. |
class augmented semi-supervised learning for practical clinical analytics on physiological signals | computational analysis on physiological signals would provide immense impact for enabling automated clinical analytics. however, the class imbalance issue where negative or minority class instances are rare in number impairs the robustness of the practical solution. the key idea of our approach is intelligent augmentation of minority class examples to construct smooth, unbiased decision boundary for robust semi-supervised learning. this solves the practical class imbalance problem in anomaly detection task for computational clinical analytics using physiological signals. we choose two critical cardiac marker physiological signals: heart sound or phonocardiogram (pcg) and electrocardiogram (ecg) to validate our claim of robust anomaly detection of clinical events under augmented class learning, where intelligent synthesis of minority class instances attempt to balance the class distribution. we perform extensive experiments on publicly available expert-labelled mit-physionet pcg and ecg datasets that establish high performance merit of the proposed scheme, and our scheme fittingly performs better than the state-of-the-art algorithms. |
direction finding based on multi-step knowledge-aided iterative conjugate gradient algorithms | in this work, we present direction-of-arrival (doa) estimation algorithms based on the krylov subspace that effectively exploit prior knowledge of the signals that impinge on a sensor array. the proposed multi-step knowledge-aided iterative conjugate gradient (cg) (ms-kai-cg) algorithms perform subtraction of the unwanted terms found in the estimated covariance matrix of the sensor data. furthermore, we develop a version of ms-kai-cg equipped with forward-backward averaging, called ms-kai-cg-fb, which is appropriate for scenarios with correlated signals. unlike current knowledge-aided methods, which take advantage of known doas to enhance the estimation of the covariance matrix of the input data, the ms-kai-cg algorithms take advantage of the knowledge of the structure of the forward-backward smoothed covariance matrix and its disturbance terms. simulations with both uncorrelated and correlated signals show that the ms-kai-cg algorithms outperform existing techniques. |
nips - not even wrong? a systematic review of empirically complete demonstrations of algorithmic effectiveness in the machine learning and artificial intelligence literature | objective: to determine the completeness of argumentative steps necessary to conclude effectiveness of an algorithm in a sample of current ml/ai supervised learning literature. data sources: papers published in the neural information processing systems (neurips, n\'ee nips) journal where the official record showed a 2017 year of publication. eligibility criteria: studies reporting a (semi-)supervised model, or pre-processing fused with (semi-)supervised models for tabular data. study appraisal: three reviewers applied the assessment criteria to determine argumentative completeness. the criteria were split into three groups, including: experiments (e.g. real and/or synthetic data), baselines (e.g. uninformed and/or state-of-the-art) and quantitative comparison (e.g. performance quantifiers with confidence intervals and formal comparison of the algorithm against baselines). results: of the 121 eligible manuscripts (from the sample of 679 abstracts), 99\% used real-world data and 29\% used synthetic data. 91\% of manuscripts did not report an uninformed baseline and 55\% reported a state-of-the-art baseline. 32\% reported confidence intervals for performance but none provided references or exposition for how these were calculated. 3\% reported formal comparisons. limitations: the use of one journal as the primary information source may not be representative of all ml/ai literature. however, the neurips conference is recognised to be amongst the top tier concerning ml/ai studies, so it is reasonable to consider its corpus to be representative of high-quality research. conclusion: using the 2017 sample of the neurips supervised learning corpus as an indicator for the quality and trustworthiness of current ml/ai research, it appears that complete argumentative chains in demonstrations of algorithmic effectiveness are rare. |
entropy-constrained training of deep neural networks | we propose a general framework for neural network compression that is motivated by the minimum description length (mdl) principle. for that we first derive an expression for the entropy of a neural network, which measures its complexity explicitly in terms of its bit-size. then, we formalize the problem of neural network compression as an entropy-constrained optimization objective. this objective generalizes many of the compression techniques proposed in the literature, in that pruning or reducing the cardinality of the weight elements of the network can be seen as special cases of entropy-minimization techniques. furthermore, we derive a continuous relaxation of the objective, which allows us to minimize it using gradient-based optimization techniques. finally, we show that we can reach state-of-the-art compression results on different network architectures and data sets, e.g. achieving x71 compression gains on a vgg-like architecture. |
xor_p: a maximally intertwined p-classes problem used as a benchmark with built-in truth for neural networks gradient descent optimization | a natural p-classes generalization of the exclusive or problem, the subtraction modulo p, where p is prime, is presented and solved using a single fully connected hidden layer with p neurons. although the problem is very simple, the landscape is intricate and challenging and represents an interesting benchmark for gradient descent optimization algorithms. testing 9 optimizers and 9 activation functions up to p = 191, we find that the method converging most often and the fastest to a perfect classification is the adam optimizer combined with the elu activation function. |
generative one-shot learning (gol): a semi-parametric approach to one-shot learning in autonomous vision | highly autonomous driving (had) systems rely on deep neural networks for the visual perception of the driving environment. such networks are trained on large manually annotated databases. in this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perception systems used in autonomous driving. the proposed generative framework, coined generative one-shot learning (gol), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. new synthetic data is generated as pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. gol has been evaluated on environment perception challenges encountered in autonomous vision. |
uniform convergence bounds for codec selection | we frame the problem of selecting an optimal audio encoding scheme as a supervised learning task. through uniform convergence theory, we guarantee approximately optimal codec selection while controlling for selection bias. we present rigorous statistical guarantees for the codec selection problem that hold for arbitrary distributions over audio sequences and for arbitrary quality metrics. our techniques can thus balance sound quality and compression ratio, and use audio samples from the distribution to select a codec that performs well on that particular type of data. the applications of our technique are immense, as it can be used to optimize for quality and bandwidth usage of streaming and other digital media, while significantly outperforming approaches that apply a fixed codec to all data sources. |
deep transfer learning for static malware classification | we propose to apply deep transfer learning from computer vision to static malware classification. in the transfer learning scheme, we borrow knowledge from natural images or objects and apply it to the target domain of static malware detection. as a result, the training time of deep neural networks is accelerated while high classification performance is still maintained. we demonstrate the effectiveness of our approach in three experiments and show that our proposed method outperforms other classical machine learning methods measured in accuracy, false positive rate, true positive rate and $f_1$ score (in binary classification). we add an interpretation component to the algorithm and provide interpretable explanations to enhance security practitioners' trust in the model. we further discuss a convex combination scheme of transfer learning and training from scratch for enhanced malware detection, and provide insights into the algorithmic interpretation of vision-based malware classification techniques. |
gp-cnas: convolutional neural network architecture search with genetic programming | convolutional neural networks (cnns) are effective at solving difficult problems like visual recognition, speech recognition and natural language processing. however, performance gain comes at the cost of laborious trial-and-error in designing deeper cnn architectures. in this paper, a genetic programming (gp) framework for convolutional neural network architecture search, abbreviated as gp-cnas, is proposed to automatically search for optimal cnn architectures. gp-cnas encodes cnns as trees where leaf nodes (gp terminals) are selected residual blocks and non-leaf nodes (gp functions) specify the block assembling procedure. our tree-based representation enables easy design and flexible implementation of genetic operators. specifically, we design a dynamic crossover operator that strikes a balance between exploration and exploitation, which emphasizes cnn complexity at early stage and cnn diversity at later stage. therefore, the desired cnn architecture with balanced depth and width can be found within limited trials. moreover, our gp-cnas framework is highly compatible with other manually-designed and nas-generated block types as well. experimental results on the cifar-10 dataset show that gp-cnas is competitive among the state-of-the-art automatic and semi-automatic nas algorithms. |
a general theory for large-scale curve time series via functional stability measure | modelling a large bundle of curves arises in a broad spectrum of real applications. however, existing literature relies primarily on the critical assumption of independent curve observations. in this paper, we provide a general theory for large-scale gaussian curve time series, where the temporal and cross-sectional dependence across multiple curve observations exists and the number of functional variables, $p,$ may be large relative to the number of observations, $n.$ we propose a novel functional stability measure for multivariate stationary processes based on their spectral properties and use it to establish some useful concentration bounds on the sample covariance matrix function. these concentration bounds serve as a fundamental tool for further theoretical analysis, in particular, for deriving nonasymptotic upper bounds on the errors of the regularized estimates in high dimensional settings. as {\it functional principal component analysis} (fpca) is one of the key techniques to handle functional data, we also investigate the concentration properties of the relevant estimated terms under an fpca framework. to illustrate with an important application, we consider {\it vector functional autoregressive models} and develop a regularization approach to estimate autoregressive coefficient functions under the sparsity constraint. using our derived nonasymptotic results, we investigate the theoretical properties of the regularized estimate in a "large $p,$ small $n$" regime. the finite sample performance of the proposed method is examined through simulation studies. |
universal successor features approximators | the ability of a reinforcement learning (rl) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. we focus on one aspect in particular, namely the ability to generalise to unseen tasks. parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; among its most common forms are universal value function approximators (uvfas). another way to generalise to new tasks is to exploit structure in the rl problem itself. generalised policy improvement (gpi) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (sfs). our proposed universal successor features approximators (usfas) combine the advantages of all of these, namely the scalability of uvfas, the instant inference of sfs, and the strong generalisation of gpi. we discuss the challenges involved in training a usfa, its generalisation properties and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person perspective three-dimensional environment. |
clustering-oriented representation learning with attractive-repulsive loss | the standard loss function used to train neural network classifiers, categorical cross-entropy (cce), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective. in this work, we propose clustering-oriented representation learning (corel) as an alternative to cce in the context of a generalized attractive-repulsive loss framework. corel has the consequence of building latent representations that collectively exhibit the quality of natural clustering within the latent space of the final hidden layer, according to a predefined similarity function. despite being simple to implement, corel variants outperform or perform equivalently to cce in a variety of scenarios, including image and news article classification using both feed-forward and convolutional neural networks. analysis of the latent spaces created with different similarity functions facilitates insights on the different use cases corel variants can satisfy, where the cosine-corel variant makes a consistently clusterable latent space, while gaussian-corel consistently obtains better classification accuracy than cce. |
deep variational sufficient dimensionality reduction | we consider the problem of sufficient dimensionality reduction (sdr), where the high-dimensional observation is transformed to a low-dimensional sub-space in which the information of the observations regarding the label variable is preserved. we propose dvsdr, a deep variational approach for sufficient dimensionality reduction. the deep structure in our model has a bottleneck that represents the low-dimensional embedding of the data. we explain the sdr problem using graphical models and use the framework of variational autoencoders to maximize the lower bound of the log-likelihood of the joint distribution of the observation and label. we show that such a maximization problem can be interpreted as solving the sdr problem. dvsdr can be easily adapted to the semi-supervised learning setting. in our experiments we show that dvsdr performs competitively on classification tasks while being able to generate novel data samples. |
machine learning for molecular dynamics on long timescales | molecular dynamics (md) simulation is widely used to analyze the properties of molecules and materials. most practical applications, such as comparison with experimental measurements, designing drug molecules, or optimizing materials, rely on statistical quantities, which may be prohibitively expensive to compute from direct long-time md simulations. classical machine learning (ml) techniques have already had a profound impact on the field, especially for learning low-dimensional models of the long-time dynamics and for devising more efficient sampling schemes for computing long-time statistics. novel ml methods have the potential to revolutionize long-timescale md and to obtain interpretable models. ml concepts such as statistical estimator theory, end-to-end learning, representation learning and active learning are highly interesting for the md researcher and will help to develop new solutions to hard md problems. with the aim of better connecting the md and ml research areas and spawning new research on this interface, we define the learning problems in long-timescale md, present successful approaches and outline some of the unsolved ml problems in this application field. |
molecular dynamics with neural-network potentials | molecular dynamics simulations are an important tool for describing the evolution of a chemical system with time. however, these simulations are inherently held back either by the prohibitive cost of accurate electronic structure theory computations or the limited accuracy of classical empirical force fields. machine learning techniques can help to overcome these limitations by providing access to potential energies, forces and other molecular properties modeled directly after an electronic structure reference at only a fraction of the original computational cost. the present text discusses several practical aspects of conducting machine learning driven molecular dynamics simulations. first, we study the efficient selection of reference data points on the basis of an active learning inspired adaptive sampling scheme. this is followed by the analysis of a machine-learning based model for simulating molecular dipole moments in the framework of predicting infrared spectra via molecular dynamics simulations. finally, we show that machine learning models can offer valuable aid in understanding chemical systems beyond a simple prediction of quantities. |
wasserstein covariance for multiple random densities | a common feature of methods for analyzing samples of probability density functions is that they respect the geometry inherent to the space of densities. once a metric is specified for this space, the fr\'echet mean is typically used to quantify and visualize the average density from the sample. for one-dimensional densities, the wasserstein metric is popular due to its theoretical appeal and interpretive value as an optimal transport metric, leading to the wasserstein-fr\'echet mean or barycenter as the mean density. we extend the existing methodology for samples of densities in two key directions. first, motivated by applications in neuroimaging, we consider dependent density data, where a $p$-vector of univariate random densities is observed for each sampling unit. second, we introduce a wasserstein covariance measure and propose intuitively appealing estimators for both fixed and diverging $p$, where the latter corresponds to continuously-indexed densities. we also give theory demonstrating consistency and asymptotic normality, while accounting for errors introduced in the unavoidable preparatory density estimation step. the utility of the wasserstein covariance matrix is demonstrated through applications to functional connectivity in the brain using functional magnetic resonance imaging data and to the secular evolution of mortality for various countries. |
cgam: an r package for the constrained generalized additive model | the cgam package contains routines to fit the generalized additive model where the components may be modeled with shape and smoothness assumptions. the main routine is cgam and nineteen symbolic routines are provided to indicate the relationship between the response and each predictor, which satisfies constraints such as monotonicity, convexity, their combinations, tree, and umbrella orderings. the user may specify constrained splines to fit the components for continuous predictors, and various types of orderings for the ordinal predictors. in addition, the user may specify parametrically modeled covariates. the set over which the likelihood is maximized is a polyhedral convex cone, and a least-squares solution is obtained by projecting the data vector onto the cone. for generalized models, the fit is obtained through iteratively re-weighted cone projections. the cone information criterion is provided and may be used to compare fits for combinations of variables and shapes. in addition, the routine wps implements monotone regression in two dimensions using warped-plane splines, without an additivity assumption. the graphical routine plotpersp will plot an estimated mean surface for a selected pair of predictors, given an object of either cgam or wps. this package is now available from the comprehensive r archive network at http://cran.r-project.org/package=cgam. |
a comparison of lstms and attention mechanisms for forecasting financial time series | while lstms show increasingly promising results for forecasting financial time series (fts), this paper seeks to assess if attention mechanisms can further improve performance. the hypothesis is that attention can help mitigate the long-term dependency problems experienced by lstm models. to test this hypothesis, the main contribution of this paper is the implementation of an lstm with attention. the benchmark lstm and the lstm with attention were compared, and both achieved reasonable performance of up to 60% on five stocks from kaggle's two sigma dataset. this comparative analysis demonstrates that an lstm with attention can indeed outperform standalone lstms, but further investigation is required as issues do arise with such model architectures. |
a robust estimation for the extended t-process regression model | a robust estimation and variable selection procedure is developed for the extended t-process regression model with functional data. statistical properties such as consistency of the estimators and predictions are obtained. numerical studies show that the proposed method performs well. |
training on art composition attributes to influence cyclegan art generation | i consider how to influence cyclegan, image-to-image translation, by using additional constraints from a neural network trained on art composition attributes. i show how i trained the art composition attributes network (acan) by incorporating domain knowledge based on the rules of art evaluation, and the result of applying each art composition attribute to apple2orange image translation. |
pathological voice classification using mel-cepstrum vectors and support vector machine | vocal disorders affect numerous patients all over the world. due to the inherent difficulty of diagnosing vocal disorders without sophisticated equipment and trained personnel, a number of patients remain undiagnosed. to alleviate the monetary cost of diagnosis, there has been a recent growth in the use of data analysis to accurately detect and diagnose individuals for a fraction of the cost. we propose a cheap, efficient and accurate model to diagnose whether a patient suffers from one of three vocal disorders on the femh 2018 challenge. |
modular meta-learning in abstract graph networks for combinatorial generalization | modular meta-learning is a new framework that generalizes to unseen datasets by combining a small set of neural modules in different ways. in this work we propose abstract graph networks: using graphs as abstractions of a system's subparts without a fixed assignment of nodes to system subparts, for which we would need supervision. we combine this idea with modular meta-learning to get a flexible framework with combinatorial generalization to new tasks built in. we then use it to model the pushing of arbitrarily shaped objects from little or no training data. |
efficient treatment of model discrepancy by gaussian processes - importance for imbalanced multiple constraint inversions | mechanistic simulation models are inverted against observations in order to gain inference on modeled processes. however, with the increasing ability to collect high resolution observations, these observations represent more patterns of detailed processes that are not part of the modeling purpose. this mismatch results in model discrepancies, i.e. systematic differences between observations and model predictions. when discrepancies are not accounted for properly, posterior uncertainty is underestimated. furthermore, parameters are inferred so that model discrepancies appear in observation data streams with few records instead of in the data streams corresponding to the weak model parts. this impedes the identification of weak process formulations that need to be improved. therefore, we developed an efficient formulation to account for model discrepancy using the statistical model of gaussian processes (gp). this paper presents a new bayesian sampling scheme for model parameters and discrepancies, explains the effects of its application on inference by a basic example, and demonstrates applicability to a real world model-data integration study. the gp approach correctly identified model discrepancy in rich data streams. innovations in sampling allowed successful application to observation data streams of several thousand records. moreover, the proposed new formulation could be combined with gradient-based optimization. as a consequence, model inversion studies should acknowledge model discrepancies, especially when using multiple imbalanced data streams. to this end, studies can use the proposed gp approach to improve inference on model parameters and modeled processes. |
unifying topic, sentiment & preference in an hdp-based rating regression model for online reviews | this paper proposes a new hdp based online review rating regression model named topic-sentiment-preference regression analysis (tspra). tspra combines topics (i.e. product aspects), word sentiment and user preference as regression factors, and is able to perform topic clustering, review rating prediction, sentiment analysis and what we invent as "critical aspect" analysis altogether in one framework. tspra extends sentiment approaches by taking into consideration the key concept of "user preference" from collaborative filtering (cf) models, while it is distinct from current cf models by decoupling "user preference" and "sentiment" as independent factors. our experiments conducted on 22 amazon datasets show overwhelmingly better performance in rating prediction against a state-of-the-art model, flame (2015), in terms of error, pearson's correlation and number of inverted pairs. for sentiment analysis, we compare the derived word sentiments against a public sentiment resource senticnet3 and our sentiment estimations clearly make more sense in the context of online reviews. last, as a result of the de-correlation of "user preference" from "sentiment", tspra is able to evaluate a new concept "critical aspects", defined as the product aspects that users are seriously concerned about but that are negatively commented on in reviews. improving such "critical aspects" could be most effective in enhancing user experience. |
fast botnet detection from streaming logs using online lanczos method | a botnet, a group of coordinated bots, is becoming the main platform of malicious internet activities like ddos, click fraud, web scraping, spam/rumor distribution, etc. this paper focuses on the design and experimental evaluation of a new approach for botnet detection from streaming web server logs, motivated by its wide applicability, real-time protection capability, ease of use and better security of sensitive data. our algorithm is inspired by principal component analysis (pca) to capture correlation in data, and we are the first to recognize and adapt the lanczos method to improve the time complexity of pca-based botnet detection from cubic to sub-cubic, which enables us to more accurately and sensitively detect botnets with sliding time windows rather than fixed time windows. we contribute a generalized online correlation matrix update formula, and a new termination condition for lanczos iteration for our purpose based on error bound and non-decreasing eigenvalues of symmetric matrices. on our dataset of logs from an e-commerce website, experiments show that the time cost of the lanczos method with different time windows is consistently only 20% to 25% of that of pca. |
fast and accurate 3d medical image segmentation with data-swapping method | deep neural network models used for medical image segmentation are large because they are trained with high-resolution three-dimensional (3d) images. graphics processing units (gpus) are widely used to accelerate the training. however, the memory on a gpu is not large enough to train the models. a popular approach to tackling this problem is the patch-based method, which divides a large image into small patches and trains the models with these small patches. however, this method would degrade the segmentation quality if a target object spans multiple patches. in this paper, we propose a novel approach for 3d medical image segmentation that utilizes data-swapping, which swaps out intermediate data from gpu memory to cpu memory to enlarge the effective gpu memory size, for training high-resolution 3d medical images without patching. we carefully tuned parameters in the data-swapping method to obtain the best training performance for 3d u-net, a widely used deep neural network model for medical image segmentation. we applied our tuning to train 3d u-net with full-size images of 192 x 192 x 192 voxels in a brain tumor dataset. as a result, communication overhead, which is the most important issue, was reduced by 17.1%. compared with the patch-based method for patches of 128 x 128 x 128 voxels, our training for full-size images achieved improvement on the mean dice score by 4.48% and 5.32% for detecting the whole tumor sub-region and the tumor core sub-region, respectively. the total training time was reduced from 164 hours to 47 hours, resulting in a 3.53-times acceleration. |
on environmental contours for marine and coastal design | environmental contours are used in structural reliability analysis of marine and coastal structures as an approximate means to locate the boundary of the distribution of environmental variables, and hence sets of environmental conditions giving rise to extreme structural loads and responses. outline guidance concerning the application of environmental contour methods is given in recent design guidelines from many organisations. however, there is a lack of clarity concerning the differences between approaches to environmental contour estimation reported in the literature, and regarding the relationship between the environmental contour, corresponding to some return period, and the extreme structural response for the same period. hence there is uncertainty about precisely when environmental contours should be used, and how they should be used well. this article seeks to provide some assistance in understanding the fundamental issues regarding environmental contours and their use in structural reliability analysis. approaches to estimating the joint distribution of environmental variables, and to estimating environmental contours based on that distribution, are described. simple software for estimation of the joint distribution, and hence environmental contours, is illustrated (and is freely available from the authors). extra assumptions required to relate the characteristics of environmental contour to structural failure are outlined. alternative response-based methods not requiring environmental contours are summarised. the results of an informal survey of the metocean user community regarding environmental contours are presented. finally, recommendations about when and how environmental contour methods should be used are made. |
interpretable preference learning: a game theoretic framework for large margin on-line feature and rule learning | a large body of research is currently investigating the connection between machine learning and game theory. in this work, game theory notions are injected into a preference learning framework. specifically, a preference learning problem is seen as a two-player zero-sum game. an algorithm is proposed to incrementally include new useful features into the hypothesis. this can be particularly important when dealing with a very large number of potential features like, for instance, in relational learning and rule extraction. a game theoretical analysis is used to demonstrate the convergence of the algorithm. furthermore, leveraging the natural analogy between features and rules, the resulting models can be easily interpreted by humans. an extensive set of experiments on classification tasks shows the effectiveness of the proposed method in terms of interpretability and feature selection quality, with accuracy at the state-of-the-art. |
an empirical evaluation of sketched svd and its application to leverage score ordering | the power of randomized algorithms in numerical methods has led to fast solutions which use the singular value decomposition (svd) as a core routine. however, given the large data size of modern datasets and the modest runtime of svd, most practical algorithms would require some form of approximation, such as sketching, when running svd. while these approximation methods satisfy many theoretical guarantees, we provide the first algorithmic implementations for sketch-and-solve svd problems on real-world, large-scale datasets. we provide a comprehensive empirical evaluation of these algorithms and provide guidelines on how to ensure accurate deployment to real-world data. as an application of sketched svd, we present sketched leverage score ordering, a technique for determining the ordering of data in the training of neural networks. our technique is based on the distributed computation of leverage scores using random projections. these computed leverage scores provide a flexible and efficient method to determine the optimal ordering of training data without manual intervention or annotations. we present empirical results on an extensive set of experiments across image classification, language sentiment analysis, and multi-modal sentiment analysis. our method is faster compared to standard randomized projection algorithms and shows improvements in convergence and results. |
an empirical study of generative models with encoders | generative adversarial networks (gans) are capable of producing high quality image samples. however, unlike variational autoencoders (vaes), gans lack encoders that provide the inverse mapping for the generators, i.e., encode images back to the latent space. in this work, we consider adversarially learned generative models that also have encoders. we evaluate models based on their ability to produce high quality samples and reconstructions of real images. our main contributions are twofold: first, we find that the baseline bidirectional gan (bigan) can be improved upon with the addition of an autoencoder loss, at the expense of an extra hyper-parameter to tune. second, we show that comparable performance to bigan can be obtained by simply training an encoder to invert the generator of a normal gan. |
the negative binomial beta prime regression model with cure rate | this paper introduces a cure rate survival model by assuming that the time to the event of interest follows a beta prime distribution and that the number of competing causes of the event of interest follows a negative binomial distribution. this model provides a novel alternative to the existing cure rate regression models due to its flexibility, as the beta prime model can exhibit greater levels of skewness and kurtosis than those of the gamma and inverse gaussian distributions. moreover, the hazard rate of this model can have an upside-down bathtub or an increasing shape. we approach both parameter estimation and local influence based on likelihood methods. in particular, three perturbation schemes are considered for local influence. numerical evaluation of the proposed model is performed by monte carlo simulations. to illustrate the practical potential of our model, we apply it to a real data set.
inference with hamiltonian sequential monte carlo simulators | the paper proposes a new monte-carlo simulator combining the advantages of sequential monte carlo simulators and hamiltonian monte carlo simulators. the result is a method that is robust to multimodality and complex shapes, suitable for inference in the presence of difficult likelihoods or target functions. several examples are provided.
training deep neural networks with 8-bit floating point numbers | the state-of-the-art hardware platforms for training deep neural networks (dnns) are moving from traditional single precision (32-bit) computations towards 16 bits of precision -- in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. however, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. here we demonstrate, for the first time, the successful training of dnns using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. in addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. the use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4x improved throughput over today's systems. |
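one of the key ideas named above, floating point stochastic rounding, is easy to state in isolation: round to a neighbouring representable value with probability proportional to the distance to the other neighbour, so that the rounding is unbiased in expectation. the numpy sketch below does this on a uniform grid as a toy stand-in for an 8-bit format; it is not the paper's number format or its chunk-based accumulation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step):
    """Round each entry of x to a multiple of `step`, choosing the upper neighbour
    with probability equal to the fractional distance from the lower one,
    so that the expected rounded value equals x."""
    scaled = np.asarray(x, dtype=float) / step
    lower = np.floor(scaled)
    frac = scaled - lower                          # distance to the lower grid point, in [0, 1)
    round_up = rng.random(scaled.shape) < frac     # round up with probability frac
    return (lower + round_up) * step

vals = np.full(100_000, 0.3)
print(stochastic_round(vals, 1.0).mean())          # ~0.3: unbiased even on a very coarse grid
print((np.round(vals / 1.0) * 1.0).mean())         # 0.0: nearest rounding loses the signal
```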
spatial-spectral regularized local scaling cut for dimensionality reduction in hyperspectral image classification | dimensionality reduction (dr) methods have attracted extensive attention to provide discriminative information and reduce the computational burden of the hyperspectral image (hsi) classification. however, the dr methods face many challenges due to limited training samples with high dimensional spectra. to address this issue, a graph-based spatial and spectral regularized local scaling cut (ssrlsc) for dr of hsi data is proposed. the underlying idea of the proposed method is to utilize the information from both the spectral and spatial domains to achieve better classification accuracy than its spectral domain counterpart. in ssrlsc, a guided filter is initially used to smoothen and homogenize the pixels of the hsi data in order to preserve the pixel consistency. this is followed by generation of between-class and within-class dissimilarity matrices in both spectral and spatial domains by regularized local scaling cut (rlsc) and neighboring pixel local scaling cut (nplsc) respectively. finally, we obtain the projection matrix by optimizing the updated spatial-spectral between-class and total-class dissimilarity. the effectiveness of the proposed dr algorithm is illustrated with two popular real-world hsi datasets. |
adam induces implicit weight sparsity in rectifier neural networks | in recent years, deep neural networks (dnns) have been applied to various machine learning tasks, including image recognition, speech recognition, and machine translation. however, large dnn models are needed to achieve state-of-the-art performance, exceeding the capabilities of edge devices. model reduction is thus needed for practical use. in this paper, we point out that deep learning automatically induces group sparsity of weights, in which all weights connected to an output channel (node) are zero, when training dnns under the following three conditions: (1) rectified-linear-unit (relu) activations, (2) an $l_2$-regularized objective function, and (3) the adam optimizer. next, we analyze this behavior both theoretically and experimentally, and propose a simple model reduction method: eliminate the zero weights after training the dnn. in experiments on mnist and cifar-10 datasets, we demonstrate the sparsity with various training setups. finally, we show that our method can efficiently reduce the model size and performs well relative to methods that use a sparsity-inducing regularizer.
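the three conditions listed above (relu activations, an l2 penalty, the adam optimizer) are concrete enough to probe directly. the sketch below is a minimal pytorch experiment in that spirit; the architecture, synthetic task, step count and pruning threshold are illustrative assumptions rather than the paper's setup, and the amount of sparsity observed will depend on them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2048, 100)                        # synthetic data (assumption, not the paper's setup)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()          # easy task, so many hidden units are unnecessary

model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),   # condition (1): ReLU activations
                      nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3,
                       weight_decay=1e-3)          # conditions (2)+(3): L2 penalty with Adam
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):                           # full-batch training, for simplicity
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Group sparsity check: hidden units whose incoming weights have all decayed to
# (near) zero can be removed after training, which is the model-reduction step above.
w = model[0].weight.detach()
dead = (w.abs().max(dim=1).values < 1e-3).sum().item()
print(f"{dead} of {w.shape[0]} hidden units have near-zero incoming weights")
```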
bayesian parameter estimation of misspecified models | fitting a simplifying model with several parameters to real data of complex objects is a highly nontrivial task, but enables the possibility to get insights into the object's physics. here, we present a method to infer the parameters of the model, the model error, as well as the statistics of the model error. this method relies on the use of many data sets in a simultaneous analysis in order to overcome the problems caused by the degeneracy between model parameters and model error. errors in the modeling of the measurement instrument can be absorbed in the model error, allowing for applications with complex instruments.
invariance, causality and robustness | we discuss recent work for causal inference and predictive robustness in a unifying way. the key idea relies on a notion of probabilistic invariance or stability: it opens up new insights for formulating causality as a certain risk minimization problem with a corresponding notion of robustness. the invariance itself can be estimated from general heterogeneous or perturbation data which frequently occur in present-day data collection. the novel methodology is potentially useful in many applications, offering more robustness and better `causal-oriented' interpretation than machine learning or estimation in standard regression or classification frameworks.
a novel large-scale ordinal regression model | ordinal regression (or) is a special multiclass classification problem where an order relation exists among the labels. in recent years, people have shared their opinions and sentimental judgments conveniently through social networks and e-commerce, so that plentiful large-scale or problems arise. however, few studies have focused on this kind of problem. nonparallel support vector ordinal regression (npsvor) is a svm-based or model, which learns a hyperplane for each rank by solving a series of independent sub-optimization problems and then ensembles those learned hyperplanes to predict. previous studies focused on its nonlinear case and obtained competitive testing performance, but its training is time-consuming, particularly for large-scale data. in this paper, we consider npsvor's linear case and design an efficient training method based on the dual coordinate descent method (dcd). to utilize the order information among labels in prediction, a new prediction function is also proposed. extensive comparative experiments on text or datasets indicate that the carefully implemented dcd is very suitable for training on large data.
multisource and multitemporal data fusion in remote sensing | the sharp and recent increase in the availability of data captured by different sensors combined with their considerably heterogeneous natures poses a serious challenge for the effective and efficient processing of remotely sensed data. such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2d/3d data to 4d data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. there are a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for the fusion of different modalities have expanded in different paths according to each research community. this paper brings together the advances of multisource and multitemporal data fusion approaches with respect to different research communities and provides a thorough and discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic by supplying sufficient detail and references. |
efficient logic architecture in training gradient boosting decision tree for high-performance and edge computing | this study proposes a logic architecture for the high-speed and power-efficient training of a gradient boosting decision tree model for binary classification. we implemented the proposed logic architecture on an fpga and compared training time and power efficiency with three general gbdt software libraries using cpu and gpu. the training speed of the logic architecture on the fpga was 26-259 times faster than the software libraries. the power efficiency of the logic architecture was 90-1,104 times higher than the software libraries. the results show that the logic architecture is well suited for high-performance and edge computing.
neuralwarp: time-series similarity with warping networks | research on time-series similarity measures has emphasized the need for elastic methods which align the indices of pairs of time series, and a plethora of non-parametric measures have been proposed for the task. on the other hand, deep learning approaches are dominant in closely related domains, such as learning image and text sentence similarity. in this paper, we propose \textit{neuralwarp}, a novel measure that models the alignment of time-series indices in a deep representation space, by modeling a warping function as an upper-level neural network between deeply-encoded time series values. experimental results demonstrate that \textit{neuralwarp} outperforms both non-parametric and un-warped deep models on a range of diverse real-life datasets.
stochastic comparisons of the largest claim amounts from two sets of interdependent heterogeneous portfolios | let $ x_{\lambda_1},\ldots,x_{\lambda_n}$ be dependent non-negative random variables and $y_i=i_{p_i} x_{\lambda_i}$, $i=1,\ldots,n$, where $i_{p_1},\ldots,i_{p_n}$ are independent bernoulli random variables independent of $x_{\lambda_i}$'s, with ${\rm e}[i_{p_i}]=p_i$, $i=1,\ldots,n$. in actuarial sciences, $y_i$ corresponds to the claim amount in a portfolio of risks. in this paper, we compare the largest claim amounts of two sets of interdependent portfolios, in the sense of usual stochastic order, when the variables in one set have the parameters $\lambda_1,\ldots,\lambda_n$ and $p_1,\ldots,p_n$ and the variables in the other set have the parameters $\lambda^{*}_1,\ldots,\lambda^{*}_n$ and $p^*_1,\ldots,p^*_n$. for illustration, we apply the results to some important models in actuarial science.
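as an illustration of the setup only (not of the paper's ordering results), the numpy sketch below simulates two portfolios of the form $y_i = i_{p_i} x_{\lambda_i}$, with dependence among the $x$'s induced by a gaussian copula, and compares the empirical survival functions of the largest claim amounts; the exponential marginals, the copula and all parameter values are assumptions made for the demo.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, n_sim = 5, 200_000

def largest_claims(lam, p, rho=0.5):
    """Simulate max_i Y_i with Y_i = I_{p_i} * X_{lam_i}, where the X's have
    exponential(lam_i) marginals made dependent via a Gaussian copula (common rho)."""
    cov = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)
    z = rng.multivariate_normal(np.zeros(n), cov, size=n_sim)
    x = -np.log(1.0 - norm.cdf(z)) / lam           # exponential marginals via the copula
    i = rng.random((n_sim, n)) < p                 # independent Bernoulli claim indicators
    return (i * x).max(axis=1)

m1 = largest_claims(np.array([1.0, 1.5, 2.0, 2.5, 3.0]), np.full(n, 0.6))
m2 = largest_claims(np.array([2.0, 2.0, 2.0, 2.0, 2.0]), np.full(n, 0.5))

# Pointwise comparison of empirical survival functions: dominance at every threshold
# is what the usual stochastic order requires (suggested by simulation, not proved).
for t in [0.5, 1.0, 2.0, 4.0]:
    print(t, (m1 > t).mean(), (m2 > t).mean())
```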
on the variance of internode distance under the multispecies coalescent | we consider the problem of estimating species trees from unrooted gene tree topologies in the presence of incomplete lineage sorting, a common phenomenon that creates gene tree heterogeneity in multilocus datasets. one popular class of reconstruction methods in this setting is based on internode distances, i.e. the average graph distance between pairs of species across gene trees. while statistical consistency in the limit of large numbers of loci has been established in some cases, little is known about the sample complexity of such methods. here we make progress on this question by deriving a lower bound on the worst-case variance of internode distance which depends linearly on the corresponding graph distance in the species tree. we also discuss some algorithmic implications. |
feedforward neural network for time series anomaly detection | time series anomaly detection is usually formulated as finding outlier data points relative to some usual data, and it is an important problem in both industry and academia. to ensure that systems work stably, internet companies, banks and other companies need to monitor time series of key performance indicators (kpis), such as cpu usage, number of orders, number of online users and so on. however, millions of time series have several shapes (e.g. seasonal kpis, kpis of timed tasks and kpis of cpu usage), so it is very difficult to use a simple statistical model to detect anomalies for all kinds of time series. although some anomaly detectors have been developed over many years and some supervised models are also available in this field, we find that many methods have their own disadvantages. in this paper, we present our system, which is based on a deep feedforward neural network and detects anomalous points of time series. the main difference between our system and other systems based on supervised models is that we do not need feature engineering of time series to train the deep feedforward neural network in our system, which is essentially an end-to-end system.
low-rank interaction with sparse additive effects model for large data frames | many applications of machine learning involve the analysis of large data frames, i.e. matrices collecting heterogeneous measurements (binary, numerical, counts, etc.) across samples, with missing values. low-rank models, as studied by udell et al. [30], are popular in this framework for tasks such as visualization, clustering and missing value imputation. yet, available methods with statistical guarantees and efficient optimization do not allow explicit modeling of main additive effects such as row and column, or covariate effects. in this paper, we introduce a low-rank interaction and sparse additive effects (loris) model which combines matrix regression on a dictionary and low-rank design, to estimate main effects and interactions simultaneously. we provide statistical guarantees in the form of upper bounds on the estimation error of both components. then, we introduce a mixed coordinate gradient descent (mcgd) method which provably converges sub-linearly to an optimal solution and is computationally efficient for large scale data sets. we show on simulated and survey data that the method has a clear advantage over current practices, which consist in dealing separately with additive effects in a preprocessing step.
block clustering of binary data with gaussian co-variables | the simultaneous grouping of rows and columns is an important technique that is increasingly used in large-scale data analysis. in this paper, we present a novel co-clustering method using co-variables in its construction. it is based on a latent block model taking into account the problem of grouping variables and clustering individuals by integrating information given by sets of co-variables. numerical experiments on simulated data sets and an application on real genetic data highlight the interest of this approach. |
a gravity model analysis of irish merchandise goods exports under brexit | we examine the effect of a hard brexit on irish exports using the ppml gravity model and irish exports data at the micro level. we find a hard brexit could reduce irish national income by over 9 billion euro annually, and the effect would be sustained most in the traditional sectors of agriculture and food production of the irish economy.
a method to facilitate cancer detection and type classification from gene expression data using a deep autoencoder and neural network | with the increased affordability and availability of whole-genome sequencing, large-scale and high-throughput gene expression is widely used to characterize diseases, including cancers. however, establishing specificity in cancer diagnosis using gene expression data continues to pose challenges due to the high dimensionality and complexity of the data. here we present models of deep learning (dl) and apply them to gene expression data for the diagnosis and categorization of cancer. in this study, we have developed two dl models using messenger ribonucleic acid (mrna) datasets available from the genomic data commons repository. our models achieved 98% accuracy in cancer detection, with false negative and false positive rates below 1.7%. in our results, we demonstrated that 18 out of 32 cancer-typing classifications achieved more than 90% accuracy. due to the limitation of a small sample size (less than 50 observations), certain cancers could not achieve a higher accuracy in typing classification, but still achieved high accuracy for the cancer detection task. to validate our models, we compared them with traditional statistical models. the main advantage of our models over traditional cancer detection is the ability to use data from various cancer types to automatically form features to enhance the detection and diagnosis of a specific cancer type. |
robust estimation of causal effects via high-dimensional covariate balancing propensity score | in this paper, we propose a robust method to estimate the average treatment effects in observational studies when the number of potential confounders is possibly much greater than the sample size. we first use a class of penalized m-estimators for the propensity score and outcome models. we then calibrate the initial estimate of the propensity score by balancing a carefully selected subset of covariates that are predictive of the outcome. finally, the estimated propensity score is used to construct the inverse probability weighting estimator. we prove that the proposed estimator, which has the sample boundedness property, is root-n consistent, asymptotically normal, and semiparametrically efficient when the propensity score model is correctly specified and the outcome model is linear in covariates. more importantly, we show that our estimator remains root-n consistent and asymptotically normal so long as either the propensity score model or the outcome model is correctly specified. we provide valid confidence intervals in both cases and further extend these results to the case where the outcome model is a generalized linear model. in simulation studies, we find that the proposed methodology often estimates the average treatment effect more accurately than the existing methods. we also present an empirical application, in which we estimate the average causal effect of college attendance on adulthood political participation. open-source software is available for implementing the proposed methodology. |
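the final estimator described above is an inverse probability weighting estimator built from an estimated propensity score. the sketch below shows only that last step, using a plain logistic propensity model and a normalized (hajek-type) ipw estimator, which is sample bounded; the paper's penalized estimation and covariate-balancing calibration steps are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.normal(size=(n, p))
true_ps = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, true_ps)                       # treatment assignment
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)         # outcome; true ATE = 2

# Stand-in propensity model (the paper uses penalized M-estimation plus calibration).
ps = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]

# Normalized IPW estimator: weights sum to one within each arm, so the estimate
# always lies inside the range of observed outcomes (sample boundedness).
w1 = T / ps
w0 = (1 - T) / (1 - ps)
ate_hat = np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)
print("estimated ATE:", ate_hat)
```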
generalization error for decision problems | in this entry we review the generalization error for classification and single-stage decision problems. we distinguish three alternative definitions of the generalization error which have, at times, been conflated in the statistics literature and show that these definitions need not be equivalent even asymptotically. because the generalization error is a non-smooth functional of the underlying generative model, standard asymptotic approximations, e.g., the bootstrap or normal approximations, cannot guarantee correct frequentist operating characteristics without modification. we provide simple data-adaptive procedures that can be used to construct asymptotically valid confidence sets for the generalization error. we conclude the entry with a discussion of extensions and related problems. |
heteroscedastic gaussian processes for uncertainty modeling in large-scale crowdsourced traffic data | accurately modeling traffic speeds is a fundamental part of efficient intelligent transportation systems. nowadays, with the widespread deployment of gps-enabled devices, it has become possible to crowdsource the collection of speed information to road users (e.g. through mobile applications or dedicated in-vehicle devices). despite its rather wide spatial coverage, crowdsourced speed data also brings very important challenges, such as the highly variable measurement noise in the data due to a variety of driving behaviors and sample sizes. when not properly accounted for, this noise can severely compromise any application that relies on accurate traffic data. in this article, we propose the use of heteroscedastic gaussian processes (hgp) to model the time-varying uncertainty in large-scale crowdsourced traffic data. furthermore, we develop a hgp conditioned on sample size and traffic regime (src-hgp), which makes use of sample size information (probe vehicles per minute) as well as previous observed speeds, in order to more accurately model the uncertainty in observed speeds. using 6 months of crowdsourced traffic data from copenhagen, we empirically show that the proposed heteroscedastic models produce significantly better predictive distributions when compared to current state-of-the-art methods for both speed imputation and short-term forecasting tasks. |
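for reference, a heteroscedastic gp differs from a standard gp only in that the noise variance added to the kernel diagonal varies per observation (here, shrinking with the number of probe vehicles). the numpy sketch below shows the resulting predictive equations under an assumed squared-exponential kernel and a hand-specified noise model; it is not the src-hgp model or its inference scheme.

```python
import numpy as np

def sq_exp_kernel(a, b, ell=1.0, sf2=1.0):
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 60)                        # time axis (arbitrary units)
n_probe = rng.integers(1, 20, size=t.size)        # probe vehicles per minute (sample size)
noise_var = 4.0 / n_probe                         # heteroscedastic: fewer probes -> noisier speed
speed = 50 + 10 * np.sin(t) + rng.normal(scale=np.sqrt(noise_var))

# Per-observation noise enters only on the diagonal of the training covariance.
K = sq_exp_kernel(t, t, ell=1.5, sf2=25.0) + np.diag(noise_var)
t_star = np.linspace(0, 10, 200)
K_s = sq_exp_kernel(t_star, t, ell=1.5, sf2=25.0)

alpha = np.linalg.solve(K, speed - speed.mean())
mean_pred = speed.mean() + K_s @ alpha
var_pred = 25.0 - np.einsum('ij,ji->i', K_s, np.linalg.solve(K, K_s.T))
print(mean_pred[:3], var_pred[:3])                # predictive mean and latent variance
```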
a bayesian additive model for understanding public transport usage in special events | public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching the operators by surprise. although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. the problem is greatly exacerbated when several events happen concurrently. to solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like singapore, london or tokyo. this paper presents a bayesian additive model with gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the web. we develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. using real data from singapore, we show that the presented model outperforms the best baseline model by up to 26% in r^2 and also has explanatory power for its individual components.
a sequential density-based empirical likelihood ratio test for treatment effects | in health-related experiments, treatment effects can be identified using paired data that consist of pre- and post-treatment measurements. in this framework, sequential testing strategies are widely accepted statistical tools in practice. since performances of parametric sequential testing procedures vitally depend on the validity of the parametric assumptions regarding underlying data distributions, we focus on distribution-free mechanisms for sequentially evaluating treatment effects. in fixed sample size designs, the density-based empirical likelihood (dbel) methods provide powerful nonparametric approximations to optimal neyman-pearson type statistics. in this article, we extend the dbel methodology to develop a novel sequential dbel testing procedure for detecting treatment effects based on paired data. the asymptotic consistency of the proposed test is shown. an extensive monte carlo study confirms that the proposed test outperforms the conventional sequential wilcoxon signed-rank test across a variety of alternatives. the excellent applicability of the proposed method is exemplified using the ventilator-associated pneumonia study that evaluates the effect of chlorhexidine gluconate treatment in reducing oral colonization by pathogens in ventilated patients. |
decentralized decision-making over multi-task networks | in important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network. in this work we propose a distributed decision-making algorithm. the agents are assumed to observe data that may be generated by different models. through localized interactions, the agents reach agreement about which model to track and interact with each other in order to enhance the network performance. we investigate the approach for both static and mobile networks. the simulations illustrate the performance of the proposed strategies. |
accounting for selection bias due to death in estimating the effect of wealth shock on cognition for the health and retirement study | the health and retirement study is a longitudinal study of us adults enrolled at age 50 and older. we were interested in investigating the effect of a sudden large decline in wealth on the cognitive score of subjects. our analysis was complicated by the lack of randomization, confounding by indication, and the fact that a substantial fraction of the sample and population dies during follow-up, leading to some of our outcomes being censored. common methods to handle these problems, for example marginal structural models, may not be appropriate because they upweight subjects who are more likely to die so as to obtain a population that, over time, resembles the one that would have been obtained in the absence of death. we propose a refined approach that compares the treatment effect among subjects who would survive under both sets of treatment regimes being considered. we do so by viewing this as a large missing data problem and imputing the survival status and outcomes of the counterfactual. to improve the robustness of our imputation, we used a modified version of the penalized spline of propensity methods in treatment comparisons approach. we found that our proposed method worked well in various simulation scenarios and in our data analysis.
bayesian manifold-constrained-prior model for an experiment to locate xce | we propose an analysis for a novel experiment intended to locate the genetic locus xce (x-chromosome controlling element), which biases the stochastic process of x-inactivation in the mouse. x-inactivation bias is a phenomenon where cells in the embryo randomly choose one parental chromosome to inactivate, but show an average bias towards one parental strain. measurement of allele-specific gene-expression through pyrosequencing was conducted on mouse crosses of an uncharacterized parent with known carriers. our bayesian analysis is suitable for this adaptive experimental design, accounting for the biases and differences in precision among genes. model identifiability is facilitated by priors constrained to a manifold. we show that reparameterized slice-sampling can suitably tackle a general class of constrained priors. we demonstrate a physical model, based upon a "weighted-coin" hypothesis, that predicts x-inactivation ratios in untested crosses. this model suggests that xce alleles differ due to a process known as copy number variation, where stronger xce alleles are shorter sequences. |
multinomial goodness-of-fit based on u-statistics: high-dimensional asymptotic and minimax optimality | we consider multinomial goodness-of-fit tests in the high-dimensional regime where the number of bins increases with the sample size. in this regime, pearson's chi-squared test can suffer from low power due to the substantial bias as well as high variance of its statistic. to resolve these issues, we introduce a family of u-statistics for multinomial goodness-of-fit and study their asymptotic behaviors in high dimensions. specifically, we establish conditions under which the considered u-statistic is asymptotically poisson or gaussian, and investigate its power function under each asymptotic regime. furthermore, we introduce a class of weights for the u-statistic that results in minimax rate optimal tests.
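the abstract does not spell out the statistic, so the sketch below shows one standard u-statistic of this kind: an unbiased estimator of the squared l2 distance sum_j (q_j - p_j)^2 between the true bin probabilities q and the hypothesized p, computed from multinomial counts. the weighting scheme studied in the paper may differ.

```python
import numpy as np

def l2_gof_ustat(counts, p):
    """Unbiased estimate of sum_j (q_j - p_j)^2 from multinomial counts N_j.
    Unbiasedness follows from E[N_j(N_j-1)] = n(n-1) q_j^2 and E[N_j] = n q_j."""
    n = counts.sum()
    return np.sum(counts * (counts - 1)
                  - 2 * (n - 1) * counts * p
                  + n * (n - 1) * p ** 2) / (n * (n - 1))

rng = np.random.default_rng(0)
k, n = 500, 500                          # number of bins comparable to the sample size
p = np.full(k, 1.0 / k)                  # null: uniform over k bins

null_counts = rng.multinomial(n, p)
alt = p * (1 + 0.75 * rng.choice([-1, 1], size=k)); alt /= alt.sum()
alt_counts = rng.multinomial(n, alt)

print(l2_gof_ustat(null_counts, p))      # fluctuates around 0 under the null
print(l2_gof_ustat(alt_counts, p))       # positive in expectation under the alternative
```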
primal path algorithm for compositional data analysis | compositional data have two unique characteristics compared to typical multivariate data: the observed values are nonnegative and they sum exactly to one. to reflect these characteristics, a specific regularized regression model with linear constraints is commonly used. however, linear constraints incur additional computational time, which becomes severe in high-dimensional cases. as such, we propose an efficient solution path algorithm for an $l_1$-regularized regression with compositional data. the algorithm is then extended to a classification model with compositional predictors. we also compare its computational speed with that of previously developed algorithms and apply the proposed algorithm to analyze human gut microbiome data.
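the constrained regression referred to above is usually the log-contrast model: regress y on z = log(x) with an l1 penalty and the zero-sum constraint sum_j beta_j = 0 induced by compositionality. the sketch below solves a single point on that path with a generic convex solver (cvxpy), purely as a reference; it is not the proposed path algorithm.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.dirichlet(np.ones(p), size=n)          # compositional rows: nonnegative, summing to one
Z = np.log(X)                                  # log-contrast design
beta_true = np.zeros(p); beta_true[:2] = [1.0, -1.0]   # sparse, zero-sum truth
y = Z @ beta_true + 0.1 * rng.normal(size=n)

lam = 0.05
beta = cp.Variable(p)
objective = cp.Minimize(0.5 / n * cp.sum_squares(Z @ beta - y) + lam * cp.norm1(beta))
problem = cp.Problem(objective, [cp.sum(beta) == 0])   # linear constraint from compositionality
problem.solve()
print(np.round(beta.value, 3))
```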
the fdr-linking theorem | this paper introduces the \texttt{fdr-linking} theorem, a novel technique for understanding \textit{non-asymptotic} fdr control of the benjamini--hochberg (bh) procedure under arbitrary dependence of the $p$-values. this theorem offers a principled and flexible approach to linking all $p$-values and the null $p$-values from the fdr control perspective, suggesting a profound implication that, to a large extent, the fdr of the bh procedure relies mostly on the null $p$-values. to illustrate the use of this theorem, we propose a new type of dependence only concerning the null $p$-values, which, while strictly \textit{relaxing} the state-of-the-art prds dependence (benjamini and yekutieli, 2001), ensures the fdr of the bh procedure below a level that is independent of the number of hypotheses. this level is, furthermore, shown to be optimal under this new dependence structure. next, we present a concept referred to as \textit{fdr consistency} that is weaker but more amenable than fdr control, and the \texttt{fdr-linking} theorem shows that fdr consistency is completely determined by the joint distribution of the null $p$-values, thereby reducing the analysis of this new concept to the global null case. finally, this theorem is used to obtain a sharp fdr bound under arbitrary dependence, which improves the $\log$-correction fdr bound (benjamini and yekutieli, 2001) in certain regimes. |
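for reference, the bh procedure itself is short; the numpy sketch below implements the standard step-up rule at level alpha, so the object whose fdr is being analysed is concrete. it says nothing about the dependence conditions studied in the paper.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Standard BH step-up rule: reject the k smallest p-values, where k is the
    largest index i with p_(i) <= alpha * i / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    below = sorted_p <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(0)
p_null = rng.uniform(size=900)                 # true null p-values
p_alt = rng.beta(0.1, 5.0, size=100)           # signals concentrated near zero
print(benjamini_hochberg(np.concatenate([p_null, p_alt])).sum(), "rejections")
```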
non-adversarial image synthesis with generative latent nearest neighbors | unconditional image generation has recently been dominated by generative adversarial networks (gans). gan methods train a generator which regresses images from random noise vectors, as well as a discriminator that attempts to differentiate between the generated images and a training set of real images. gans have shown amazing results at generating realistic looking images. despite their success, gans suffer from critical drawbacks including: unstable training and mode-dropping. the weaknesses in gans have motivated research into alternatives including: variational auto-encoders (vaes), latent embedding learning methods (e.g. glo) and nearest-neighbor based implicit maximum likelihood estimation (imle). unfortunately at the moment, gans still significantly outperform the alternative methods for image generation. in this work, we present a novel method - generative latent nearest neighbors (glann) - for training generative models without adversarial training. glann combines the strengths of imle and glo in a way that overcomes the main drawbacks of each method. consequently, glann generates images that are far better than glo and imle. our method does not suffer from mode collapse which plagues gan training and is much more stable. qualitative results show that glann outperforms a baseline consisting of 800 gans and vaes on commonly used datasets. our models are also shown to be effective for training truly non-adversarial unsupervised image translation. |
stochastic doubly robust gradient | when training a machine learning model with observational data, it is often encountered that some values are systematically missing. learning from incomplete data in which the missingness depends on some covariates may lead to biased estimation of parameters and even harm the fairness of the decision outcome. this paper proposes how to adjust for the causal effect of covariates on the missingness when training models using stochastic gradient descent (sgd). inspired by the design of the doubly robust estimator and its theoretical property of double robustness, we introduce the stochastic doubly robust gradient (sdrg), consisting of two models: weight-corrected gradients for inverse propensity score weighting and per-covariate control variates for regression adjustment. also, we identify the connection between double robustness and variance reduction in sgd by demonstrating the sdrg algorithm within a unifying framework for variance-reduced sgd. the performance of our approach is empirically tested by showing the convergence in training image classifiers with several examples of missing data.
nadpex: an on-policy temporally consistent exploration method for deep reinforcement learning | reinforcement learning agents need exploratory behaviors to escape from local optima. these behaviors may include both immediate dithering perturbation and temporally consistent exploration. to achieve these, a stochastic policy model that is inherently consistent through a period of time is desirable, especially for tasks with either sparse rewards or long-term information. in this work, we introduce a novel on-policy temporally consistent exploration strategy, neural adaptive dropout policy exploration (nadpex), for deep reinforcement learning agents. modeled as a global random variable for the conditional distribution, dropout is incorporated into reinforcement learning policies, equipping them with inherent temporal consistency, even when the reward signals are sparse. two factors, the gradients' alignment with the objective and the kl constraint in policy space, are discussed to guarantee nadpex policy's stable improvement. our experiments demonstrate that nadpex solves tasks with sparse rewards while naive exploration and parameter noise fail. it yields comparable or even faster convergence in the standard mujoco benchmark for continuous control.
efficient calculation of the joint distribution of order statistics | we consider the problem of computing the joint distribution of order statistics of stochastically independent random variables in one- and two-group models. while recursive formulas for evaluating the joint cumulative distribution function of such order statistics have existed in the literature for a long time, their numerical implementation remains a challenging task. we tackle this task by presenting novel generalizations of known recursions which we utilize to obtain exact results (calculated in rational arithmetic) as well as faithfully rounded results. finally, some applications in stepwise multiple hypothesis testing are discussed.
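the abstract does not state the generalized recursions, so the sketch below implements one classical recursion for the simplest one-group case: the joint cdf p(u_(1) <= u_1, ..., u_(n) <= u_n) for n i.i.d. uniform(0,1) variables, computed by conditioning on how many observations fall below each successive threshold, and checked against monte carlo. it illustrates the kind of recursion meant, not the paper's generalization, and it uses ordinary floating point rather than rational arithmetic or faithful rounding.

```python
import numpy as np
from scipy.stats import binom

def joint_cdf_uniform_order_stats(u):
    """P(U_(1) <= u_1, ..., U_(n) <= u_n) for n iid Uniform(0,1) variables, u nondecreasing.
    g[m] holds the probability that exactly m observations lie below the current
    threshold and all order-statistic constraints so far are satisfied."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    g = np.zeros(n + 1); g[0] = 1.0
    prev = 0.0
    for j in range(1, n + 1):
        q = (u[j - 1] - prev) / (1.0 - prev) if prev < 1.0 else 0.0
        new_g = np.zeros(n + 1)
        for m_prev in range(j - 1, n + 1):
            if g[m_prev] == 0.0:
                continue
            k = np.arange(0, n - m_prev + 1)          # extra observations entering (prev, u_j]
            new_g[m_prev + k] += g[m_prev] * binom.pmf(k, n - m_prev, q)
        new_g[:j] = 0.0                               # enforce U_(j) <= u_j, i.e. at least j below
        g, prev = new_g, u[j - 1]
    return g[n]                                       # U_(n) <= u_n forces all n observations below

u = np.array([0.2, 0.5, 0.6, 0.9])
exact = joint_cdf_uniform_order_stats(u)
sims = np.sort(np.random.default_rng(0).uniform(size=(200_000, 4)), axis=1)
print(exact, np.mean(np.all(sims <= u, axis=1)))      # the two numbers should agree closely
```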
an integrated transfer learning and multitask learning approach for pharmacokinetic parameter prediction | background: pharmacokinetic evaluation is one of the key processes in drug discovery and development. however, current absorption, distribution, metabolism, excretion prediction models still have limited accuracy. aim: this study aims to construct an integrated transfer learning and multitask learning approach for developing quantitative structure-activity relationship models to predict four human pharmacokinetic parameters. methods: the pharmacokinetic dataset included 1104 u.s. fda approved small-molecule drugs. the dataset included four human pharmacokinetic parameter subsets (oral bioavailability, plasma protein binding rate, apparent volume of distribution at steady-state and elimination half-life). the pre-trained model was trained on over 30 million bioactivity data points. an integrated transfer learning and multitask learning approach was established to enhance model generalization. results: the pharmacokinetic dataset was split into three parts (60:20:20) for training, validation and testing by the improved maximum dissimilarity algorithm with the representative initial set selection algorithm and the weighted distance function. the multitask learning techniques enhanced the model's predictive ability. the integrated transfer learning and multitask learning model demonstrated the best accuracy, because deep neural networks have a general feature extraction ability while transfer learning and multitask learning improve model generalization. conclusions: the integrated transfer learning and multitask learning approach with the improved dataset splitting algorithm is introduced here for the first time to predict pharmacokinetic parameters. this method can be further employed in drug discovery and development.
ecological data analysis based on machine learning algorithms | classification is an important supervised machine learning method and a necessary but challenging task in ecological research. it offers a way to classify a dataset into subsets that share common patterns. notably, there are many classification algorithms to choose from, each making certain assumptions about the data and about how the classification should be performed. in this paper, we applied eight machine learning classification algorithms, namely decision trees, random forest, artificial neural network, support vector machine, linear discriminant analysis, k-nearest neighbors, logistic regression and naive bayes, to ecological data. the goal of this study is to compare different machine learning classification algorithms on an ecological dataset. in this analysis we compared the accuracy of the algorithms. we conclude that linear discriminant analysis and k-nearest neighbors are the best-performing methods among those considered.
uncertainty evaluation through data modelling for dimensional nanoscale measurements | a major bottleneck in nanoparticle measurements is the lack of comparability. comparability of measurement results is obtained by metrological traceability, which is obtained by calibration. in the present work the calibration of dimensional nanoparticle measurements is performed through the construction of a calibration curve by comparison of measured reference standards to their certified value. subsequently, a general approach is proposed to perform a measurement uncertainty evaluation for a measured quantity when no comprehensive physical model is available, by statistically modelling appropriately selected measurement data. the experimental data is collected so that the influence of relevant parameters can be assessed by fitting a mixed model to the data. furthermore, this model allows us to generate a probability density function (pdf) for the measured quantity concerned. applying this methodology to dimensional nanoparticle measurements leads to a pdf for a measured dimensional quantity of the nanoparticles. a pdf for the measurand, which is the certified counterpart of that measured dimensional quantity, can then be extracted by reporting a pdf for the measured dimensional quantity on the calibration curve. the pdf for the measurand captures its total measurement uncertainty. working in a fully bayesian framework is natural due to the intrinsic character of the quantity of interest: the distribution of size rather than the size of one single particle. the developed methodology is applied to the particular case where dimensional nanoparticle measurements are performed using an atomic force microscope (afm). the reference standards used to build a calibration curve are nano-gratings with step heights covering the application range of the calibration curve.
an evaluation of methods for real-time anomaly detection using force measurements from the turning process | we examined the use of three conventional anomaly detection methods and assessed their potential for on-line tool wear monitoring. through efficient data processing and transformation of the algorithm proposed here, these methods were tested for fast evaluation of cutting tools on cnc machines in a real-time environment. the three-dimensional force data streams we used were extracted from a turning experiment of 21 runs for which a tool was run until it generally satisfied an end-of-life criterion. our real-time anomaly detection algorithm was scored and optimised according to how precisely it can predict the progressive wear of the tool flank. most of our tool wear predictions were accurate and reliable, as illustrated in our off-line simulation results. particularly when the multivariate analysis was applied, the algorithm we developed was found to be very robust across different scenarios and against parameter changes. it should be reasonably easy to apply our approach elsewhere for real-time tool wear analytics.
learning to navigate the web | learning in environments with large state and action spaces, and sparse rewards, can hinder a reinforcement learning (rl) agent's learning through trial-and-error. for instance, following natural language instructions on the web (such as booking a flight ticket) leads to rl settings where the input vocabulary and the number of actionable elements on a page can grow very large. even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions. we approach the aforementioned problems from a different perspective and propose guided rl approaches that can generate an unbounded amount of experience for an agent to learn from. instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which an agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. in addition, when expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction following tasks and trains the agent more effectively. we train a dqn, a deep reinforcement learning agent, with the q-value function approximated by a novel qweb neural network architecture on these smaller, synthetic instructions. we evaluate the ability of our agent to generalize to new instructions on the world of bits benchmark, on forms with up to 100 elements, supporting 14 million possible instructions. the qweb agent outperforms the baseline without using any human demonstration, achieving a 100% success rate on several difficult environments.
pan-cancer epigenetic biomarker selection from blood samples using sas | a key focus in current cancer research is the discovery of cancer biomarkers that allow earlier detection with high accuracy and lower costs for both patients and hospitals. blood samples have long been used as a health status indicator, but dna methylation signatures in blood have not been fully appreciated in cancer research. historically, analysis of cancer has been conducted directly with the patient's tumor or related tissues. such analyses allow physicians to diagnose a patient's health and cancer status; however, physicians must observe certain symptoms that prompt them to use biopsies or imaging to verify the diagnosis. this is a post-hoc approach. our study will focus on epigenetic information for cancer detection, specifically information about dna methylation in human peripheral blood samples in cancer discordant monozygotic twin-pairs. this information might be able to help us detect cancer much earlier, before the first symptom appears. several other types of epigenetic data can also be used, but here we demonstrate the potential of blood dna methylation data as a biomarker for pan-cancer using sas 9.3 and sas em. we report that 55 methylation cpg sites measurable in blood samples can be used as biomarkers for early cancer detection and classification. |
automatic cry analysis and classification for infant pain assessment | the effectiveness of pain management relies on the choice and the correct use of suitable pain assessment tools. in the case of newborns, some of the most common tools are human-based and observational, thus affected by subjectivity and methodological problems. therefore, in the last years there has been an increasing interest in developing an automatic machine-based pain assessment tool. this research is a preliminary investigation towards the inclusion of a scoring system for the vocal expression of the infant into an automatic tool. to this aim we present a method to compute three correlated indicators which measure three distress-related features of the cry: duration, dysphonation and fundamental frequency of the first cry. in particular, we propose a new method to measure the dysphonation of the cry via spectral entropy analysis, resulting in an indicator that identifies three well-separated levels of distress in the vocal expression. these levels provide a classification that is highly correlated with the human-based assessment of the cry.
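a common way to quantify dysphonation is the spectral entropy of short frames: a phonated, nearly periodic cry has a peaky spectrum and low entropy, whereas a noisy, dysphonated cry has a flat spectrum and entropy close to its maximum. the numpy sketch below illustrates this; the frame length, window and normalization are illustrative choices and not necessarily the paper's settings.

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy of the normalized power spectrum of one frame, divided by
    log2(#bins) so that 1.0 corresponds to a perfectly flat (noise-like) spectrum."""
    psd = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]
    return -np.sum(psd * np.log2(psd)) / np.log2(len(psd))

fs = 8000
t = np.arange(0, 0.032, 1 / fs)                              # one 32 ms frame
phonated = np.sin(2 * np.pi * 450 * t)                       # clear fundamental (~450 Hz)
dysphonated = np.random.default_rng(0).normal(size=t.size)   # noise-like, aperiodic frame

print("phonated   :", spectral_entropy(phonated))            # low entropy
print("dysphonated:", spectral_entropy(dysphonated))         # close to 1
```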
multimodal sensor fusion in single thermal image super-resolution | with the fast growth in the visual surveillance and security sectors, thermal infrared images have become increasingly necessary in a large variety of industrial applications. this is true even though ir sensors are still more expensive than their rgb counterparts having the same resolution. in this paper, we propose a deep learning solution to enhance the thermal image resolution. the following results are given: (i) introduction of a multimodal, visual-thermal fusion model that addresses thermal image super-resolution via integrating high-frequency information from the visual image. (ii) investigation of different network architecture schemes in the literature, their up-sampling methods, learning procedures, and their optimization functions, showing their beneficial contribution to the super-resolution problem. (iii) a benchmark ulb17-vt dataset that contains thermal images and their visual image counterparts is presented. (iv) presentation of a qualitative evaluation on a large test set with 58 samples and 22 raters which shows that our proposed model performs better than the state of the art.
unsupervised speech recognition via segmental empirical output distribution matching | we consider the problem of training speech recognition systems without using any labeled data, under the assumption that the learner can only access the input utterances and a phoneme language model estimated from a non-overlapping corpus. we propose a fully unsupervised learning algorithm that alternates between solving two sub-problems: (i) learn a phoneme classifier for a given set of phoneme segmentation boundaries, and (ii) refine the phoneme boundaries based on a given classifier. to solve the first sub-problem, we introduce a novel unsupervised cost function named segmental empirical output distribution matching, which generalizes the work in (liu et al., 2017) to segmental structures. for the second sub-problem, we develop an approximate map approach to refining the boundaries obtained from wang et al. (2017). experimental results on the timit dataset demonstrate the success of this fully unsupervised phoneme recognition system, which achieves a phone error rate (per) of 41.6%. although it is still far from state-of-the-art supervised systems, we show that with oracle boundaries and a matching language model, the per can be improved to 32.5%. this performance approaches that of a supervised system with the same model architecture, demonstrating the great potential of the proposed method.
a fuzzy community-based recommender system using pagerank | recommendation systems are widely used by different user service providers, especially those that interact with a large community of users. this paper introduces a recommender system based on community detection. the recommendation is provided using the local and global similarities between users. the local information is obtained from communities, and the global information is based on the ratings. here, a new fuzzy community detection method using the personalized pagerank metaphor is introduced. the fuzzy membership values of the users to the communities are utilized to define a similarity measure. the method is evaluated using two well-known datasets: movielens and filmtrust. the results show that our method outperforms recent recommender systems.
distributed sequential method for analyzing massive data | to analyse a very large data set containing lengthy variables, we adopt a sequential estimation idea and propose a parallel divide-and-conquer method. we conduct several conventional sequential estimation procedures separately, and properly integrate their results while maintaining the desired statistical properties. additionally, using a criterion from the statistical experiment design, we adopt an adaptive sample selection, together with an adaptive shrinkage estimation method, to simultaneously accelerate the estimation procedure and identify the effective variables. we confirm the cogency of our methods through theoretical justifications and numerical results derived from synthesized data sets. we then apply the proposed method to three real data sets, including those pertaining to appliance energy use and particulate matter concentration. |
image embedding of pmu data for deep learning towards transient disturbance classification | this paper presents a study on power grid disturbance classification by deep learning (dl). a real synchrophasor data set consisting of three different types of disturbance events from the frequency monitoring network (fnet) is used. an image embedding technique called the gramian angular field is applied to transform each time series of event data into a two-dimensional image for learning. two main dl algorithms, i.e. cnn (convolutional neural network) and rnn (recurrent neural network), are tested and compared with two widely used data mining tools, the support vector machine and decision tree. the test results demonstrate the superiority of both dl algorithms over the other methods in the application of power system transient disturbance classification.
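the gramian angular field mentioned above has a simple closed form: rescale the series to [-1, 1], set phi_t = arccos(x_t), and form the matrix cos(phi_i + phi_j) (the summation variant, gasf). a numpy sketch, independent of the fnet data and the networks used in the paper:

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field of a 1-D series: an image of shape (T, T)."""
    x = np.asarray(x, dtype=float)
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1      # rescale to [-1, 1]
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))               # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])                  # GASF_{ij} = cos(phi_i + phi_j)

t = np.linspace(0, 1, 128)
event = np.sin(2 * np.pi * 60 * t) * np.exp(-5 * t)             # toy transient disturbance
image = gramian_angular_field(event)
print(image.shape)                                              # (128, 128) image fed to a CNN
```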
bi-clustering for time-varying relational count data analysis | relational count data are often obtained from sources such as simultaneous purchases in online shops and social networking service information. bi-clustering such relational count data reveals the latent structure of the relationship between objects such as household items or people. when relational count data observed at multiple time points are available, it is worthwhile incorporating the time structure into the bi-clustering result to understand how objects move between clusters over time. in this paper, we propose two bi-clustering methods for analyzing time-varying relational count data. the first model, the dynamic poisson infinite relational model (dpirm), handles time-varying relational count data. in the second model, which we call the dynamic zero-inflated poisson infinite relational model, we further extend the dpirm so that it can handle zero-inflated data. proposing both models is important, as zero-inflated data are often encountered, especially when the time intervals are short. in addition, by explicitly deriving the relevant full conditional distributions, we describe the features of the estimated parameters and, in turn, the relationship between the two models. we show the effectiveness of both models through a simulation study and a real data example.
differentiable supervector extraction for encoding speaker and phrase information in text dependent speaker verification | in this paper, we propose a new differentiable neural network alignment mechanism for text-dependent speaker verification which uses alignment models to produce a supervector representation of an utterance. unlike previous works with similar approaches, we do not extract the embedding of an utterance from the mean reduction of the temporal dimension. our system replaces the mean by a phrase alignment model to keep the temporal structure of each phrase which is relevant in this application since the phonetic information is part of the identity in the verification task. moreover, we can apply a convolutional neural network as front-end, and thanks to the alignment process being differentiable, we can train the whole network to produce a supervector for each utterance which will be discriminative with respect to the speaker and the phrase simultaneously. as we show, this choice has the advantage that the supervector encodes the phrase and speaker information providing good performance in text-dependent speaker verification tasks. in this work, the process of verification is performed using a basic similarity metric, due to simplicity, compared to other more elaborate models that are commonly used. the new model using alignment to produce supervectors was tested on the rsr2015-part i database for text-dependent speaker verification, providing competitive results compared to similar size networks using the mean to extract embeddings. |
random projection in deep neural networks | this work investigates the ways in which deep learning methods can benefit from random projection (rp), a classic linear dimensionality reduction method. we focus on two areas where, as we have found, employing rp techniques can improve deep models: training neural networks on high-dimensional data and initialization of network parameters. training deep neural networks (dnns) on sparse, high-dimensional data with no exploitable structure implies a network architecture with an input layer that has a huge number of weights, which often makes training infeasible. we show that this problem can be solved by prepending the network with an input layer whose weights are initialized with an rp matrix. we propose several modifications to the network architecture and training regime that make it possible to efficiently train dnns with a learnable rp layer on data with as many as tens of millions of input features and training examples. in comparison to the state-of-the-art methods, neural networks with an rp layer achieve competitive performance or improve the results on several extremely high-dimensional real-world datasets. the second area where the application of rp techniques can be beneficial for training deep models is weight initialization. setting the initial weights in dnns to elements of various rp matrices enabled us to train residual deep networks to higher levels of performance.
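as a minimal illustration of the first idea (an input layer whose weights are initialized with a random projection matrix), the pytorch sketch below uses a sparse achlioptas-style rp matrix; the layer sizes, the sparse scheme and keeping the layer trainable are assumptions, and the paper studies several variants.

```python
import torch
import torch.nn as nn

def achlioptas_rp(out_features, in_features):
    """Sparse random projection: entries are +s, 0, -s with probabilities 1/6, 2/3, 1/6,
    where s = sqrt(3 / out_features), which preserves pairwise distances in expectation."""
    s = (3.0 / out_features) ** 0.5
    u = torch.rand(out_features, in_features)
    return s * ((u < 1 / 6).float() - (u > 5 / 6).float())

d_in, d_proj = 50_000, 256                # high-dimensional input projected to a manageable width
rp_layer = nn.Linear(d_in, d_proj, bias=False)
with torch.no_grad():
    rp_layer.weight.copy_(achlioptas_rp(d_proj, d_in))   # RP matrix as the initial weights

model = nn.Sequential(rp_layer, nn.ReLU(), nn.Linear(d_proj, 2))
x = torch.zeros(4, d_in)                  # a few example inputs (dense here, sparse in practice)
print(model(x).shape)
```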