can i trust you more? model-agnostic hierarchical explanations
interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models. we propose mahé, a novel approach to provide model-agnostic hierarchical explanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances. specifically, mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors. experimental results show that mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully explains interactions that are context-free.
a tensor-based structural health monitoring approach for aeroservoelastic systems
structural health monitoring is a condition-based field of study utilised to monitor infrastructure via sensing systems. it is therefore used in the field of aerospace engineering to assist in monitoring the health of aerospace structures. a difficulty, however, is that structural health monitoring data usually come from sensor arrays and are therefore highly redundant and correlated, which traditional two-way matrix approaches have had difficulty deconstructing and interpreting. newer methods involving tensor analysis allow us to analyse this multi-way structural data in a coherent manner. in our approach, we demonstrate the usefulness of tensor-based learning for damage detection on a novel $n$-dof lagrangian aeroservoelastic model.
easicsdeep: a deep learning model for cervical spondylosis identification using surface electromyography signal
cervical spondylosis (cs) is a common chronic disease that affects up to two-thirds of the population and poses a serious burden on individuals and society. early identification has significant value in improving the cure rate and reducing costs. however, the pathology is complex, and the mild symptoms increase the difficulty of diagnosis, especially in the early stage. besides, the time and cost of hospital medical services reduce the attention given to cs identification. thus, a convenient, low-cost, intelligent cs identification method is urgently needed. in this paper, we present an intelligent method based on deep learning to identify cs using the surface electromyography (semg) signal. faced with the complexity, high dimensionality and weak usability of the semg signal, we propose and develop a multi-channel easicsdeep algorithm based on the convolutional neural network, which consists of feature extraction, spatial relationship representation and classification algorithms. to the best of our knowledge, easicsdeep is the first effort to employ deep learning and semg data to identify cs. compared with the previous state-of-the-art algorithm, our algorithm achieves a significant improvement.
generalized inverse xgamma distribution: a non-monotone hazard rate model
in this article, a generalized inverse xgamma distribution (gixgd) has been introduced as the generalized version of the inverse xgamma distribution. the proposed model exhibits a non-monotone hazard rate pattern and belongs to the family of positively skewed models. explicit expressions for some distributional properties, such as moments, inverse moments, conditional moments, mean deviation and the quantile function, have been derived. the maximum likelihood estimation procedure has been used to estimate the unknown model parameters as well as the survival characteristics of the gixgd. the practical applicability of the proposed model has been illustrated through survival data of guinea pigs.
the fluxcom ensemble of global land-atmosphere energy fluxes
although a key driver of earth's climate system, global land-atmosphere energy fluxes are poorly constrained. here we use machine learning to merge energy flux measurements from fluxnet eddy covariance towers with remote sensing and meteorological data to estimate net radiation, latent and sensible heat and their uncertainties. the resulting fluxcom database comprises 147 global gridded products in two setups: (1) 0.0833${\deg}$ resolution using modis remote sensing data (rs) and (2) 0.5${\deg}$ resolution using remote sensing and meteorological data (rs+meteo). within each setup we use a full factorial design across machine learning methods, forcing datasets and energy balance closure corrections. for rs and rs+meteo setups respectively, we estimate 2001-2013 global (${\pm}$ 1 standard deviation) net radiation as 75.8${\pm}$1.4 ${w\ m^{-2}}$ and 77.6${\pm}$2 ${w\ m^{-2}}$, sensible heat as 33${\pm}$4 ${w\ m^{-2}}$ and 36${\pm}$5 ${w\ m^{-2}}$, and evapotranspiration as 75.6${\pm}$10 ${\times}$ 10$^3$ ${km^3\ yr^{-1}}$ and 76${\pm}$6 ${\times}$ 10$^3$ ${km^3\ yr^{-1}}$. fluxcom products are suitable to quantify global land-atmosphere interactions and benchmark land surface model simulations.
bayesian deep neural networks for low-cost neurophysiological markers of alzheimer's disease severity
as societies around the world are ageing, the number of alzheimer's disease (ad) patients is rapidly increasing. to date, no low-cost, non-invasive biomarkers have been established to advance the objectivization of ad diagnosis and progression assessment. here, we utilize bayesian neural networks to develop a multivariate predictor for ad severity using a wide range of quantitative eeg (qeeg) markers. the bayesian treatment of neural networks both automatically controls model complexity and provides a predictive distribution over the target function, giving uncertainty bounds for our regression task. it is therefore well suited to clinical neuroscience, where data sets are typically sparse and practitioners require a precise assessment of the predictive uncertainty. we use data of one of the largest prospective ad eeg trials ever conducted to demonstrate the potential of bayesian deep learning in this domain, while comparing two distinct bayesian neural network approaches, i.e., monte carlo dropout and hamiltonian monte carlo.
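as a concrete illustration of the monte carlo dropout approach mentioned above, a minimal regression sketch is given below; it is not the authors' implementation, and the layer sizes, dropout rate and number of stochastic passes are arbitrary assumptions.

```python
# Illustrative sketch (not the authors' implementation): Monte Carlo dropout
# for regression with predictive uncertainty. Layer sizes, dropout rate and
# the number of stochastic forward passes are arbitrary assumptions.
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, n_in, n_hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Keep dropout active at test time and average over stochastic passes."""
    model.train()  # dropout stays on
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)  # predictive mean and uncertainty

# toy usage on random data (stand-in for qEEG markers -> severity score)
x = torch.randn(8, 16)
model = DropoutRegressor(n_in=16)
mean, std = mc_dropout_predict(model, x)
```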
distributed nearest neighbor classification
nearest neighbor is a popular nonparametric method for classification and regression with many appealing properties. in the big data era, the sheer volume and spatial/temporal disparity of big data may prohibit centrally processing and storing the data. this imposes a considerable hurdle for nearest neighbor predictions since the entire training data must be memorized. one effective way to overcome this issue is the distributed learning framework. through majority voting, the distributed nearest neighbor classifier achieves the same rate of convergence as its oracle version in terms of both regret and instability, up to a multiplicative constant that depends solely on the data dimension. the multiplicative difference can be eliminated by replacing majority voting with a weighted voting scheme. in addition, we provide sharp theoretical upper bounds on the number of subsamples required for the distributed nearest neighbor classifier to reach the optimal convergence rate. it is interesting to note that the weighted voting scheme allows a larger number of subsamples than majority voting. our findings are supported by numerical studies using both simulated and real data sets.
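the majority-voting rule described above can be sketched as follows; this is an illustrative toy (1-nearest-neighbor on random subsamples), not the paper's estimator or its theoretical setting.

```python
# Illustrative sketch (not the paper's code): a distributed 1-NN classifier
# that splits the training data into subsamples, fits a nearest-neighbour
# classifier on each, and combines the local predictions by majority voting.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def distributed_knn_predict(X_train, y_train, X_test, n_subsamples=5, k=1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_train))
    votes = []
    for part in np.array_split(idx, n_subsamples):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_train[part], y_train[part])
        votes.append(clf.predict(X_test))
    votes = np.stack(votes)                      # shape: (n_subsamples, n_test)
    # majority vote over the local predictions
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# toy usage
X = np.random.randn(500, 3)
y = (X[:, 0] > 0).astype(int)
y_hat = distributed_knn_predict(X[:400], y[:400], X[400:])
```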
thwarting adversarial examples: an $l_0$-robust sparse fourier transform
we give a new algorithm for approximating the discrete fourier transform of an approximately sparse signal that has been corrupted by worst-case $l_0$ noise, namely a bounded number of coordinates of the signal have been corrupted arbitrarily. our techniques generalize to a wide range of linear transformations that are used in data analysis such as the discrete cosine and sine transforms, the hadamard transform, and their high-dimensional analogs. we use our algorithm to successfully defend against well known $l_0$ adversaries in the setting of image classification. we give experimental results on the jacobian-based saliency map attack (jsma) and the carlini wagner (cw) $l_0$ attack on the mnist and fashion-mnist datasets as well as the adversarial patch on the imagenet dataset.
transfer learning using representation learning in massive open online courses
in a massive open online course (mooc), predictive models of student behavior can support multiple aspects of learning, including instructor feedback and timely intervention. ongoing courses, where student outcomes are not yet known, must rely on models trained from the historical data of previously offered courses. it is possible to transfer models, but they often have poor prediction performance. one reason is that the features inadequately represent predictive attributes common to both courses. we present an automated transductive transfer learning approach that addresses this issue. it relies on a problem-agnostic, temporal organization of the mooc clickstream data, where, for each student and for multiple courses, a set of specific mooc event types is expressed for each time unit. it consists of two alternative transfer methods based on representation learning with auto-encoders: a passive approach using transductive principal component analysis and an active approach that uses a correlation alignment loss term. with these methods, we investigate the transferability of dropout prediction across similar and dissimilar moocs and compare with known methods. results show improved model transferability and suggest that the methods are capable of automatically learning a feature representation that expresses common predictive characteristics of moocs.
effective feature learning with unsupervised learning for improving the predictive models in massive open online courses
the effectiveness of learning in massive open online courses (moocs) can be significantly enhanced by introducing personalized intervention schemes which rely on building predictive models of student learning behaviors such as some engagement or performance indicators. a major challenge that has to be addressed when building such models is to design handcrafted features that are effective for the prediction task at hand. in this paper, we make the first attempt to solve the feature learning problem by taking the unsupervised learning approach to learn a compact representation of the raw features with a large degree of redundancy. specifically, in order to capture the underlying learning patterns in the content domain and the temporal nature of the clickstream data, we train a modified auto-encoder (ae) combined with the long short-term memory (lstm) network to obtain a fixed-length embedding for each input sequence. when compared with the original features, the new features that correspond to the embedding obtained by the modified lstm-ae are not only more parsimonious but also more discriminative for our prediction task. using simple supervised learning models, the learned features can improve the prediction accuracy by up to 17% compared with the supervised neural networks and reduce overfitting to the dominant low-performing group of students, specifically in the task of predicting students' performance. our approach is generic in the sense that it is not restricted to a specific supervised learning model nor a specific prediction task for mooc learning analytics.
recent advances in autoencoder-based representation learning
learning useful representations with little or no supervision is a key challenge in artificial intelligence. we provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. to organize these results we make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features. in particular, we uncover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encoding and decoding distribution, or (iii) introducing a structured prior distribution. while there are some promising results, implicit or explicit supervision remains a key enabler and all current methods use strong inductive biases and modeling assumptions. finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream tasks, and how useful the representation is for this task.
building computational models to predict one-year mortality in icu patients with acute myocardial infarction and post myocardial infarction syndrome
heart disease remains the leading cause of death in the united states. compared with risk assessment guidelines that require manual calculation of scores, machine learning-based prediction of disease outcomes such as mortality can save time and improve prediction accuracy. this study built and evaluated various machine learning models to predict one-year mortality in patients diagnosed with acute myocardial infarction or post myocardial infarction syndrome in the mimic-iii database. the results of the best performing shallow prediction models were compared to a deep feedforward neural network (deep fnn) with back propagation. we included a cohort of 5436 admissions. six datasets were developed and compared. the models applying logistic model trees (lmt) and simple logistic algorithms to the combined dataset resulted in the highest prediction accuracy at 85.12% and the highest auc at 0.901. other factors were also observed to affect outcomes.
on distributed multi-player multiarmed bandit problems in abruptly changing environment
we study the multi-player stochastic multiarmed bandit (mab) problem in an abruptly changing environment. we consider a collision model in which a player receives reward at an arm if it is the only player to select the arm. we design two novel algorithms, namely, round-robin sliding-window upper confidence bound\# (rr-sw-ucb\#), and the sliding-window distributed learning with prioritization (sw-dlp). we rigorously analyze these algorithms and show that the expected cumulative group regret for these algorithms is upper bounded by sublinear functions of time, i.e., the time average of the regret asymptotically converges to zero. we complement our analytic results with numerical illustrations.
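for intuition, a simplified single-player sliding-window ucb rule is sketched below; it is not the rr-sw-ucb# or sw-dlp algorithm analyzed in the paper, and the window length and exploration constant are assumptions.

```python
# Simplified, single-player sketch of a sliding-window UCB rule for abruptly
# changing rewards (not the paper's RR-SW-UCB# or SW-DLP algorithms): only
# the last `window` plays are used to form the means and counts.
import numpy as np

def sw_ucb(reward_fn, n_arms, horizon, window=200, c=2.0, seed=0):
    rng = np.random.default_rng(seed)
    history = []                                 # (arm, reward) pairs
    choices = []
    for t in range(horizon):
        recent = history[-window:]
        counts = np.zeros(n_arms)
        sums = np.zeros(n_arms)
        for arm, r in recent:
            counts[arm] += 1
            sums[arm] += r
        if np.any(counts == 0):                  # play every arm before trusting indices
            arm = int(np.argmin(counts))
        else:
            ucb = sums / counts + np.sqrt(c * np.log(min(t + 1, window)) / counts)
            arm = int(np.argmax(ucb))
        r = reward_fn(arm, t, rng)
        history.append((arm, r))
        choices.append(arm)
    return choices

# toy environment: the best arm switches abruptly halfway through
means = lambda t: [0.9, 0.1] if t < 1000 else [0.1, 0.9]
reward_fn = lambda arm, t, rng: rng.binomial(1, means(t)[arm])
plays = sw_ucb(reward_fn, n_arms=2, horizon=2000)
```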
association analysis of common and rare snvs using adaptive fisher method to detect dense and sparse signals
the development of next generation sequencing (ngs) technology and genotype imputation methods enabled researchers to measure both common and rare variants in genome-wide association studies (gwas). statistical methods have been proposed to test a set of genomic variants together to detect whether any of them is associated with the phenotype or disease. in practice, within the set of variants, there is an unknown proportion of variants truly causal or associated with the disease. because most developed methods are sensitive to either the dense scenario, where a large proportion of the variants are associated, or the sparse scenario, where only a small proportion of the variants are associated, there is a demand for statistical methods with high power in both scenarios. in this paper, we propose a new association test (weighted adaptive fisher, waf) that can adapt to both the dense and sparse scenarios by adding weights to the adaptive fisher (af) method we developed previously. using both simulation and the genetic analysis workshop 16 (gaw16) data, we show that the new method enjoys power comparable to or better than popular methods such as the sequence kernel association test (skat and skat-o) and the adaptive spu (aspu) test.
conditional graph neural processes: a functional autoencoder approach
we introduce a novel encoder-decoder architecture to embed functional processes into latent vector spaces. this embedding can then be decoded to sample the encoded functions over any arbitrary domain. this autoencoder generalizes the recently introduced conditional neural process (cnp) model of random processes. our architecture employs the latest advances in graph neural networks to process irregularly sampled functions. thus, we refer to our model as conditional graph neural process (cgnp). graph neural networks can effectively exploit `local' structures of the metric spaces over which the functions/processes are defined. the contributions of this paper are twofold: (i) a novel graph-based encoder-decoder architecture for functional and process embeddings, and (ii) a demonstration of the importance of using the structure of metric spaces for this type of representations.
tight analyses for non-smooth stochastic gradient descent
consider the problem of minimizing functions that are lipschitz and strongly convex, but not necessarily differentiable. we prove that after $t$ steps of stochastic gradient descent, the error of the final iterate is $o(\log(t)/t)$ with high probability. we also construct a function from this class for which the error of the final iterate of deterministic gradient descent is $\omega(\log(t)/t)$. this shows that the upper bound is tight and that, in this setting, the last iterate of stochastic gradient descent has the same general error rate (with high probability) as deterministic gradient descent. this resolves both open questions posed by shamir (2012). an intermediate step of our analysis proves that the suffix averaging method achieves error $o(1/t)$ with high probability, which is optimal (for any first-order optimization method). this improves results of rakhlin (2012) and hazan and kale (2014), both of which achieved error $o(1/t)$, but only in expectation, and achieved a high probability error bound of $o(\log \log(t)/t)$, which is suboptimal. we prove analogous results for functions that are lipschitz and convex, but not necessarily strongly convex or differentiable. after $t$ steps of stochastic gradient descent, the error of the final iterate is $o(\log(t)/\sqrt{t})$ with high probability, and there exists a function for which the error of the final iterate of deterministic gradient descent is $\omega(\log(t)/\sqrt{t})$.
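a minimal numerical sketch of the setting is given below: sgd with step size $1/(\lambda t)$ on a one-dimensional lipschitz, strongly convex, non-smooth function, comparing the final iterate with the suffix average of the last half of the iterates. the objective and noise model are illustrative assumptions, not the paper's worst-case construction.

```python
# Minimal sketch of the setting in the abstract: SGD with step size
# 1/(lam*t) on the 1-D Lipschitz, strongly convex, non-smooth function
# f(x) = |x| + (lam/2) x^2, comparing the final iterate with the average of
# the last half of the iterates ("suffix averaging"). Noise model and
# constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
lam, T = 1.0, 10000
x, iterates = 5.0, []
for t in range(1, T + 1):
    subgrad = np.sign(x) + lam * x + rng.normal(scale=1.0)   # noisy subgradient
    x -= subgrad / (lam * t)                                  # step size 1/(lam*t)
    iterates.append(x)

final_iterate = iterates[-1]
suffix_average = np.mean(iterates[T // 2:])                   # average of last half
# the minimizer is x = 0; the suffix average is typically much closer to it
print(abs(final_iterate), abs(suffix_average))
```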
local probabilistic model for bayesian classification: a generalized local classification model
in bayesian classification, it is important to establish a probabilistic model for each class for likelihood estimation. most previous methods modeled the probability distribution in the whole sample space. however, real-world problems are usually too complex to model in the whole sample space; some fundamental assumptions are required to simplify the global model, for example, the class conditional independence assumption for naive bayesian classification. in this paper, with the insight that the distribution in a local sample space should be simpler than that in the whole sample space, a local probabilistic model established for a local region is expected to be much simpler and can relax the fundamental assumptions that may not hold in the whole sample space. based on these advantages, we propose establishing local probabilistic models for bayesian classification. in addition, a bayesian classifier adopting a local probabilistic model can even be viewed as a generalized local classification model; by tuning the size of the local region and the corresponding local model assumption, a fitting model can be established for a particular classification problem. the experimental results on several real-world datasets demonstrate the effectiveness of local probabilistic models for bayesian classification.
next hit predictor - self-exciting risk modeling for predicting next locations of serial crimes
our goal is to predict the location of the next crime in a crime series, based on the identified previous offenses in the series. we build a predictive model called next hit predictor (nhp) that finds the most likely location of the next serial crime via a carefully designed risk model. the risk model follows the paradigm of a self-exciting point process which consists of a background crime risk and triggered risks stimulated by previous offenses in the series. thus, nhp creates a risk map for a crime series at hand. to train the risk model, we formulate a convex learning objective that considers pairwise rankings of locations and use stochastic gradient descent to learn the optimal parameters. next hit predictor incorporates both spatial-temporal features and geographical characteristics of prior crime locations in the series. next hit predictor has demonstrated promising results on decades' worth of serial crime data collected by the crime analysis unit of the cambridge police department in massachusetts, usa.
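the risk-map construction can be sketched generically as a background rate plus kernels triggered by earlier offenses; the code below is an illustrative toy with hypothetical bandwidths and weights, not the trained nhp model.

```python
# Illustrative sketch of a self-exciting spatial risk map (not the trained
# NHP model): the risk at a grid cell is a constant background plus kernels
# triggered by previous offenses that decay with distance and elapsed time.
# Bandwidths and weights below are hypothetical.
import numpy as np

def risk_map(grid_xy, offenses_xy, offense_times, t_now,
             background=0.1, w=1.0, sigma=0.5, beta=0.2):
    risk = np.full(len(grid_xy), background)
    for (ox, oy), ot in zip(offenses_xy, offense_times):
        d2 = (grid_xy[:, 0] - ox) ** 2 + (grid_xy[:, 1] - oy) ** 2
        risk += w * np.exp(-d2 / (2 * sigma ** 2)) * np.exp(-beta * (t_now - ot))
    return risk

# toy usage: a 20x20 grid and three prior offenses in the series
xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
grid = np.column_stack([xs.ravel(), ys.ravel()])
offenses = np.array([[2.0, 3.0], [2.5, 3.5], [6.0, 7.0]])
times = np.array([0.0, 5.0, 9.0])
r = risk_map(grid, offenses, times, t_now=10.0)
predicted_next = grid[np.argmax(r)]       # highest-risk cell
```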
shortcut matrix product states and its applications
matrix product states (mps), also known as the tensor train (tt) decomposition in mathematics, were originally proposed for describing (especially one-dimensional) quantum systems, and have recently found use in various applications such as compressing high-dimensional data, supervised kernel linear classifiers, and unsupervised generative modeling. however, when applied to systems that are not defined on one-dimensional lattices, a serious drawback of the mps is the exponential decay of correlations, which limits its power in capturing long-range dependencies among variables in the system. to alleviate this problem, we propose to introduce long-range interactions, which act as shortcuts, into the mps, resulting in a new model, the \textit{shortcut matrix product states} (smps). when chosen properly, the shortcuts can significantly decrease the correlation length of the mps while preserving computational efficiency. we develop efficient training methods for smps for various tasks, establish some of their mathematical properties, and show how to find good locations to add shortcuts. finally, using extensive numerical experiments, we evaluate its performance in a variety of applications, including function fitting, partition function calculation of the $2$-d ising model, and unsupervised generative modeling of handwritten digits, to illustrate its advantages over vanilla matrix product states.
on the differences between l2-boosting and the lasso
we prove that l2-boosting lacks a theoretical property which is central to the behaviour of l1-penalized methods such as basis pursuit and the lasso: whereas l1-penalized methods are guaranteed to recover the sparse parameter vector in a high-dimensional linear model under an appropriate restricted nullspace property, l2-boosting is not guaranteed to do so. hence, l2-boosting behaves quite differently from l1-penalized methods when it comes to parameter recovery/estimation in high-dimensional linear models.
machine learning for anomaly detection and categorization in multi-cloud environments
recently, advances in machine learning techniques have attracted the attention of the research community to build intrusion detection systems (ids) that can detect anomalies in the network traffic. most of the research works, however, do not differentiate among different types of attacks. this is, in fact, necessary for appropriate countermeasures and defense against attacks. in this paper, we investigate both detecting and categorizing anomalies rather than just detecting, which is a common trend in the contemporary research works. we have used a popular publicly available dataset to build and test learning models for both detection and categorization of different attacks. to be precise, we have used two supervised machine learning techniques, namely linear regression (lr) and random forest (rf). we show that even if detection is perfect, categorization can be less accurate due to similarities between attacks. our results demonstrate more than 99% detection accuracy and categorization accuracy of 93.6%, with the inability to categorize some attacks. further, we argue that such categorization can be applied to multi-cloud environments using the same machine learning techniques.
a probabilistic model of the bitcoin blockchain
the bitcoin transaction graph is a public data structure organized as transactions between addresses, each associated with a logical entity. in this work, we introduce a complete probabilistic model of the bitcoin blockchain. we first formulate a set of conditional dependencies induced by the bitcoin protocol at the block level and derive a corresponding fully observed graphical model of a bitcoin block. we then extend the model to include hidden entity attributes such as the functional category of the associated logical agent and derive asymptotic bounds on the privacy properties implied by this model. at the network level, we show evidence of complex transaction-to-transaction behavior and present a relevant discriminative model of the agent categories. performance of both the block-based graphical model and the network-level discriminative model is evaluated on a subset of the public bitcoin blockchain.
gaussian process deep belief networks: a smooth generative model of shape with uncertainty propagation
the shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking. because shape is independent of appearance, it is possible to generalize to a large range of objects from only small amounts of data. however, shapes represented as silhouette images are challenging to model due to complicated likelihood functions leading to intractable posteriors. in this paper we present a generative model of shapes which provides a low dimensional latent encoding that, importantly, resides on a smooth manifold with respect to the silhouette images. the proposed model propagates uncertainty in a principled manner, allowing it to learn from small amounts of data and to provide predictions with associated uncertainty. our experiments show that the proposed model provides favorable quantitative results compared with the state-of-the-art while simultaneously providing a representation that resides on a low-dimensional interpretable manifold.
simultaneous confidence intervals for ranks with application to ranking institutions
when a ranking of institutions such as medical centers or universities is based on an indicator provided with a standard error, confidence intervals should be calculated to assess the quality of these ranks. we consider the problem of constructing simultaneous confidence intervals for the ranks of means based on an observed sample. for this aim, the only available method from the literature uses monte-carlo simulations and is highly anticonservative, especially when the means are close to each other or have ties. we present a novel method based on tukey's honest significant difference test (hsd). our new method is, on the contrary, conservative when there are no ties. when both methods are properly rescaled to the nominal confidence level, they surprisingly perform very similarly. the monte-carlo method is, however, unscalable when the number of institutions is larger than 30 to 50 and thus stays anticonservative. we provide extensive simulations to support our claims, and the two methods are compared in terms of their simultaneous coverage and their efficiency. we provide a data analysis for 64 hospitals in the netherlands and compare both methods. software for our new method is available online in the package icranks, downloadable from cran. supplementary materials include r code for the simulations and proofs of the propositions presented in this paper.
high dimensional inference for the structural health monitoring of lock gates
locks and dams are critical pieces of inland waterways. however, many components of existing locks have been in operation past their designed lifetime. to ensure safe and cost effective operations, it is therefore important to monitor the structural health of locks. to support lock gate monitoring, this work considers a high dimensional bayesian inference problem that combines noisy real time strain observations with a detailed finite element model. to solve this problem, we develop a new technique that combines karhunen-lo\`eve decompositions, stochastic differential equation representations of gaussian processes, and kalman smoothing that scales linearly with the number of observations and could be used for near real-time monitoring. we use quasi-periodic gaussian processes to model thermal influences on the strain and infer spatially distributed boundary conditions in the model, which are also characterized with gaussian process prior distributions. the power of this approach is demonstrated on a small synthetic example and then with real observations of mississippi river lock 27, which is located near st. louis, mo usa. the results show that our approach is able to probabilistically characterize the posterior distribution over nearly 1.4 million parameters in under an hour on a standard desktop computer.
stochastic image deformation in frequency domain and parameter estimation using moment evolutions
modelling deformation of anatomical objects observed in medical images can help describe disease progression patterns and variations in anatomy across populations. we apply a stochastic generalisation of the large deformation diffeomorphic metric mapping (lddmm) framework to model differences in the evolution of anatomical objects detected in populations of image data. the computational challenges that are prevalent even in the deterministic lddmm setting are handled by extending the flash lddmm representation to the stochastic setting keeping a finite discretisation of the infinite dimensional space of image deformations. in this computationally efficient setting, we perform estimation to infer parameters for noise correlations and local variability in datasets of images. fundamental for the optimisation procedure is using the finite dimensional fourier representation to derive approximations of the evolution of moments for the stochastic warps. particularly, the first moment allows us to infer deformation mean trajectories. the second moment encodes variation around the mean, and thus provides information on the noise correlation. we show on simulated datasets of 2d mr brain images that the estimation algorithm can successfully recover parameters of the stochastic model.
optimal designs for series estimation in nonparametric regression with correlated data
in this paper we investigate the problem of designing experiments for series estimators in nonparametric regression models with correlated observations. we use projection based estimators to derive an explicit solution of the best linear oracle estimator in the continuous time model for all markovian-type error processes. these solutions are then used to construct estimators, which can be calculated from the available data along with their corresponding optimal design points. our results are illustrated by means of a simulation study, which demonstrates that the new series estimator has a better performance than the commonly used techniques based on the optimal linear unbiased estimators. moreover, we show that the performance of the estimators proposed in this paper can be further improved by choosing the design points appropriately.
kalman-based spectro-temporal ecg analysis using deep convolutional networks for atrial fibrillation detection
in this article, we propose a novel ecg classification framework for atrial fibrillation (af) detection using spectro-temporal representation (i.e., time varying spectrum) and deep convolutional networks. in the first step we use a bayesian spectro-temporal representation based on the estimation of time-varying coefficients of fourier series using kalman filter and smoother. next, we derive an alternative model based on a stochastic oscillator differential equation to accelerate the estimation of the spectro-temporal representation in lengthy signals. finally, after comparative evaluations of different convolutional architectures, we propose an efficient deep convolutional neural network to classify the 2d spectro-temporal ecg data. the ecg spectro-temporal data are classified into four different classes: af, non-af normal rhythm (normal), non-af abnormal rhythm (other), and noisy segments (noisy). the performance of the proposed methods is evaluated and scored with the physionet/computing in cardiology (cinc) 2017 dataset. the experimental results show that the proposed method achieves the overall f1 score of 80.2%, which is in line with the state-of-the-art algorithms.
a probe towards understanding gan and vae models
this project report compares some known gan and vae models proposed prior to 2017. there has been significant progress since we finished this report. we upload this report as an introduction to generative models and provide some personal interpretations supported by empirical evidence. both generative adversarial network models and variational autoencoders have been widely used to approximate probability distributions of data sets. although they both use parametrized distributions to approximate the underlying data distribution, whose exact inference is intractable, their behaviors are very different. we summarize our experimental results comparing these two categories of models in terms of fidelity and mode collapse. we provide a hypothesis to explain their different behaviors and propose a new model based on this hypothesis. we further test our proposed model on the mnist and celeba datasets.
bayesian sparsification of gated recurrent neural networks
bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structural units from the networks, e.g., neurons. we apply and further develop this approach for gated recurrent architectures. specifically, in addition to sparsification of individual weights and neurons, we propose to sparsify preactivations of gates and information flow in lstm. this makes some gates and information-flow components constant, speeds up the forward pass and improves compression. moreover, the resulting structure of gate sparsity is interpretable and depends on the task. code is available on github: https://github.com/tipt0p/sparsebayesianrnn
higher moment estimation for elliptically-distributed data: is it necessary to use a sledgehammer to crack an egg?
multivariate elliptically-contoured distributions are widely used for modeling economic and financial data. we study the problem of estimating moment parameters of a semi-parametric elliptical model in a high-dimensional setting. such estimators are useful for financial data analysis and quadratic discriminant analysis. for low-dimensional elliptical models, efficient moment estimators can be obtained by plugging in an estimate of the precision matrix. natural generalizations of the plug-in estimator to high-dimensional settings perform unsatisfactorily, due to estimating a large precision matrix. do we really need a sledgehammer to crack an egg? fortunately, we discover that moment parameters can be efficiently estimated without estimating the precision matrix in high-dimension. we propose a marginal aggregation estimator (mae) for moment parameters. the mae only requires estimating the diagonal of covariance matrix and is convenient to implement. with mild sparsity on the covariance structure, we prove that the asymptotic variance of mae is the same as the ideal plug-in estimator which knows the true precision matrix, so mae is asymptotically efficient. we also extend mae to a block-wise aggregation estimator (bae) when estimates of diagonal blocks of covariance matrix are available. the performance of our methods is validated by extensive simulations and an application to financial returns.
effectiveness of hierarchical softmax in large scale classification tasks
typically, softmax is used in the final layer of a neural network to obtain a probability distribution over output classes. the main problem with softmax is that it is computationally expensive for large scale data sets with a large number of possible outputs. to approximate class probabilities efficiently on such large scale data sets we can use hierarchical softmax. lshtc datasets, which have a large number of categories, were used to study the performance of hierarchical softmax. in this paper we evaluate and report the performance of normal softmax vs hierarchical softmax on lshtc datasets, using the macro f1 score as the performance measure. the observation was that the performance of hierarchical softmax degrades as the number of classes increases.
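a minimal two-level hierarchical softmax, where a class probability factorizes as p(cluster | x) * p(class | cluster, x), is sketched below; the cluster structure and sizes are assumptions, not the lshtc setup used in the paper.

```python
# Minimal two-level hierarchical softmax sketch (illustrative, not the
# paper's experimental setup): classes are grouped into clusters, and
# p(class | x) = p(cluster | x) * p(class | cluster, x), so each prediction
# evaluates one small softmax per level instead of one over all classes.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
n_features, n_clusters, classes_per_cluster = 10, 8, 8   # 64 classes total
W_cluster = rng.normal(size=(n_clusters, n_features))                     # level 1
W_class = rng.normal(size=(n_clusters, classes_per_cluster, n_features))  # level 2

def hierarchical_softmax_prob(x, cluster_id, class_id):
    p_cluster = softmax(W_cluster @ x)[cluster_id]
    p_class = softmax(W_class[cluster_id] @ x)[class_id]
    return p_cluster * p_class

x = rng.normal(size=n_features)
# probabilities over all 64 classes still sum to 1
total = sum(hierarchical_softmax_prob(x, c, k)
            for c in range(n_clusters) for k in range(classes_per_cluster))
print(total)  # ~1.0
```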
on stacked denoising autoencoder based pre-training of ann for isolated handwritten bengali numerals dataset recognition
this work attempts to find the optimal parameter setting of a deep artificial neural network (ann) for a bengali digit dataset by pre-training it using a stacked denoising autoencoder (sda). although sda based recognition is hugely popular in image, speech and language processing related tasks among researchers, it was never tried on bengali dataset recognition. for this work, a dataset of 70000 handwritten samples from (chowdhury and rahman, 2016) was used and recognized using several network architecture settings. among all these settings, the optimal one was found to be five or more hidden layers with sigmoid activation and one output layer with softmax activation. we propose that the optimal number of neurons per hidden layer is 1500 or more. the minimum validation error found in this work is 2.34%, which is the lowest error rate reported on a handwritten bengali dataset to date.
making sense of random forest probabilities: a kernel perspective
a random forest is a popular tool for estimating probabilities in machine learning classification tasks. however, the means by which this is accomplished is unprincipled: one simply counts the fraction of trees in a forest that vote for a certain class. in this paper, we forge a connection between random forests and kernel regression. this places random forest probability estimation on more sound statistical footing. as part of our investigation, we develop a model for the proximity kernel and relate it to the geometry and sparsity of the estimation problem. we also provide intuition and recommendations for tuning a random forest to improve its probability estimates.
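the vote-counting estimate referred to above can be made explicit with scikit-learn; the sketch below simply compares the fraction of trees voting for a class with predict_proba on a synthetic binary problem (with fully grown trees the two numbers typically coincide).

```python
# Sketch of the vote-counting probability estimate the abstract refers to:
# the predicted probability of a class is the fraction of trees in the
# forest voting for it, shown here next to predict_proba for comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x0 = X[:1]
# labels are 0/1 here, so each tree's prediction coincides with the class label
votes = np.array([tree.predict(x0)[0] for tree in forest.estimators_])
p_vote_fraction = (votes == 1).mean()          # fraction of trees voting class 1
print(p_vote_fraction, forest.predict_proba(x0)[0, 1])
```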
rethinking layer-wise feature amounts in convolutional neural network architectures
we characterize convolutional neural networks with respect to the relative amount of features per layer. using a skew normal distribution as a parametrized framework, we investigate the common assumption of monotonously increasing feature-counts with higher layers of architecture designs. our evaluation on models with vgg-type layers on the mnist, fashion-mnist and cifar-10 image classification benchmarks provides evidence that motivates rethinking of our common assumption: architectures that favor larger early layers seem to yield better accuracy.
dateline: deep plackett-luce model with uncertainty measurements
the aggregation of k-ary preferences is a historical and important problem, since it has many real-world applications, such as peer grading, presidential elections and restaurant ranking. meanwhile, variants of the plackett-luce model have been applied to aggregate k-ary preferences. however, two urgent issues remain in the current variants. first, most of them ignore feature information; namely, they consider k-ary preferences instead of instance-dependent k-ary preferences. second, these variants barely consider the uncertainty in k-ary preferences provided by agnostic crowds. in this paper, we propose the deep plackett-luce model with uncertainty measurements (dateline), which can address both issues simultaneously. to address the first issue, we employ deep neural networks to map each instance to its ranking score in the plackett-luce model. then, we present a weighted plackett-luce model to solve the second issue, where the weight is a dynamic uncertainty vector measuring worker quality. more importantly, we provide theoretical guarantees for dateline to justify its robustness.
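for background, the (weighted) plackett-luce likelihood that dateline builds on can be written as a product of softmax terms over the not-yet-ranked items; the sketch below computes this log-likelihood from given scores and is not the dateline network itself.

```python
# Background sketch of the (weighted) Plackett-Luce likelihood the abstract
# builds on, not the DATELINE model: given per-item scores s, the probability
# of an observed ranking is a product of softmax terms over the items not yet
# ranked; optional per-position weights stand in for, e.g., worker quality.
import numpy as np

def plackett_luce_log_likelihood(scores, ranking, weights=None):
    """ranking lists item indices from best to worst; weights default to 1."""
    if weights is None:
        weights = np.ones(len(ranking))
    ll, remaining = 0.0, list(ranking)
    for w, item in zip(weights, ranking):
        logits = scores[remaining]
        # log softmax of the chosen item among the remaining ones
        log_p = scores[item] - logits.max() - np.log(np.exp(logits - logits.max()).sum())
        ll += w * log_p
        remaining.remove(item)
    return ll

scores = np.array([2.0, 0.5, 1.0, -1.0])          # e.g., outputs of a deep net
print(plackett_luce_log_likelihood(scores, ranking=[0, 2, 1, 3]))
```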
detecting faltering growth in children via minimum random slopes
a child is considered to have faltering growth when increases in their height or weight start to decline relative to a suitable comparison population. however, there is currently a lack of consensus on both the choice of anthropometric indexes for characterizing growth over time and the operational definition of faltering. cole's classic conditional standard deviation score is a popular metric but can be problematic, since it only utilizes two data points and relies on having complete data. in the existing literature, arbitrary thresholds are often used to define faltering, which may not be appropriate for all populations. in this article, we propose to assess faltering via minimum random slopes (mrs) derived from a piecewise linear mixed model. when used in conjunction with mixture model-based classification, mrs provides a viable method for identifying children that have faltered, without being dependent upon arbitrary standards. we illustrate our work via a simulation study and apply it to a case study based on a birth cohort within the healthy birth, growth and development knowledge integration (hbgdki) project funded by the bill and melinda gates foundation.
automatic differentiation in mixture models
in this article, we discuss two specific classes of models - gaussian mixture copula models and mixtures of factor analyzers - and the advantages of doing inference with gradient descent using automatic differentiation. gaussian mixture models are a popular class of clustering methods that offer a principled statistical approach to clustering. however, the underlying assumption that every mixing component is normally distributed can often be too rigid for several real life datasets. in order to relax the assumption about the normality of mixing components, a new class of parametric mixture models based on copula functions - gaussian mixture copula models - was introduced. estimating the parameters of the proposed gaussian mixture copula model (gmcm) through maximum likelihood has been intractable due to the positive semi-definite constraints on the variance-covariance matrices. previous attempts were limited to maximizing a proxy-likelihood which can be maximized using the em algorithm. these existing methods, even though easier to implement, do not guarantee convergence or a monotonic increase of the gmcm likelihood. in this paper, we use automatic differentiation tools to maximize the exact likelihood of the gmcm, at the same time avoiding any constraint equations or lagrange multipliers. we show how our method leads to a monotonic increase in likelihood and converges to a (local) optimum value of the likelihood. we also show how automatic differentiation can be used for inference with mixtures of factor analyzers and the advantages of doing so, and discuss how this method likewise enjoys a monotonic increase in likelihood and convergence to a local optimum. note that our work is also applicable to special cases of these two models, e.g., simple copula models, the factor analyzer model, etc.
context-encoding variational autoencoder for unsupervised anomaly detection
unsupervised learning can leverage large-scale data sources without the need for annotations. in this context, deep learning-based autoencoders have shown great potential in detecting anomalies in medical images. however, state-of-the-art anomaly scores are still based on the reconstruction error, which is lacking in two essential respects: it ignores the model-internal representation employed for reconstruction, and it lacks formal assertions and comparability between samples. we address these shortcomings by proposing the context-encoding variational autoencoder (cevae), which combines reconstruction-based with density-based anomaly scoring. this improves both the sample-wise and pixel-wise results. in our experiments on the brats-2017 and isles-2015 segmentation benchmarks, the cevae achieves unsupervised roc-aucs of 0.95 and 0.89, respectively, thus outperforming state-of-the-art methods by a considerable margin.
why relu units sometimes die: analysis of single-unit error backpropagation in neural networks
recently, neural networks in machine learning use rectified linear units (relus) in early processing layers for better performance. training these structures sometimes results in "dying relu units" with near-zero outputs. we first explore this condition via simulation using the cifar-10 dataset and variants of two popular convolutional neural network architectures. our explorations show that the output activation probability pr[y>0] is generally less than 0.5 at system convergence for layers that do not employ skip connections, and this activation probability tends to decrease as one progresses from input layer to output layer. employing a simplified model of a single relu unit trained by a variant of error backpropagation, we then perform a statistical convergence analysis to explore the model's evolutionary behavior. our analysis describes the potentially slower convergence speeds of dying relu units; this issue can occur regardless of how the weights are initialized.
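a toy single-unit simulation in the spirit of the analysis is sketched below (not the cifar-10 experiments): one relu unit trained by plain sgd on a teacher relu's outputs, tracking the activation probability pr[y>0]; the data, learning rate and sizes are assumptions.

```python
# Toy single-unit simulation in the spirit of the abstract (not the CIFAR-10
# experiments): one ReLU unit trained with plain SGD on a "teacher" ReLU's
# outputs, tracking the activation probability Pr[y > 0] over training.
import numpy as np

rng = np.random.default_rng(1)
d, n, lr = 20, 2000, 0.01
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
t = np.maximum(X @ w_true, 0.0)           # targets generated by a teacher ReLU

w = rng.normal(size=d) * 0.1              # "student" ReLU unit
for epoch in range(50):
    for i in rng.permutation(n):
        pre = X[i] @ w
        y = max(pre, 0.0)
        # backprop through the ReLU: the gradient is zero whenever pre <= 0,
        # which is exactly what lets a unit get stuck "dead"
        grad = (y - t[i]) * (pre > 0) * X[i]
        w -= lr * grad
    if epoch % 10 == 0:
        p_active = np.mean(X @ w > 0)
        print(f"epoch {epoch}: Pr[y>0] = {p_active:.3f}")
```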
inferring the size of the causal universe: features and fusion of causal attribution networks
cause-and-effect reasoning, the attribution of effects to causes, is one of the most powerful and unique skills humans possess. multiple surveys are mapping out causal attributions as networks, but it is unclear how well these efforts can be combined. further, the total size of the collective causal attribution network held by humans is currently unknown, making it challenging to assess the progress of these surveys. here we study three causal attribution networks to determine how well they can be combined into a single network. combining these networks requires dealing with ambiguous nodes, as nodes represent written descriptions of causes and effects and different descriptions may exist for the same concept. we introduce netfuses, a method for combining networks with ambiguous nodes. crucially, treating the different causal attributions networks as independent samples allows us to use their overlap to estimate the total size of the collective causal attribution network. we find that existing surveys capture 5.77% $\pm$ 0.781% of the $\approx$293 000 causes and effects estimated to exist, and 0.198% $\pm$ 0.174% of the $\approx$10 200 000 attributed cause-effect relationships.
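the overlap-based size estimate is in the spirit of two-sample capture-recapture; a minimal lincoln-petersen sketch with made-up counts is shown below (the netfuses fusion step itself is not reproduced).

```python
# Capture-recapture sketch in the spirit of the overlap-based estimate in the
# abstract; the counts below are made up for illustration and the NetFUSES
# node-fusion procedure itself is not reproduced.
n1, n2 = 12000, 9000        # nodes captured by two hypothetical surveys
overlap = 450               # nodes appearing in both, after fusing synonyms
estimated_total = n1 * n2 / overlap               # Lincoln-Petersen estimator
coverage = (n1 + n2 - overlap) / estimated_total
print(f"estimated total: {estimated_total:.0f}, coverage: {coverage:.2%}")
```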
conditional bias reduction can be dangerous: a key example from sequential analysis
we present a key example from sequential analysis, which illustrates that conditional bias reduction can cause infinite mean absolute error.
transfer learning to model inertial confinement fusion experiments
inertial confinement fusion (icf) experiments are designed using computer simulations that are approximations of reality, and therefore must be calibrated to accurately predict experimental observations. in this work, we propose a novel nonlinear technique for calibrating from simulations to experiments, or from low fidelity simulations to high fidelity simulations, via "transfer learning". transfer learning is a commonly used technique in the machine learning community, in which models trained on one task are partially retrained to solve a separate, but related task, for which there is a limited quantity of data. we introduce the idea of hierarchical transfer learning, in which neural networks trained on low fidelity models are calibrated to high fidelity models, then to experimental data. this technique essentially bootstraps the calibration process, enabling the creation of models which predict high fidelity simulations or experiments with minimal computational cost. we apply this technique to a database of icf simulations and experiments carried out at the omega laser facility. transfer learning with deep neural networks enables the creation of models that are more predictive of omega experiments than simulations alone. the calibrated models accurately predict future omega experiments, and are used to search for new, optimal implosion designs.
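a generic sketch of the "partial retraining" idea is given below: freeze the early layers of a network pretrained on low-fidelity simulations and re-fit only the last layer on a small higher-fidelity set. the architecture and data are toy assumptions, not the omega models.

```python
# Generic sketch of the "partial retraining" idea described in the abstract
# (not the authors' hierarchical-transfer code): a network pretrained on
# low-fidelity simulations has its early layers frozen, and only the last
# layer is re-fit on a small set of higher-fidelity (or experimental) data.
import torch
import torch.nn as nn

base = nn.Sequential(                  # pretend this was trained on simulations
    nn.Linear(9, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

for p in base[:-1].parameters():       # freeze everything except the last layer
    p.requires_grad = False

opt = torch.optim.Adam(base[-1].parameters(), lr=1e-3)
x_exp = torch.randn(32, 9)             # small batch of "experimental" inputs (toy)
y_exp = torch.randn(32, 1)             # and observed outputs (toy)

for _ in range(200):                   # fine-tune only the unfrozen layer
    opt.zero_grad()
    loss = nn.functional.mse_loss(base(x_exp), y_exp)
    loss.backward()
    opt.step()
```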
non-factorised variational inference in dynamical systems
we focus on variational inference in dynamical systems where the discrete time transition function (or evolution rule) is modelled by a gaussian process. the dominant approach so far has been to use a factorised posterior distribution, decoupling the transition function from the system states. this is not exact in general and can lead to an overconfident posterior over the transition function as well as an overestimation of the intrinsic stochasticity of the system (process noise). we propose a new method that addresses these issues and incurs no additional computational costs.
stochastic comparisons between the extreme claim amounts from two heterogeneous portfolios in the case of transmuted-g model
let $x_{\lambda_1}, \ldots , x_{\lambda_n}$ be independent non-negative random variables belonging to the transmuted-g model and let $y_i=i_{p_i} x_{\lambda_i}$, $i=1,\ldots,n$, where $i_{p_1}, \ldots, i_{p_n}$ are independent bernoulli random variables, independent of the $x_{\lambda_i}$'s, with ${\rm e}[i_{p_i}]=p_i$, $i=1,\ldots,n$. in actuarial science, $y_i$ corresponds to the claim amount in a portfolio of risks. in this paper we compare the smallest and the largest claim amounts of two sets of independent portfolios belonging to the transmuted-g model, in the sense of the usual stochastic order, hazard rate order and dispersive order, when the variables in one set have the parameters $\lambda_1,\ldots,\lambda_n$ and the variables in the other set have the parameters $\lambda^{*}_1,\ldots,\lambda^{*}_n$. for illustration we apply the results to the transmuted-g exponential and the transmuted-g weibull models.
coupled representation learning for domains, intents and slots in spoken language understanding
representation learning is an essential problem in a wide range of applications and it is important for performing downstream tasks successfully. in this paper, we propose a new model that learns coupled representations of domains, intents, and slots by taking advantage of their hierarchical dependency in a spoken language understanding system. our proposed model learns the vector representation of intents based on the slots tied to these intents by aggregating the representations of the slots. similarly, the vector representation of a domain is learned by aggregating the representations of the intents tied to a specific domain. to the best of our knowledge, it is the first approach to jointly learning the representations of domains, intents, and slots using their hierarchical relationships. the experimental results demonstrate the effectiveness of the representations learned by our model, as evidenced by improved performance on the contextual cross-domain reranking task.
a general bayesian approach to meet different inferential goals in poverty research for small areas
poverty mapping that displays spatial distribution of various poverty indices is most useful to policymakers and researchers when they are disaggregated into small geographic units, such as cities, municipalities or other administrative partitions of a country. typically, national household surveys that contain welfare variables such as income and expenditures provide limited or no data for small areas. it is well-known that while direct survey-weighted estimates are quite reliable for national or large geographical areas they are unreliable for small geographic areas. if the objective is to find areas with extreme poverty, these direct estimates will often select small areas due to the high variabilities in the estimates. empirical best prediction and bayesian methods have been proposed to improve on the direct point estimates. however, these estimates are not appropriate for different inferential purposes. for example, for identifying areas with extreme poverty, these estimates would often select areas with large sample sizes. in this paper, using databases used by the chilean ministry for their small area estimation production, we illustrate how appropriate bayesian methodology can be developed to address different inferential problems.
bias mitigation post-processing for individual and group fairness
whereas previous post-processing approaches for increasing the fairness of predictions of biased classifiers address only group fairness, we propose a method for increasing both individual and group fairness. our novel framework includes an individual bias detector used to prioritize data samples in a bias mitigation algorithm aiming to improve the group fairness measure of disparate impact. we show superior performance to previous work in the combination of classification accuracy, individual fairness and group fairness on several real-world datasets in applications such as credit, employment, and criminal justice.
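the group-fairness measure named above, disparate impact, is simply the ratio of favorable-outcome rates between the unprivileged and privileged groups; a minimal sketch is shown below (the bias-mitigation algorithm itself is not reproduced).

```python
# Sketch of the group-fairness measure named in the abstract (disparate
# impact): the ratio of favorable-outcome rates between the unprivileged and
# privileged groups; values near 1 indicate group fairness. The
# post-processing algorithm itself is not reproduced here.
import numpy as np

def disparate_impact(y_pred, protected):
    """y_pred: 1 = favorable outcome; protected: 1 = unprivileged group."""
    rate_unpriv = y_pred[protected == 1].mean()
    rate_priv = y_pred[protected == 0].mean()
    return rate_unpriv / rate_priv

# toy predictions and group labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])
print(disparate_impact(y_pred, protected))   # commonly flagged if below ~0.8
```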
a claim score for dynamic claim counts modeling
we develop a claim score based on the bonus-malus approach proposed by [7]. we compare the fit and predictive ability of this new model with various models for panel count data. in particular, we study in more detail a new dynamic model based on the harvey-fernandès (hf) approach, which gives different weights to claims according to their date of occurrence. we show that the hf model has serious shortcomings that limit its use in practice. in contrast, the bonus-malus model does not have these defects. instead, it has several interesting properties: interpretability, computational advantages and ease of use in practice. we believe that the flexibility of this new model means that it could be used in many other actuarial contexts. based on a real database, we show that the proposed model generates the best fit and one of the best predictive capabilities among the models tested.
few-shot classification in named entity recognition task
for many natural language processing (nlp) tasks the amount of annotated data is limited. this creates a need to apply semi-supervised learning techniques, such as transfer learning or meta-learning. in this work we tackle the named entity recognition (ner) task using a prototypical network - a metric learning technique. it learns intermediate representations of words which cluster well into named entity classes. this property of the model allows classifying words with an extremely limited number of training examples, and can potentially be used as a zero-shot learning method. by coupling this technique with transfer learning we achieve well-performing classifiers trained on only 20 instances of a target class.
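the prototypical decision rule can be sketched in a few lines: each class prototype is the mean embedding of its support examples, and a query is assigned to the nearest prototype. the embeddings below are random stand-ins for the learned word representations.

```python
# Minimal sketch of the prototypical-network decision rule: each class
# prototype is the mean embedding of its few support examples, and a query
# word is assigned to the nearest prototype. The embeddings here are random
# stand-ins for learned word representations, and the labels are examples.
import numpy as np

rng = np.random.default_rng(0)
emb_dim, n_support = 32, 20                         # e.g., 20 instances per class
support = {"PER": rng.normal(0.5, 1, (n_support, emb_dim)),
           "LOC": rng.normal(-0.5, 1, (n_support, emb_dim))}

prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}

def classify(query_embedding):
    dists = {label: np.linalg.norm(query_embedding - proto)
             for label, proto in prototypes.items()}
    return min(dists, key=dists.get)

print(classify(rng.normal(0.5, 1, emb_dim)))        # most likely "PER"
```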
an empirical model of large-batch training
in an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. however the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in imagenet to batches of millions in rl agents that play the game dota 2. to our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. in this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (mnist, svhn, cifar-10, imagenet, billion word), reinforcement learning domains (atari and dota), and even generative model training (autoencoders on svhn). we find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training.
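a rough sketch of the "simple" gradient noise scale, the trace of the per-example gradient covariance divided by the squared norm of the mean gradient, is given below; the per-example gradients are synthetic stand-ins for those obtained by backpropagation.

```python
# Rough sketch of the "simple" gradient noise scale described in the paper:
# the trace of the per-example gradient covariance divided by the squared
# norm of the mean gradient. The per-example gradients here are synthetic;
# in practice they would come from backpropagation.
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_params = 4096, 1000
true_grad = rng.normal(size=n_params)
per_example_grads = true_grad + rng.normal(scale=5.0, size=(n_examples, n_params))

g_mean = per_example_grads.mean(axis=0)
trace_cov = per_example_grads.var(axis=0, ddof=1).sum()   # tr(Sigma)
noise_scale = trace_cov / (g_mean @ g_mean)
print(noise_scale)   # suggests the batch size beyond which returns diminish
```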
ordering the smallest claim amounts from two sets of interdependent heterogeneous portfolios
let $x_{\lambda_1},\ldots,x_{\lambda_n}$ be a set of dependent and non-negative random variables sharing a survival copula and let $y_i= i_{p_i}x_{\lambda_i}$, $i=1,\ldots,n$, where $i_{p_1},\ldots,i_{p_n}$ are independent bernoulli random variables, independent of the $x_{\lambda_i}$'s, with ${\rm e}[i_{p_i}]=p_i$, $i=1,\ldots,n$. in actuarial science, $y_i$ corresponds to the claim amount in a portfolio of risks. this paper compares the smallest claim amounts from two sets of interdependent portfolios, in the sense of the usual and likelihood ratio orders, when the variables in one set have the parameters $\lambda_1,\ldots,\lambda_n$ and $p_1,\ldots,p_n$ and the variables in the other set have the parameters $\lambda^{*}_1,\ldots,\lambda^{*}_n$ and $p^*_1,\ldots,p^*_n$. we also present some bounds for the survival function of the smallest claim amount in a portfolio. to illustrate the validity of the results, we present some applicable models.
recycled least squares estimation in nonlinear regression
we consider a resampling scheme for parameter estimates in nonlinear regression models. we provide an estimation procedure which recycles, via random weighting, the relevant parameter estimates to construct consistent estimates of the sampling distribution of the various estimates. we establish the asymptotic normality of the resampled estimates and demonstrate the applicability of the recycling approach in a small simulation study and via an example.
learning latent subspaces in variational autoencoders
variational autoencoders (vaes) are widely used deep generative models capable of learning unsupervised latent representations of data. such representations are often difficult to interpret or control. we consider the problem of unsupervised learning of features correlated to specific labels in a dataset. we propose a vae-based generative model which we show is capable of extracting features correlated to binary labels in the data and structuring them in a latent subspace which is easy to interpret. our model, the conditional subspace vae (csvae), uses mutual information minimization to learn a low-dimensional latent subspace associated with each label that can easily be inspected and independently manipulated. we demonstrate the utility of the learned representations for attribute manipulation tasks on both the toronto face and celeba datasets.
inter-sentence relation extraction for associating biological context with events in biomedical texts
we present an analysis of the problem of identifying biological context and associating it with biochemical events in biomedical texts. this constitutes a non-trivial, inter-sentential relation extraction task. we focus on biological context as descriptions of the species, tissue type and cell type that are associated with biochemical events. we describe the properties of an annotated corpus of context-event relations and present and evaluate several classifiers for context-event association trained on syntactic, distance and frequency features.
sequential multiple structural damage detection and localization: a distributed approach
as essential components of the modern urban system, the health conditions of civil structures are the foundation of urban system sustainability and need to be continuously monitored. in structural health monitoring (shm), many existing works have limited performance in the sequential damage diagnosis process because 1) damage events need to be reported with short delay, 2) multiple damage locations have to be identified simultaneously, and 3) the computational complexity is intractable in large-scale wireless sensor networks (wsns). to address these drawbacks, we propose a new damage identification approach that utilizes the time series of damage-sensitive features extracted from multiple sensors' measurements and optimal change point detection theory to find the damage occurrence time and identify the number of damage locations. as existing change point detection methods require centralizing the sensor data, which is impractical in many applications, we use a probabilistic graphical model to formulate wsns and the targeted structure, and propose a distributed algorithm for structural damage identification. validation results show highly accurate damage identification in a shake table experiment and on the american society of civil engineers benchmark structure. we also demonstrate that the detection delay is reduced significantly by utilizing multiple sensors' data.
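for intuition about the change point detection ingredient, here is a generic, centralized one-sided cusum on a single damage-sensitive feature series; the paper's distributed, multi-sensor algorithm is different, and the means, noise level and threshold below are assumptions:

```python
import numpy as np

def cusum(x, mu0, mu1, sigma, threshold):
    """One-sided CUSUM for a mean shift from mu0 to mu1 with known noise std sigma.

    Accumulates the log-likelihood ratio and flags the first index where it exceeds threshold.
    """
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
    s, stat = 0.0, []
    for v in llr:
        s = max(0.0, s + v)
        stat.append(s)
        if s > threshold:
            return len(stat) - 1, np.array(stat)   # detection index
    return None, np.array(stat)

rng = np.random.default_rng(2)
feature = np.r_[rng.normal(0.0, 1.0, 300), rng.normal(1.0, 1.0, 200)]  # shift (damage) at t = 300
print(cusum(feature, mu0=0.0, mu1=1.0, sigma=1.0, threshold=10.0)[0])
```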
causal identification under markov equivalence
assessing the magnitude of cause-and-effect relations is one of the central challenges found throughout the empirical sciences. the problem of identification of causal effects is concerned with determining whether a causal effect can be computed from a combination of observational data and substantive knowledge about the domain under investigation, which is formally expressed in the form of a causal graph. in many practical settings, however, the knowledge available for the researcher is not strong enough so as to specify a unique causal graph. another line of investigation attempts to use observational data to learn a qualitative description of the domain called a markov equivalence class, which is the collection of causal graphs that share the same set of observed features. in this paper, we marry both approaches and study the problem of causal identification from an equivalence class, represented by a partial ancestral graph (pag). we start by deriving a set of graphical properties of pags that are carried over to its induced subgraphs. we then develop an algorithm to compute the effect of an arbitrary set of variables on an arbitrary outcome set. we show that the algorithm is strictly more powerful than the current state of the art found in the literature.
adding constraints to bayesian inverse problems
using observation data to estimate unknown parameters in computational models is broadly important. this task is often challenging because solutions are non-unique due to the complexity of the model and limited observation data. however, the parameters or states of the model are often known to satisfy additional constraints beyond the model. thus, we propose an approach to improve parameter estimation in such inverse problems by incorporating constraints in a bayesian inference framework. constraints are imposed by constructing a likelihood function based on the fitness of the solution to the constraints. the posterior distribution of the parameters conditioned on (1) the observed data and (2) satisfaction of the constraints is obtained, and the estimate of the parameters is given by the maximum a posteriori estimate or the posterior mean. both equality and inequality constraints can be considered by this framework, and the strictness of the constraints can be controlled by a constraint uncertainty denoting a confidence in their correctness. furthermore, we extend this framework to an approximate bayesian inference framework in terms of the ensemble kalman filter method, where the constraint is imposed by re-weighting the ensemble members based on the likelihood function. a synthetic model is presented to demonstrate the effectiveness of the proposed method, and in both the exact bayesian inference and ensemble kalman filter scenarios, numerical simulations show that imposing constraints using the presented method improves identification of the true parameter solution among multiple local minima.
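a minimal sketch of the idea, assuming a toy two-parameter forward model, a gaussian "constraint likelihood" for an equality constraint, and a confidence parameter tau that controls how strictly the constraint is enforced (all hypothetical choices, not the paper's model):

```python
import numpy as np
from scipy.optimize import minimize

def forward(theta, x):
    # toy forward model y = theta0*x + theta1*x^2
    return theta[0] * x + theta[1] * x**2

def neg_log_posterior(theta, x, y_obs, noise_std, tau):
    data_ll = -0.5 * np.sum((y_obs - forward(theta, x))**2) / noise_std**2
    constraint_ll = -0.5 * (theta[0] + theta[1] - 1.0)**2 / tau**2   # assumed constraint: theta0 + theta1 = 1
    prior = -0.5 * np.sum(theta**2) / 10.0**2                        # weak gaussian prior
    return -(data_ll + constraint_ll + prior)

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 20)
y_obs = forward(np.array([0.3, 0.7]), x) + rng.normal(0, 0.05, x.size)
map_est = minimize(neg_log_posterior, x0=np.zeros(2), args=(x, y_obs, 0.05, 0.01)).x
print(map_est)   # MAP estimate pulled toward both the data and the constraint
```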
balanced linear contextual bandits
contextual bandit algorithms are sensitive to the estimation method of the outcome model as well as the exploration method used, particularly in the presence of rich heterogeneity or complex outcome models, which can lead to difficult estimation problems along the path of learning. we develop algorithms for contextual bandits with linear payoffs that integrate balancing methods from the causal inference literature in their estimation to make it less prone to problems of estimation bias. we provide the first regret bound analyses for linear contextual bandits with balancing and show that our algorithms match the state of the art theoretical guarantees. we demonstrate the strong practical advantage of balanced contextual bandits on a large number of supervised learning datasets and on a synthetic example that simulates model misspecification and prejudice in the initial training data.
domain-to-domain translation model for recommender system
recently, multi-domain recommender systems have received much attention from researchers because they can solve the cold-start problem as well as support cross-selling. however, when applied to multi-domain items, algorithms specifically addressing a single domain have many difficulties in capturing the specific characteristics of each domain, while multi-domain algorithms have less opportunity to obtain similar features among domains. because both similarities and differences exist among domains, multi-domain models must capture both to achieve good performance. other studies of multi-domain systems merely transfer knowledge from the source domain to the target domain, so the source domain usually comes from external factors such as the search query or social network, which is sometimes impossible to obtain. to handle these two problems, we propose a model that can extract both homogeneous and divergent features among domains, so that data in one domain can support the other domain equally: a so-called domain-to-domain translation model (d2d-tm). it is based on generative adversarial networks (gans), variational autoencoders (vaes), and cycle-consistency (cc) for weight-sharing. we use the user interaction history of each domain as input and extract latent features through a vae-gan-cc network. experiments underscore the effectiveness of the proposed system over state-of-the-art methods by a large margin.
algorithmic theory of odes and sampling from well-conditioned logconcave densities
sampling logconcave functions arising in statistics and machine learning has been a subject of intensive study. recent developments include analyses for langevin dynamics and hamiltonian monte carlo (hmc). while both approaches have dimension-independent bounds for the underlying $\mathit{continuous}$ processes under sufficiently strong smoothness conditions, the resulting discrete algorithms have complexity and number of function evaluations growing with the dimension. motivated by this problem, in this paper, we give a general algorithm for solving multivariate ordinary differential equations whose solution is close to the span of a known basis of functions (e.g., polynomials or piecewise polynomials). the resulting algorithm has polylogarithmic depth and essentially tight runtime - it is nearly linear in the size of the representation of the solution. we apply this to the sampling problem to obtain a nearly linear implementation of hmc for a broad class of smooth, strongly logconcave densities, with the number of iterations (parallel depth) and gradient evaluations being $\mathit{polylogarithmic}$ in the dimension (rather than polynomial as in previous work). this class includes the widely-used loss function for logistic regression with incoherent weight matrices and has been subject of much study recently. we also give a faster algorithm with $ \mathit{polylogarithmic~depth}$ for the more general and standard class of strongly convex functions with lipschitz gradient. these results are based on (1) an improved contraction bound for the exact hmc process and (2) logarithmic bounds on the degree of polynomials that approximate solutions of the differential equations arising in implementing hmc.
flatten-t swish: a thresholded relu-swish-like activation function for deep learning
activation functions are essential for deep learning methods to learn and perform complex tasks such as image classification. the rectified linear unit (relu) has been widely used and has become the default activation function across the deep learning community since 2012. although relu has been popular, its hard zero property heavily hinders negative values from propagating through the network. consequently, the deep neural network does not benefit from negative representations. in this work, an activation function called flatten-t swish (fts) that leverages the benefit of negative values is proposed. to verify its performance, this study evaluates fts against relu and several recent activation functions. each activation function is trained using the mnist dataset on five different deep fully connected neural networks (dfnns) with depths varying from five to eight layers. for a fair evaluation, all dfnns use the same configuration settings. based on the experimental results, fts with a threshold value of t=-0.20 has the best overall performance. compared with relu, fts (t=-0.20) improves mnist classification accuracy by 0.13%, 0.70%, 0.67%, 1.07% and 1.15% on the wider 5-layer, slimmer 5-layer, 6-layer, 7-layer and 8-layer dfnns respectively. apart from this, the study also notes that fts converges twice as fast as relu. although other existing activation functions are also evaluated, this study elects relu as the baseline activation function.
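a tiny sketch of the activation as i read it from the paper's description (treat the exact functional form as an assumption): a swish-like curve shifted by the threshold t for non-negative inputs, and the flat value t for negative inputs:

```python
import numpy as np

def flatten_t_swish(x, t=-0.20):
    """Assumed form: x * sigmoid(x) + t for x >= 0, and the constant t for x < 0."""
    return np.where(x >= 0, x / (1.0 + np.exp(-x)) + t, t)

print(flatten_t_swish(np.linspace(-3.0, 3.0, 7)))
```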
consistent estimation of residual variance with random forest out-of-bag errors
the issue of estimating residual variance in regression models has received relatively little attention in the machine learning community. however, the estimate is of primary interest in many practical applications, e.g. as a preliminary step towards the construction of prediction intervals. here, we consider this issue for the random forest. therein, the functional relationship between covariates and the response variable is modeled by a weighted sum of the latter. the dependence structure is, however, involved in the weights that are constructed during the tree construction process, which makes the model complex to analyze mathematically. restricting to l2-consistent random forest models, we provide random forest based residual variance estimators and prove their consistency.
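a natural out-of-bag estimator in this spirit (a sketch of the general idea using scikit-learn's out-of-bag predictions, not necessarily the exact estimator analyzed in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(2000, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.3, size=2000)   # true residual variance = 0.09

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
oob_residuals = y - rf.oob_prediction_    # out-of-bag predictions avoid in-sample optimism
print(np.mean(oob_residuals**2))          # rough estimate of the residual variance; true value 0.3**2 = 0.09
```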
efficient structured matrix recovery and nearly-linear time algorithms for solving inverse symmetric $m$-matrices
in this paper we show how to recover spectral approximations to broad classes of structured matrices using only a polylogarithmic number of adaptive linear measurements to either the matrix or its inverse. leveraging this result we obtain faster algorithms for a variety of linear algebraic problems. key results include: $\bullet$ a nearly linear time algorithm for solving the inverse of symmetric $m$-matrices, a strict superset of laplacians and sdd matrices. $\bullet$ an $\tilde{o}(n^2)$ time algorithm for solving $n \times n$ linear systems that are constant spectral approximations of laplacians or, more generally, sdd matrices. $\bullet$ an $\tilde{o}(n^2)$ algorithm to recover a spectral approximation of an $n$-vertex graph using only $\tilde{o}(1)$ matrix-vector multiplies with its laplacian matrix. the previous best results for each problem either used a trivial number of queries to exactly recover the matrix or a trivial $o(n^\omega)$ running time, where $\omega$ is the matrix multiplication constant. we achieve these results by generalizing recent semidefinite programming based linear sized sparsifier results of lee and sun (2017) and providing iterative methods inspired by the semistreaming sparsification results of kapralov, lee, musco, musco and sidford (2014) and the input sparsity time linear system solving results of li, miller, and peng (2013). we hope that by initiating the study of these natural problems, expanding the robustness and scope of recent nearly linear time linear system solving research, and providing general matrix recovery machinery, this work may serve as a stepping stone for faster algorithms.
generative adversarial networks for generation and classification of physical rehabilitation movement episodes
this article proposes a method for mathematical modeling of human movements related to patient exercise episodes performed during physical therapy sessions by using artificial neural networks. the generative adversarial network structure is adopted, whereby a discriminative and a generative model are trained concurrently in an adversarial manner. different network architectures are examined, with the discriminative and generative models structured as deep subnetworks of hidden layers comprised of convolutional or recurrent computational units. the models are validated on a data set of human movements recorded with an optical motion tracker. the results demonstrate an ability of the networks for classification of new instances of motions, and for generation of motion examples that resemble the recorded motion sequences.
a bandit approach to maximum inner product search
there has been substantial research on sub-linear time approximate algorithms for maximum inner product search (mips). to achieve fast query time, state-of-the-art techniques require significant preprocessing, which can be a burden when the number of subsequent queries is not sufficiently large to amortize the cost. furthermore, existing methods do not have the ability to directly control the suboptimality of their approximate results with theoretical guarantees. in this paper, we propose the first approximate algorithm for mips that does not require any preprocessing, and allows users to control and bound the suboptimality of the results. we cast mips as a best arm identification problem, and introduce a new bandit setting that can fully exploit the special structure of mips. our approach outperforms state-of-the-art methods on both synthetic and real-world datasets.
bernoulli ballot polling: a manifest improvement for risk-limiting audits
we present a method and software for ballot-polling risk-limiting audits (rlas) based on bernoulli sampling: ballots are included in the sample with probability $p$, independently. bernoulli sampling has several advantages: (1) it does not require a ballot manifest; (2) it can be conducted independently at different locations, rather than requiring a central authority to select the sample from the whole population of cast ballots or requiring stratified sampling; (3) it can start in polling places on election night, before margins are known. if the reported margins for the 2016 u.s. presidential election are correct, a bernoulli ballot-polling audit with a risk limit of 5% and a sampling rate of $p_0 = 1\%$ would have had at least a 99% probability of confirming the outcome in 42 states. (the other states were more likely to have needed to examine additional ballots.) logistical and security advantages that auditing in the polling place affords may outweigh the cost of examining more ballots than some other methods might require.
pac learning guarantees under covariate shift
we consider the domain adaptation problem, also known as the covariate shift problem, where the distributions that generate the training and test data differ while retaining the same labeling function. this problem occurs across a large range of practical applications, and is related to the more general challenge of transfer learning. most recent work on the topic focuses on optimization techniques that are specific to an algorithm or practical use case rather than a more general approach. the sparse literature attempting to provide general bounds seems to suggest that efficient learning even under strong assumptions is not possible for covariate shift. our main contribution is to recontextualize these results by showing that any probably approximately correct (pac) learnable concept class is still pac learnable under covariate shift conditions with only a polynomial increase in the number of training samples. this approach essentially demonstrates that the domain adaptation learning problem is as hard as the underlying pac learning problem, provided some conditions over the training and test distributions. we also present bounds for the rejection sampling algorithm, justifying it as a solution to the domain adaptation problem in certain scenarios.
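as a hedged illustration of the rejection sampling algorithm mentioned at the end (assuming, for the sketch only, known one-dimensional gaussian training and test covariate densities and a constant m bounding their ratio), training samples can be thinned so that the accepted ones follow the test distribution:

```python
import numpy as np
from scipy.stats import norm

p = norm(loc=0.0, scale=1.5)      # training covariate distribution (assumed known)
q = norm(loc=1.0, scale=1.0)      # test covariate distribution (assumed known)

rng = np.random.default_rng(5)
x_train = p.rvs(size=20000, random_state=0)
M = 4.0                            # assumed upper bound on q(x)/p(x); the true maximum here is about 2.3
accept = rng.uniform(size=x_train.size) < q.pdf(x_train) / (M * p.pdf(x_train))
x_resampled = x_train[accept]      # approximately distributed as q; labels of accepted points are reused
print(x_resampled.mean(), x_resampled.std())   # should be near 1.0 and 1.0
```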
connecting spectral clustering to maximum margins and level sets
we study the connections between spectral clustering and the problems of maximum margin clustering, and estimation of the components of level sets of a density function. specifically, we obtain bounds on the eigenvectors of graph laplacian matrices in terms of the between cluster separation, and within cluster connectivity. these bounds ensure that the spectral clustering solution converges to the maximum margin clustering solution as the scaling parameter is reduced towards zero. the sensitivity of maximum margin clustering solutions to outlying points is well known, but can be mitigated by first removing such outliers, and applying maximum margin clustering to the remaining points. if outliers are identified using an estimate of the underlying probability density, then the remaining points may be seen as an estimate of a level set of this density function. we show that such an approach can be used to consistently estimate the components of the level sets of a density function under very mild assumptions.
human pose and path estimation from aerial video using dynamic classifier selection
we consider the problem of estimating human pose and trajectory by an aerial robot with a monocular camera in near real time. we present a preliminary solution whose distinguishing feature is a dynamic classifier selection architecture. in our solution, each video frame is corrected for perspective using projective transformation. then, two alternative feature sets are used: (i) histogram of oriented gradients (hog) of the silhouette, (ii) convolutional neural network (cnn) features of the rgb image. the features (hog or cnn) are classified using a dynamic classifier. a class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence. our solution provides three main advantages: (i) classification is efficient due to dynamic selection (4-class vs. 64-class classification). (ii) classification errors are confined to neighbors of the true view-points. (iii) the robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes. experiments conducted on both fronto-parallel videos and aerial videos confirm our solution can achieve accurate pose and trajectory estimation for both scenarios. we found using hog features provides higher accuracy than using cnn features. for example, applying the hog-based variant of our scheme to the 'walking on a figure 8-shaped path' dataset (1652 frames) achieved estimation accuracies of 99.6% for viewpoints and 96.2% for number of poses.
a logarithmic barrier method for proximal policy optimization
proximal policy optimization (ppo) has been proposed as a first-order optimization method for reinforcement learning. we note that it relies on an exterior penalty method: the minimizers of exterior penalty functions often approach feasibility only in the limit as the penalty parameter grows large, which may result in low sampling efficiency. we therefore propose proximal policy optimization with a barrier method (ppo-b), which retains almost all of the advantages of ppo, such as easy implementation and good generalization. specifically, a new surrogate objective with an interior penalty method is proposed to avoid the defect arising from the exterior penalty method. we conclude that ppo-b outperforms ppo in terms of sampling efficiency, since ppo-b achieved clearly better performance than ppo on atari and mujoco environments.
higher-order spectral clustering under superimposed stochastic block model
higher-order motif structures and multi-vertex interactions are becoming increasingly important in studies that aim to improve our understanding of functionalities and evolution patterns of networks. to elucidate the role of higher-order structures in community detection problems over complex networks, we introduce the notion of a superimposed stochastic block model (supsbm). the model is based on a random graph framework in which certain higher-order structures or subgraphs are generated through an independent hyperedge generation process, and are then replaced with graphs that are superimposed with directed or undirected edges generated by an inhomogeneous random graph model. consequently, the model introduces controlled dependencies between edges which allow for capturing more realistic network phenomena, namely strong local clustering in a sparse network, short average path length, and community structure. we proceed to rigorously analyze the performance of a number of recently proposed higher-order spectral clustering methods on the supsbm. in particular, we prove non-asymptotic upper bounds on the misclustering error of spectral community detection for a supsbm setting in which triangles or 3-uniform hyperedges are superimposed with undirected edges. as part of our analysis, we also derive new bounds on the misclustering error of higher-order spectral clustering methods for the standard sbm and the 3-uniform hypergraph sbm. furthermore, for a non-uniform hypergraph sbm model in which one directly observes both edges and 3-uniform hyperedges, we obtain a criterion that describes when to perform spectral clustering based on edges and when on hyperedges, based on a function of hyperedge density and observation quality.
$\ell_0$-motivated low-rank sparse subspace clustering
in many applications, high-dimensional data points can be well represented by low-dimensional subspaces. to identify the subspaces, it is important to capture a global and local structure of the data which is achieved by imposing low-rank and sparseness constraints on the data representation matrix. in low-rank sparse subspace clustering (lrssc), nuclear and $\ell_1$ norms are used to measure rank and sparsity. however, the use of nuclear and $\ell_1$ norms leads to an overpenalized problem and only approximates the original problem. in this paper, we propose two $\ell_0$ quasi-norm based regularizations. first, the paper presents regularization based on multivariate generalization of minimax-concave penalty (gmc-lrssc), which contains the global minimizers of $\ell_0$ quasi-norm regularized objective. afterward, we introduce the schatten-0 ($s_0$) and $\ell_0$ regularized objective and approximate the proximal map of the joint solution using a proximal average method ($s_0/\ell_0$-lrssc). the resulting nonconvex optimization problems are solved using alternating direction method of multipliers with established convergence conditions of both algorithms. results obtained on synthetic and four real-world datasets show the effectiveness of gmc-lrssc and $s_0/\ell_0$-lrssc when compared to state-of-the-art methods.
computational eeg in personalized medicine: a study in parkinson's disease
recordings of electrical brain activity carry information about a person's cognitive health. for recording eeg signals, a very common setting is for a subject to be at rest with their eyes closed. analysis of these recordings often involves a dimensionality reduction step in which electrodes are grouped into 10 or more regions (depending on the number of electrodes available). an average over each group is then taken, which serves as a feature in subsequent evaluation. currently, the most prominent features used in clinical practice are based on spectral power densities. in our work we consider a simplified grouping of electrodes into two regions only. in addition to spectral features we introduce a secondary, non-redundant view on brain activity through the lens of tsallis entropy $s_{q=2}$. we further take eeg measurements not only in an eyes closed (ec) but also in an eyes open (eo) state. for our cohort of healthy controls (hc) and individuals suffering from parkinson's disease (pd), the question we are asking is the following: how well can one discriminate between hc and pd within this simplified, binary grouping? this question is motivated by the commercial availability of inexpensive and easy to use portable eeg devices. if enough information is retained in this binary grouping, then such simple devices could potentially be used as personal monitoring tools, as standard screening tools by general practitioners or as digital biomarkers for easy long term monitoring during neurological studies.
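a small sketch of the tsallis entropy feature, using $s_{q=2} = 1 - \sum_i p_i^2$ of a normalized power spectrum; the choice of welch's method and of the spectrum as the underlying probability distribution is an illustrative assumption, not necessarily the paper's pipeline:

```python
import numpy as np
from scipy.signal import welch

def tsallis_entropy_q2(signal, fs):
    """Tsallis entropy S_{q=2} = 1 - sum(p_i^2) of the normalized power spectrum."""
    f, pxx = welch(signal, fs=fs, nperseg=256)
    p = pxx / pxx.sum()
    return 1.0 - np.sum(p**2)

rng = np.random.default_rng(6)
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # 10 Hz "alpha" rhythm plus noise
print(tsallis_entropy_q2(eeg_like, fs))
```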
learning student networks via feature embedding
deep convolutional neural networks have been widely used in numerous applications, but their demanding storage and computational resource requirements prevent their deployment on mobile devices. knowledge distillation aims to optimize a portable student network by taking the knowledge from a well-trained heavy teacher network. traditional teacher-student methods typically rely on additional fully-connected layers to bridge intermediate layers of the teacher and student networks, which brings in a large number of auxiliary parameters. in contrast, this paper aims to propagate information from teacher to student without introducing new variables that need to be optimized. we regard the teacher-student paradigm from a new perspective of feature embedding. by introducing the locality preserving loss, the student network is encouraged to generate low-dimensional features which inherit intrinsic properties of their corresponding high-dimensional features from the teacher network. the resulting portable network can thus naturally maintain the performance of the teacher network. theoretical analysis is provided to justify the lower computational complexity of the proposed method. experiments on benchmark datasets and well-trained networks suggest that the proposed algorithm is superior to state-of-the-art teacher-student learning methods in terms of computational and storage complexity.
semi-supervised mp-mri data synthesis with stitchlayer and auxiliary distance maximization
in this paper, we address the problem of synthesizing multi-parameter magnetic resonance imaging (mp-mri) data, i.e. apparent diffusion coefficients (adc) and t2-weighted (t2w), containing clinically significant (cs) prostate cancer (pca) via semi-supervised adversarial learning. specifically, our synthesizer generates mp-mri data in a sequential manner: first generating adc maps from 128-d latent vectors, followed by translating them to the t2w images. the synthesizer is trained in a semisupervised manner. in the supervised training process, a limited amount of paired adc-t2w images and the corresponding adc encodings are provided and the synthesizer learns the paired relationship by explicitly minimizing the reconstruction losses between synthetic and real images. to avoid overfitting limited adc encodings, an unlimited amount of random latent vectors and unpaired adc-t2w images are utilized in the unsupervised training process for learning the marginal image distributions of real images. to improve the robustness of synthesizing, we decompose the difficult task of generating full-size images into several simpler tasks which generate sub-images only. a stitchlayer is then employed to fuse sub-images together in an interlaced manner into a full-size image. to enforce the synthetic images to indeed contain distinguishable cs pca lesions, we propose to also maximize an auxiliary distance of jensen-shannon divergence (jsd) between cs and noncs images. experimental results show that our method can effectively synthesize a large variety of mpmri images which contain meaningful cs pca lesions, display a good visual quality and have the correct paired relationship. compared to the state-of-the-art synthesis methods, our method achieves a significant improvement in terms of both visual and quantitative evaluation metrics.
designing adversarially resilient classifiers using resilient feature engineering
we provide a methodology, resilient feature engineering, for creating adversarially resilient classifiers. according to existing work, adversarial attacks identify weakly correlated or non-predictive features learned by the classifier during training and design the adversarial noise to utilize these features. therefore, highly predictive features should be used first during classification in order to determine the set of possible output labels. our methodology focuses the problem of designing resilient classifiers into a problem of designing resilient feature extractors for these highly predictive features. we provide two theorems, which support our methodology. the serial composition resilience and parallel composition resilience theorems show that the output of adversarially resilient feature extractors can be combined to create an equally resilient classifier. based on our theoretical results, we outline the design of an adversarially resilient classifier.
not using the car to see the sidewalk: quantifying and controlling the effects of context in classification and segmentation
importance of visual context in scene understanding tasks is well recognized in the computer vision community. however, to what extent the computer vision models for image classification and semantic segmentation are dependent on the context to make their predictions is unclear. a model overly relying on context will fail when encountering objects in context distributions different from training data and hence it is important to identify these dependencies before we can deploy the models in the real-world. we propose a method to quantify the sensitivity of black-box vision models to visual context by editing images to remove selected objects and measuring the response of the target models. we apply this methodology on two tasks, image classification and semantic segmentation, and discover undesirable dependency between objects and context, for example that "sidewalk" segmentation relies heavily on "cars" being present in the image. we propose an object removal based data augmentation solution to mitigate this dependency and increase the robustness of classification and segmentation models to contextual variations. our experiments show that the proposed data augmentation helps these models improve the performance in out-of-context scenarios, while preserving the performance on regular data.
heuristics for efficient sparse blind source separation
sparse blind source separation (sparse bss) is a key method to analyze multichannel data in fields ranging from medical imaging to astrophysics. however, since it relies on seeking the solution of a non-convex penalized matrix factorization problem, its performances largely depend on the optimization strategy. in this context, proximal alternating linearized minimization (palm) has become a standard algorithm which, despite its theoretical grounding, generally provides poor practical separation results. in this work, we propose a novel strategy that combines a heuristic approach with palm. we show its relevance on realistic astrophysical data.
bayesian optimization in alphago
during the development of alphago, its many hyper-parameters were tuned with bayesian optimization multiple times. this automatic tuning process resulted in substantial improvements in playing strength. for example, prior to the match with lee sedol, we tuned the latest alphago agent and this improved its win-rate from 50% to 66.5% in self-play games. this tuned version was deployed in the final match. of course, since we tuned alphago many times during its development cycle, the compounded contribution was even higher than this percentage. it is our hope that this brief case study will be of interest to go fans, and also provide bayesian optimization practitioners with some insights and inspiration.
briarpatches: pixel-space interventions for inducing demographic parity
we introduce the briarpatch, a pixel-space intervention that obscures sensitive attributes from representations encoded in pre-trained classifiers. the patches encourage internal model representations not to encode sensitive information, which has the effect of pushing downstream predictors towards exhibiting demographic parity with respect to the sensitive information. the net result is that these briarpatches provide an intervention mechanism available at user level, and complements prior research on fair representations that were previously only applicable by model developers and ml experts.
generalizations of ripley's k-function with application to space curves
the intensity function and ripley's k-function have been used extensively in the literature to describe the first and second moment structure of spatial point sets. this has many applications, including describing the statistical structure of synaptic vesicles. some attempts have been made to extend ripley's k-function to curve pieces. such an extension can be used to describe the statistical structure of muscle fibers and brain fiber tracks. in this paper, we take a computational perspective and construct new and very general variants of ripley's k-function for curve pieces, surface patches, etc. we discuss the method from [chiu, stoyan, kendall, & mecke 2013] and compare it with our generalizations theoretically, and we give examples demonstrating the difference in their ability to separate sets of curve pieces.
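for reference, a naive estimator of the classical point-pattern k-function (no edge correction; the curve-piece generalizations in the paper are more involved):

```python
import numpy as np

def ripley_k(points, r_values, area):
    """Naive Ripley's K for a 2-d point pattern:
    K(r) = area / (n*(n-1)) * #{ordered pairs i != j with d_ij <= r}."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.array([area * np.sum(d <= r) / (n * (n - 1)) for r in r_values])

rng = np.random.default_rng(7)
pts = rng.uniform(0, 1, size=(500, 2))     # homogeneous point pattern in the unit square
r = np.array([0.05, 0.10, 0.15])
print(ripley_k(pts, r, area=1.0))          # under complete spatial randomness K(r) ~ pi * r^2
print(np.pi * r**2)
```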
user association and load balancing for massive mimo through deep learning
this work investigates the use of deep learning to perform user-cell association for sum-rate maximization in massive mimo networks. it is shown how a deep neural network can be trained to approach the optimal association rule with a much more limited computational complexity, thus enabling the association rule to be updated in real time on the basis of the mobility patterns of users. in particular, the proposed neural network design requires as input only the users' geographical positions. numerical results show that it guarantees the same performance as traditional optimization-oriented methods.
multi instance learning for unbalanced data
in the context of multi instance learning, we analyze the single instance (si) learning objective. we show that when the data is unbalanced and the family of classifiers is sufficiently rich, the si method is a useful learning algorithm. in particular, we show that larger data imbalance, a quality that is typically perceived as negative, in fact implies a better resilience of the algorithm to the statistical dependencies of the objects in bags. in addition, our results shed new light on some known issues with the si method in the setting of linear classifiers, and we show that these issues are significantly less likely to occur in the setting of neural networks. we demonstrate our results on a synthetic dataset, and on the coco dataset for the problem of patch classification with weak image level labels derived from captions.
robustness of the sobol' indices to marginal distribution uncertainty
global sensitivity analysis (gsa) quantifies the influence of uncertain variables in a mathematical model. the sobol' indices, a commonly used tool in gsa, seek to do this by attributing to each variable its relative contribution to the variance of the model output. in order to compute sobol' indices, the user must specify a probability distribution for the uncertain variables. this distribution is typically unknown and must be chosen using limited data and/or knowledge. the usefulness of the sobol' indices depends on their robustness to this distributional uncertainty. this article presents a novel method which uses "optimal perturbations" of the marginal probability density functions to analyze the robustness of the sobol' indices. the method is illustrated through synthetic examples and a model for contaminant transport.
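a brute-force sketch of a first-order sobol' index, using a double-loop monte carlo estimate with independent uniform inputs (an illustrative choice of model and input distribution, not the article's setting):

```python
import numpy as np

def first_order_sobol(f, dim, i, n_outer=2000, n_inner=200, rng=None):
    """Brute-force first-order index S_i = Var_{x_i}( E[f | x_i] ) / Var(f), inputs ~ U(0,1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.uniform(size=(n_inner, dim))
        x[:, i] = rng.uniform()               # freeze x_i, average over the other inputs
        cond_means[k] = f(x).mean()
    total_var = f(rng.uniform(size=(n_outer * n_inner, dim))).var()
    return cond_means.var() / total_var

toy_model = lambda x: np.sin(2 * np.pi * x[:, 0]) + 0.2 * np.sin(2 * np.pi * x[:, 1])
print(first_order_sobol(toy_model, dim=2, i=0))   # x_0 dominates the output variance (~0.96)
```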
channel-wise pruning of neural networks with tapering resource constraint
neural network pruning is an important step in design process of efficient neural networks for edge devices with limited computational power. pruning is a form of knowledge transfer from the weights of the original network to a smaller target subnetwork. we propose a new method for compute-constrained structured channel-wise pruning of convolutional neural networks. the method iteratively fine-tunes the network, while gradually tapering the computation resources available to the pruned network via a holonomic constraint in the method of lagrangian multipliers framework. an explicit and adaptive automatic control over the rate of tapering is provided. the trainable parameters of our pruning method are separate from the weights of the neural network, which allows us to avoid the interference with the neural network solver (e.g. avoid the direct dependence of pruning speed on neural network learning rates). our method combines the `rigoristic' approach by the direct application of constrained optimization, avoiding the pitfalls of admm-based methods, like their need to define the target amount of resources for each pruning run, and direct dependence of pruning speed and priority of pruning on the relative scale of weights between layers. for vgg-16 @ ilsvrc-2012, we achieve reduction of 15.47 -> 3.87 gmac with only 1% top-1 accuracy reduction (68.4% -> 67.4%). for alexnet @ ilsvrc-2012, we achieve 0.724 -> 0.411 gmac with 1% top-1 accuracy reduction (56.8% -> 55.8%).
an empiric-stochastic approach, based on normalization parameters, to simulate solar irradiance
the acquisition of solar radiation data in a locality is essential for the development of efficient designs of systems whose operation is based on solar energy. this paper presents a methodology to estimate solar irradiance using an empiric-stochastic approach, which consists of the computation of normalization parameters from solar irradiance data. for this study, solar irradiance data were collected with a weather station during a year. post-processing included a trimmed moving average to smooth the data, a fitting procedure using a simple model to recover normalization parameters, and the estimation of a probability density map by means of a kernel density estimation method. the normalization parameters and the probability density map allowed us to build an empiric-stochastic methodology that generates an estimate of the solar irradiance. in order to validate our method, simulated solar irradiance has been used to compute the theoretical generation of solar power, which in turn has been compared to experimental data retrieved from a commercial photovoltaic system. since the simulation results show good agreement with the experimental data, this simple methodology can estimate the solar power production and may help consumers to design and test a photovoltaic system before installation.
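a small sketch of the kernel density estimation step (synthetic hour-of-day and normalized-irradiance data; the real inputs would be the normalized measurements described above):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
hour = rng.uniform(6, 18, size=5000)
norm_irr = np.clip(np.sin(np.pi * (hour - 6) / 12) + rng.normal(0, 0.1, hour.size), 0, None)

# joint density of (hour, normalized irradiance); evaluating it on a grid gives the probability density map
kde = gaussian_kde(np.vstack([hour, norm_irr]))
grid_h, grid_i = np.meshgrid(np.linspace(6, 18, 50), np.linspace(0, 1.2, 50))
density_map = kde(np.vstack([grid_h.ravel(), grid_i.ravel()])).reshape(grid_h.shape)
print(density_map.shape, density_map.max())
```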
fuzzy hashing as perturbation-consistent adversarial kernel embedding
measuring the similarity of two files is an important task in malware analysis, with fuzzy hash functions being a popular approach. traditional fuzzy hash functions are data agnostic: they do not learn from a particular dataset how to determine similarity; their behavior is fixed across all datasets. in this paper, we demonstrate that fuzzy hash functions can be learned in a novel minimax training framework and that these learned fuzzy hash functions outperform traditional fuzzy hash functions at the file similarity task for portable executable files. in our approach, hash digests can be extracted from the kernel embeddings of two kernel networks, trained in a minimax framework, where the roles of players during training (i.e adversary versus generator) alternate along with the input data. we refer to this new minimax architecture as perturbation-consistent. the similarity score for a pair of files is the utility of the minimax game in equilibrium. our experiments show that learned fuzzy hash functions generalize well, capable of determining that two files are similar even when one of those files was generated using insertion and deletion operations.
deep learning with attention to predict gestational age of the fetal brain
fetal brain imaging is a cornerstone of prenatal screening and early diagnosis of congenital anomalies. knowledge of fetal gestational age is the key to the accurate assessment of brain development. this study develops an attention-based deep learning model to predict gestational age of the fetal brain. the proposed model is an end-to-end framework that combines key insights from multi-view mri including axial, coronal, and sagittal views. the model also uses age-activated weakly-supervised attention maps to enable rotation-invariant localization of the fetal brain among background noise. we evaluate our methods on the collected fetal brain mri cohort with a large age distribution from 125 to 273 days. our extensive experiments show age prediction performance with r2 = 0.94 using multi-view mri and attention.
style transfer and extraction for the handwritten letters using deep learning
how can we learn, transfer and extract handwriting styles using deep neural networks? this paper explores these questions using a deep conditioned autoencoder on the iron-off handwriting data-set. we perform three experiments that systematically explore the quality of our style extraction procedure. first, we compare our model to handwriting benchmarks using multidimensional performance metrics. second, we explore the quality of style transfer, i.e. how the model performs on new, unseen writers. in both experiments, we improve the metrics of state of the art methods by a large margin. lastly, we analyze the latent space of our model, and we see that it separates consistently writing styles.
retinal vessel segmentation based on fully convolutional neural networks
the retinal vascular condition is a reliable biomarker of several ophthalmologic and cardiovascular diseases, so automatic vessel segmentation may be crucial to diagnose and monitor them. in this paper, we propose a novel method that combines the multiscale analysis provided by the stationary wavelet transform with a multiscale fully convolutional neural network to cope with the varying width and direction of the vessel structure in the retina. our proposal uses rotation operations as the basis of a joint strategy for both data augmentation and prediction, which allows us to explore the information learned during training to refine the segmentation. the method was evaluated on three publicly available databases, achieving an average accuracy of 0.9576, 0.9694, and 0.9653, and average area under the roc curve of 0.9821, 0.9905, and 0.9855 on the drive, stare, and chase_db1 databases, respectively. it also appears to be robust to the training set and to the inter-rater variability, which shows its potential for real-world applications.
a unifying framework of high-dimensional sparse estimation with difference-of-convex (dc) regularizations
under the linear regression framework, we study the variable selection problem when the underlying model is assumed to have a small number of nonzero coefficients (i.e., the underlying linear model is sparse). non-convex penalties in specific forms are well studied in the literature for sparse estimation. a recent work \cite{ahn2016difference} has pointed out that nearly all existing non-convex penalties can be represented as difference-of-convex (dc) functions, i.e., they can be expressed as the difference of two convex functions even though they may not themselves be convex. there is a large existing literature on optimization problems whose objectives and/or constraints involve dc functions, and efficient numerical solutions have been proposed. under the dc framework, directional-stationary (d-stationary) solutions are considered, and they are usually not unique. in this paper, we show that under some mild conditions, a certain subset of d-stationary solutions in an optimization problem (with a dc objective) has some ideal statistical properties: namely, asymptotic estimation consistency, asymptotic model selection consistency, and asymptotic efficiency. these are the properties that have been proven by many researchers for a range of proposed non-convex penalties in sparse estimation. our assumptions are either weaker than or comparable with the conditions adopted in other existing works. this work shows that dc is a useful framework that offers a unified approach to these existing works involving non-convex penalties. our work bridges the communities of optimization and statistics.
globalness detection in online social network
classification problems have made significant progress due to the maturity of artificial intelligence (ai). however, differentiating items from categories without noticeable boundaries is still a huge challenge for machines -- which is also crucial for machines to be intelligent. in order to study this fuzzy classification concept, we define and propose globalness detection with a four-stage operational flow. we then demonstrate our framework on the facebook public pages inter-like graph with geo-location. our prediction algorithm achieves high precision (89%) and recall (88%) for local pages. we evaluate the results at both the state and country level, finding that the global node ratios are relatively high in states (ny, ca) that have large, international cities. several examples of global nodes have also been shown and studied in this paper. it is our hope that our results illustrate the value of studying such fuzzy classification problems and provide a better understanding of global and local nodes in online social networks (osns).
anomaly detection and interpretation using multimodal autoencoder and sparse optimization
automated anomaly detection is essential for managing information and communications technology (ict) systems to maintain reliable services with minimum burden on operators. for detecting varying and continually emerging anomalies as differences from normal states, learning normal relationships inherent among cross-domain data monitored from ict systems is essential. deep-learning-based anomaly detection using an autoencoder (ae) is therefore promising for such complicated learning; however, its interpretation is still problematic. since the dimensions of the input data contributing to the detected anomaly are not directly indicated in an ae, they are not suitable for localizing anomalies in large ict systems composed of a huge amount of equipment. we propose an algorithm using sparse optimization for estimating contributing dimensions to anomalies detected with aes. we also propose a multimodal ae (mae) for effectively learning the relationships among cross-domain data, which can induce nonlinearity and differences in learnability among data types. we evaluated our algorithms with several datasets including real measured data in comparison with conventional algorithms and confirmed the superiority of our estimation algorithm in specifying contributing dimensions of anomalous data and our mae in detecting anomalies in cross-domain data.
two birds with one network: unifying failure event prediction and time-to-failure modeling
one of the key challenges in predictive maintenance is to predict the impending downtime of equipment with a reasonable prediction horizon so that countermeasures can be put in place. classically, this problem has been posed in two different ways which are typically solved independently: (1) remaining useful life (rul) estimation as a long-term prediction task to estimate how much time is left in the useful life of the equipment and (2) failure prediction (fp) as a short-term prediction task to assess the probability of a failure within a pre-specified time window. as these two tasks are related, performing them separately is sub-optimal and might result in inconsistent predictions for the same equipment. in order to alleviate these issues, we propose two methods: the deep weibull model (dw-rnn) and multi-task learning (mtl-rnn). dw-rnn is able to learn the underlying failure dynamics by fitting weibull distribution parameters using a deep neural network trained with a survival likelihood, without training directly on each task. while dw-rnn makes an explicit assumption on the data distribution, mtl-rnn exploits the implicit relationship between the long-term rul and short-term fp tasks to learn the underlying distribution. additionally, both our methods can leverage the non-failed equipment data for rul estimation. we demonstrate that our methods consistently outperform baseline rul methods that can be used for fp while producing consistent results for rul and fp. we also show that our methods perform on par with baselines trained on the objectives optimized for either of the two tasks.
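for concreteness, the standard right-censored weibull survival likelihood that a model of this kind would minimize (a generic stand-in, not the exact dw-rnn loss): failed units contribute log f(t) and censored, non-failed units contribute log s(t):

```python
import numpy as np

def weibull_nll(t, failed, scale, shape):
    """Right-censored Weibull negative log-likelihood with scale lambda and shape k."""
    z = (t / scale) ** shape
    log_f = np.log(shape / scale) + (shape - 1) * np.log(t / scale) - z   # density for observed failures
    log_s = -z                                                            # survival for censored units
    return -np.sum(np.where(failed, log_f, log_s))

rng = np.random.default_rng(9)
true_scale, true_shape = 100.0, 1.8
t_fail = true_scale * rng.weibull(true_shape, size=500)
censor = rng.uniform(50, 200, size=500)
t_obs = np.minimum(t_fail, censor)
failed = t_fail <= censor
print(weibull_nll(t_obs, failed, scale=100.0, shape=1.8),
      weibull_nll(t_obs, failed, scale=60.0, shape=1.0))   # the true parameters typically give the lower loss
```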
interactive naming for explaining deep neural networks: a formative study
we consider the problem of explaining the decisions of deep neural networks for image recognition in terms of human-recognizable visual concepts. in particular, given a test set of images, we aim to explain each classification in terms of a small number of image regions, or activation maps, which have been associated with semantic concepts by a human annotator. this allows for generating summary views of the typical reasons for classifications, which can help build trust in a classifier and/or identify example types for which the classifier may not be trusted. for this purpose, we developed a user interface for "interactive naming," which allows a human annotator to manually cluster significant activation maps in a test set into meaningful groups called "visual concepts". the main contribution of this paper is a systematic study of the visual concepts produced by five human annotators using the interactive naming interface. in particular, we consider the adequacy of the concepts for explaining the classification of test-set images, correspondence of the concepts to activations of individual neurons, and the inter-annotator agreement of visual concepts. we find that a large fraction of the activation maps have recognizable visual concepts, and that there is significant agreement between the different annotators about their denotations. our work is an exploratory study of the interplay between machine learning and human recognition mediated by visualizations of the results of learning.
gaussian process mixtures for estimating heterogeneous treatment effects
we develop a gaussian-process mixture model for heterogeneous treatment effect estimation that leverages the use of transformed outcomes. the approach we will present attempts to improve point estimation and uncertainty quantification relative to past work that has used transformed variable related methods as well as traditional outcome modeling. earlier work on modeling treatment effect heterogeneity using transformed outcomes has relied on tree based methods such as single regression trees and random forests. under the umbrella of non-parametric models, outcome modeling has been performed using bayesian additive regression trees and various flavors of weighted single trees. these approaches work well when large samples are available, but suffer in smaller samples where results are more sensitive to model misspecification - our method attempts to garner improvements in inference quality via a correctly specified model rooted in bayesian non-parametrics. furthermore, while we begin with a model that assumes that the treatment assignment mechanism is known, an extension where it is learnt from the data is presented for applications to observational studies. our approach is applied to simulated and real data to demonstrate our theorized improvements in inference with respect to two causal estimands: the conditional average treatment effect and the average treatment effect. by leveraging our correctly specified model, we are able to more accurately estimate the treatment effects while reducing their variance.
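a minimal sketch of the transformed-outcome idea with a plain gp regression (known propensity 0.5 as in a randomized trial, scikit-learn's gp, synthetic data; this is not the paper's mixture model):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# With known propensity e(x), the transformed outcome Y* = W*Y/e - (1-W)*Y/(1-e) has
# conditional mean equal to the CATE, so regressing Y* on X estimates treatment effect heterogeneity.
rng = np.random.default_rng(10)
n = 500
X = rng.uniform(-2, 2, size=(n, 1))
W = rng.integers(0, 2, size=n)
tau = 1.0 + X[:, 0]                                   # true CATE
Y = X[:, 0] ** 2 + W * tau + rng.normal(0, 0.5, n)

e = 0.5
Y_star = W * Y / e - (1 - W) * Y / (1 - e)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, Y_star)
x_grid = np.linspace(-2, 2, 5).reshape(-1, 1)
print(gp.predict(x_grid))                             # noisy, but roughly tracks the true CATE 1 + x
```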
autoencoder based architecture for fast & real time audio style transfer
recently, there has been great interest in the field of audio style transfer, where a stylized audio is generated by imposing the style of a reference audio on the content of a target audio. we improve on current approaches, which use neural networks to extract the content and the style of the audio signal, and propose a new autoencoder-based architecture for the task. this network generates a stylized audio for a content audio in a single forward pass. the proposed network architecture proves advantageous in terms of the quality of the audio produced and the time taken to train the network. the network is evaluated on speech signals to confirm the validity of our proposal.
toward multimodal model-agnostic meta-learning
gradient-based meta-learners such as maml are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. one important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. in this paper, we augment maml with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. specifically, we propose a multimodal maml algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. we evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. the results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
robust functional anova model with t-process
robust estimation approaches are of fundamental importance for statistical modelling. to reduce susceptibility to outliers, we propose a robust estimation procedure with a t-process under the functional anova model. besides the common mean structure of the studied subjects, their individual characteristics are also informative, especially for prediction. we develop a method to predict the individual effect. statistical properties, such as robustness and information consistency, are studied. numerical studies including simulation and real data examples show that the proposed method performs well.