title | abstract |
---|---|
an introduction to deep reinforcement learning | deep reinforcement learning is the combination of reinforcement learning (rl) and deep learning. this field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. thus, deep rl opens up many new applications in domains such as healthcare, robotics, smart grids, finance, and many more. this manuscript provides an introduction to deep reinforcement learning models, algorithms and techniques. particular focus is on the aspects related to generalization and how deep rl can be used for practical applications. we assume the reader is familiar with basic machine learning concepts. |
eigenvalue corrected noisy natural gradient | variational bayesian neural networks combine the flexibility of deep learning with bayesian uncertainty estimation. however, inference procedures for flexible variational posteriors are computationally expensive. a recently proposed method, noisy natural gradient, is a surprisingly simple way to fit expressive posteriors by adding weight noise to regular natural gradient updates. noisy k-fac is an instance of noisy natural gradient that fits a matrix-variate gaussian posterior with minor changes to ordinary k-fac. nevertheless, a matrix-variate gaussian posterior does not capture an accurate diagonal variance. in this work, we extend noisy k-fac to obtain a more flexible posterior distribution called eigenvalue corrected matrix-variate gaussian. the proposed method computes the full diagonal re-scaling factor in the kronecker-factored eigenbasis. empirically, our approach consistently outperforms existing algorithms (e.g., noisy k-fac) on regression and classification tasks. |
are all training examples created equal? an empirical study | modern computer vision algorithms often rely on very large training datasets. however, it is conceivable that a carefully selected subsample of the dataset is sufficient for training. in this paper, we propose a gradient-based importance measure that we use to empirically analyze relative importance of training images in four datasets of varying complexity. we find that in some cases, a small subsample is indeed sufficient for training. for other datasets, however, the relative differences in importance are negligible. these results have important implications for active learning on deep networks. additionally, our analysis method can be used as a general tool to better understand diversity of training examples in datasets. |
rethinking clinical prediction: why machine learning must consider year of care and feature aggregation | machine learning for healthcare often trains models on de-identified datasets with randomly-shifted calendar dates, ignoring the fact that data were generated under hospital operation practices that change over time. these changing practices induce definitive changes in observed data which confound evaluations which do not account for dates and limit the generalisability of date-agnostic models. in this work, we establish the magnitude of this problem on mimic, a public hospital dataset, and showcase a simple solution. we augment mimic with the year in which care was provided and show that a model trained using standard feature representations will significantly degrade in quality over time. we find a deterioration of 0.3 auc when evaluating mortality prediction on data from 10 years later. we find a similar deterioration of 0.15 auc for length-of-stay. in contrast, we demonstrate that clinically-oriented aggregates of raw features significantly mitigate future deterioration. our suggested aggregated representations, when retrained yearly, have prediction quality comparable to year-agnostic models. |
time aggregation and model interpretation for deep multivariate longitudinal patient outcome forecasting systems in chronic ambulatory care | clinical data for ambulatory care, which accounts for 90% of the nation's healthcare spending, is characterized by relatively small sample sizes of longitudinal data, unequal spacing between visits for each patient, and unequal numbers of data points collected across patients. while deep learning has become state-of-the-art for sequence modeling, it is unknown which methods of time aggregation may be best suited for these challenging temporal use cases. additionally, deep models are often considered uninterpretable by physicians, which may prevent clinical adoption even of well-performing models. we show that time-distributed-dense layers combined with grus produce the most generalizable models. furthermore, we provide a framework for the clinical interpretation of the models. |
active learning in recommendation systems with multi-level user preferences | while recommendation systems generally observe user behavior passively, there has been an increased interest in directly querying users to learn their specific preferences. in such settings, considering queries at different levels of granularity to optimize user information acquisition is crucial to efficiently providing a good user experience. in this work, we study the active learning problem with multi-level user preferences within the collective matrix factorization (cmf) framework. cmf jointly captures multi-level user preferences with respect to items and relations between items (e.g., book genre, cuisine type), generally resulting in improved predictions. motivated by finite-sample analysis of the cmf model, we propose a theoretically optimal active learning strategy based on the fisher information matrix and use this to derive a realizable approximation algorithm for practical recommendations. experiments are conducted using both the yelp dataset directly and an illustrative synthetic dataset in the three settings of personalized active learning, cold-start recommendations, and noisy data -- demonstrating strong improvements over several widely used active learning methods. |
two-sample test of community memberships of weighted stochastic block models | suppose two networks are observed for the same set of nodes, where each network is assumed to be generated from a weighted stochastic block model. this paper considers the problem of testing whether the community memberships of the two networks are the same. a test statistic based on singular subspace distance is developed. under the weighted stochastic block models with dense graphs, the limiting distribution of the proposed test statistic is developed. simulation results show that the test has correct empirical type 1 errors under the dense graphs. the test also behaves as expected in empirical power, showing gradual changes when the intra-block and inter-block distributions are close and achieving 1 when the two distributions are not so close, where the closeness of the two distributions is characterized by renyi divergence of order 1/2. the enron email networks are used to demonstrate the proposed test. |
adversarial examples as an input-fault tolerance problem | we analyze the adversarial examples problem in terms of a model's fault tolerance with respect to its input. whereas previous work focuses on arbitrarily strict threat models, i.e., $\epsilon$-perturbations, we consider arbitrary valid inputs and propose an information-based characteristic for evaluating tolerance to diverse input faults. |
an energy-efficient transaction model for the blockchain-enabled internet of vehicles (iov) | the blockchain is a safe, reliable and innovative mechanism for managing numerous vehicles seeking connectivity. however, following the principles of the blockchain, the number of transactions required to update ledgers poses serious issues for vehicles, as these transactions may consume the maximum available energy. to resolve this, an efficient model is presented in this letter which is capable of handling the energy demands of the blockchain-enabled internet of vehicles (iov) by optimally controlling the number of transactions through distributed clustering. numerical results suggest that the proposed approach is 40.16% better in terms of energy conservation and 82.06% better in terms of the number of transactions required to share the entire blockchain-data compared with the traditional blockchain. |
an interpretable model with globally consistent explanations for credit risk | we propose a possible solution to a public challenge posed by the fair isaac corporation (fico), which is to provide an explainable model for credit risk assessment. rather than present a black box model and explain it afterwards, we provide a globally interpretable model that is as accurate as other neural networks. our "two-layer additive risk model" is decomposable into subscales, where each node in the second layer represents a meaningful subscale, and all of the nonlinearities are transparent. we provide three types of explanations that are simpler than, but consistent with, the global model. one of these explanation methods involves solving a minimum set cover problem to find high-support globally-consistent explanations. we present a new online visualization tool to allow users to explore the global model and its explanations. |
adsas: comprehensive real-time anomaly detection system | with massive data growth, the need for an autonomous and generic anomaly detection system has increased. however, developing a stand-alone generic anomaly detection system that is accurate and fast is still a challenge. in this paper, we propose using conventional time-series analysis approaches, the seasonal autoregressive integrated moving average (sarima) model and seasonal trend decomposition using loess (stl), to detect complex and various anomalies. usually, sarima and stl are used only for stationary and periodic time-series, but we show that by combining them they can detect anomalies with high accuracy even for noisy and non-periodic data. we compared the algorithm to long short-term memory (lstm), a deep-learning-based algorithm used for anomaly detection. we used a total of seven real-world datasets and four artificial datasets with different time-series properties to verify the performance of the proposed algorithm. |
large datasets, bias and model oriented optimal design of experiments | we review recent literature that proposes to adapt ideas from classical model based optimal design of experiments to problems of data selection of large datasets. special attention is given to bias reduction and to protection against confounders. some new results are presented. theoretical and computational comparisons are made. |
practical methods for graph two-sample testing | hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs. recent studies in statistics and learning theory have provided some theoretical insights about such high-dimensional graph testing problems, but the practicality of the developed theoretical methods remains an open question. in this paper, we consider the problem of two-sample testing of large graphs. we demonstrate the practical merits and limitations of existing theoretical tests and their bootstrapped variants. we also propose two new tests based on asymptotic distributions. we show that these tests are computationally less expensive and, in some cases, more reliable than the existing methods. |
measure, manifold, learning, and optimization: a theory of neural networks | we present a formal measure-theoretical theory of neural networks (nn) built on probability coupling theory. our main contributions are summarized as follows. * built on the formalism of probability coupling theory, we derive an algorithm framework, named hierarchical measure group and approximate system (hmgas), nicknamed s-system, that is designed to learn the complex hierarchical, statistical dependency in the physical world. * we show that nns are special cases of s-system when the probability kernels assume certain exponential family distributions. activation functions are derived formally. we further endow geometry on nns through information geometry, show that intermediate feature spaces of nns are stochastic manifolds, and prove that "distance" between samples is contracted as layers stack up. * s-system shows that nns are inherently stochastic, and under a set of realistic boundedness and diversity conditions, it enables us to prove that for large-size nonlinear deep nns with a class of losses, including the hinge loss, all local minima are global minima with zero loss errors, and regions around the minima are flat basins where all eigenvalues of hessians are concentrated around zero, using tools and ideas from mean field theory, random matrix theory, and nonlinear operator equations. * s-system, the information-geometry structure and the optimization behaviors combined complete the analogy between the renormalization group (rg) and nns. it shows that a nn is a complex adaptive system that estimates the statistical dependency of microscopic objects, e.g., pixels, at multiple scales. unlike the clear-cut physical quantities produced by rg in physics, e.g., temperature, nns renormalize/recompose manifolds that emerge through learning/optimization and divide the sample space into highly semantically meaningful groups dictated by supervised labels (in supervised nns). |
optimal uncertainty quantification on moment class using canonical moments | we gain robustness on the quantification of a risk measurement by accounting for all sources of uncertainties tainting the inputs of a computer code. we evaluate the maximum quantile over a class of distributions defined only by constraints on their moments. the methodology is based on the theory of canonical moments that appears to be a well-suited framework for practical optimization. |
generative models for simulating mobility trajectories | mobility datasets are fundamental for evaluating algorithms pertaining to geographic information systems and facilitating experimental reproducibility. but privacy implications restrict sharing such datasets, as even aggregated location-data is vulnerable to membership inference attacks. current synthetic mobility dataset generators attempt to superficially match a priori modeled mobility characteristics which do not accurately reflect the real-world characteristics. modeling human mobility to generate synthetic yet semantically and statistically realistic trajectories is therefore crucial for publishing trajectory datasets having satisfactory utility level while preserving user privacy. specifically, long-range dependencies inherent to human mobility are challenging to capture with both discriminative and generative models. in this paper, we benchmark the performance of recurrent neural architectures (rnns), generative adversarial networks (gans) and nonparametric copulas to generate synthetic mobility traces. we evaluate the generated trajectories with respect to their geographic and semantic similarity, circadian rhythms, long-range dependencies, training and generation time. we also include two sample tests to assess statistical similarity between the observed and simulated distributions, and we analyze the privacy tradeoffs with respect to membership inference and location-sequence attacks. |
naive dictionary on musical corpora: from knowledge representation to pattern recognition | in this paper, we propose and develop the novel idea of treating musical sheets as literary documents in the traditional text analytics parlance, to fully benefit from the vast amount of research already existing in statistical text mining and topic modelling. we specifically introduce the idea of representing any given piece of music as a collection of "musical words" that we codenamed "muselets", which are essentially musical words of various lengths. given the novelty and therefore the extreme difficulty of properly forming a complete version of a dictionary of muselets, the present paper focuses on a simpler albeit naive version of the ultimate dictionary, which we refer to as a naive dictionary because all the words are of the same length. we specifically construct a naive dictionary featuring a corpus made up of african american, chinese, japanese and arabic music, on which we perform both topic modelling and pattern recognition. although some of the results based on the naive dictionary are reasonably good, we anticipate phenomenal predictive performances once we get around to actually building a full scale complete version of our intended dictionary of muselets. |
computing vertex centrality measures in massive real networks with a neural learning model | vertex centrality measures are a multi-purpose analysis tool, commonly used in many application environments to retrieve information and unveil knowledge from graphs and network structural properties. however, the algorithms for such metrics are expensive in terms of computational resources when running real-time applications or massive real-world networks. thus, approximation techniques have been developed and used to compute the measures in such scenarios. in this paper, we demonstrate and analyze the use of neural network learning algorithms to tackle this task and compare their performance in terms of solution quality and computation time with other techniques from the literature. our work offers several contributions. we highlight both the pros and cons of approximating centralities through neural learning. by empirical means and statistics, we then show that the regression model generated with a feedforward neural network trained by the levenberg-marquardt algorithm is not only the best option considering computational resources, but also achieves the best solution quality for relevant applications and large-scale networks. keywords: vertex centrality measures, neural networks, complex network models, machine learning, regression model |
optimal data driven resource allocation under multi-armed bandit observations | this paper introduces the first asymptotically optimal strategy for a multi-armed bandit (mab) model under side constraints. the side constraints model situations in which bandit activations are limited by the availability of certain resources that are replenished at a constant rate. the main result involves the derivation of an asymptotic lower bound for the regret of feasible uniformly fast policies and the construction of policies that achieve this lower bound, under pertinent conditions. further, we provide the explicit form of such policies for the case in which the unknown distributions are normal with unknown means and known variances, for the case of normal distributions with unknown means and unknown variances, and for the case of arbitrary discrete distributions with finite support. |
efficient allocation of law enforcement resources using predictive police patrolling | efficient allocation of scarce law enforcement resources is a hard problem to tackle. in a previous study (forthcoming, barreras et al. (2019)) it has been shown that a simplified version of the self-exciting point process explained in mohler et al. (2011) performs better at predicting crime in the city of bogotá, colombia, than other standard hotspot models such as plain kde or ellipses models. this paper fully implements the mohler et al. (2011) model in the city of bogotá and explains its technological deployment for the city as a tool for the efficient allocation of police resources. |
solar enablement initiative in australia: report on efficiently identifying critical cases for evaluating the voltage impact of large pv investment | the increasing quantity of pv generation connected to distribution networks is creating challenges in maintaining and controlling voltages in those distribution networks. determining the maximum hosting capacity for new pv installations based on historical data is an essential task for distribution networks. analyzing all historical data in large distribution networks is impractical. therefore, this paper focuses on how to time-efficiently identify the critical cases for evaluating the voltage impacts of new large pv applications in medium voltage (mv) distribution networks. a systematic approach is proposed to cluster medium voltage nodes based on electrical adjacency and time blocks. mv nodes are clustered along with the voltage magnitudes and time blocks. critical cases of each cluster can be used for further power flow study. this method is scalable and can time-efficiently identify cases for evaluating pv investment on medium voltage networks. |
advance prediction of ventricular tachyarrhythmias using patient metadata and multi-task networks | we describe a novel neural network architecture for the prediction of ventricular tachyarrhythmias. the model receives input features that capture the change in rr intervals and ectopic beats, along with features based on heart rate variability and frequency analysis. patient age is also included as a trainable embedding, while the whole network is optimized with multi-task objectives. each of these modifications provides a consistent improvement to the model performance, achieving 74.02% prediction accuracy and 77.22% specificity 60 seconds in advance of the episode. |
on the computational inefficiency of large batch sizes for stochastic gradient descent | increasing the mini-batch size for stochastic gradient descent offers significant opportunities to reduce wall-clock training time, but there are a variety of theoretical and systems challenges that impede the widespread success of this technique. we investigate these issues, with an emphasis on time to convergence and total computational cost, through an extensive empirical analysis of network training across several architectures and problem domains, including image classification, image segmentation, and language modeling. although it is common practice to increase the batch size in order to fully exploit available computational resources, we find a substantially more nuanced picture. our main finding is that across a wide range of network architectures and problem domains, increasing the batch size beyond a certain point yields no decrease in wall-clock time to convergence for \emph{either} train or test loss. this batch size is usually substantially below the capacity of current systems. we show that popular training strategies for large batch size optimization begin to fail before we can populate all available compute resources, and we show that the point at which these methods break down depends more on attributes like model architecture and data complexity than it does directly on the size of the dataset. |
minimax optimal additive functional estimation with discrete distribution | this paper addresses the problem of estimating an additive functional given $n$ i.i.d. samples drawn from a discrete distribution $p=(p_1,...,p_k)$ with alphabet size $k$. the additive functional is defined as $\theta(p;\phi)=\sum_{i=1}^k\phi(p_i)$ for a function $\phi$, which covers most of the entropy-like criteria. the minimax optimal risk of this problem is already known for some specific $\phi$, such as $\phi(p)=p^\alpha$ and $\phi(p)=-p\ln p$. however, there is no generic methodology to derive the minimax optimal risk for the additive functional estimation problem. in this paper, we reveal the property of $\phi$ that characterizes the minimax optimal risk of the additive functional estimation problem; this analysis is applicable to general $\phi$. more precisely, we reveal that the minimax optimal risk of this problem is characterized by the divergence speed of the function $\phi$. |
sub-national levels and trends in contraceptive prevalence, unmet need, and demand for family planning in nigeria with survey uncertainty | ambitious global goals have been established to provide universal access to affordable modern contraceptive methods. the un's sustainable development goal 3.7.1 proposes satisfying the demand for family planning (fp) services by increasing the proportion of women of reproductive age using modern methods. to measure progress toward such goals in populous countries like nigeria, it is essential to characterize the current levels and trends of fp indicators such as unmet need and modern contraceptive prevalence rates (mcpr). moreover, the substantial heterogeneity across nigeria and the scale of programmatic implementation require a sub-national resolution of these fp indicators. however, estimating fp indicators sub-nationally in nigeria poses significant challenges. in this article, we develop a robust, data-driven model to utilize all available surveys to estimate the levels and trends of fp indicators in nigerian states for all women and by age-parity demographic subgroups. we estimate that overall rates and trends of mcpr and unmet need have remained low in nigeria: the average annual rate of change for mcpr by state is 0.5% (0.4%, 0.6%) from 2012-2017. unmet need by age-parity demographic groups varied significantly across nigeria; parous women express much higher rates of unmet need than nulliparous women. our hierarchical bayesian model incorporates data from a diverse set of survey instruments, accounts for survey uncertainty, leverages spatio-temporal smoothing, and produces probabilistic estimates with uncertainty intervals. our flexible modeling framework directly informs programmatic decision-making by identifying age-parity-state subgroups with large rates of unmet need, highlights conflicting trends across survey instruments, and holistically interprets direct survey estimates. |
unsupervised learning with glrm feature selection reveals novel traumatic brain injury phenotypes | baseline injury categorization is important to traumatic brain injury (tbi) research and treatment. current categorization is dominated by symptom-based scores that insufficiently capture injury heterogeneity. in this work, we apply unsupervised clustering to identify novel tbi phenotypes. our approach uses a generalized low-rank model (glrm) for feature selection in a procedure analogous to wrapper methods. the resulting clusters reveal four novel tbi phenotypes with distinct feature profiles that correlate with 90-day functional and cognitive status. |
bayesian sequential design based on dual objectives for accelerated life tests | traditional accelerated life test plans are typically based on optimizing the c-optimality criterion to minimize the variance of a quantile of interest of the lifetime distribution. the traditional methods rely on some specified planning values for the model parameters, which are usually unknown prior to the actual tests. the ambiguity of the specified parameters can lead to suboptimal designs for optimizing the intended reliability performance. in this paper, we propose a sequential design strategy for life test plans based on considering dual objectives. in the early stage of the sequential experiment, we suggest allocating more design locations based on optimizing the d-optimality to quickly gain precision in the estimated model parameters. in the later stage of the experiment, we can allocate more samples based on optimizing the c-optimality to maximize the precision of the estimated quantile of the lifetime distribution. we compare the proposed sequential design strategy with existing test plans considering only a single criterion and illustrate the new method with an example on fatigue testing of polymer composites. |
corresponding projections for orphan screening | we propose a novel transfer learning approach for orphan screening called corresponding projections. in orphan screening the learning task is to predict the binding affinities of compounds to an orphan protein, i.e., one for which no training data is available. the identification of compounds with high affinity is a central concern in medicine since it can be used for drug discovery and design. given a set of prediction models for proteins with labelled training data and a similarity between the proteins, corresponding projections constructs a model for the orphan protein from them such that the similarity between models resembles the one between proteins. under the assumption that the similarity resemblance holds, we derive an efficient algorithm for kernel methods. we empirically show that the approach outperforms the state-of-the-art in orphan screening. |
intraday forecasts of a volatility index: functional time series methods with dynamic updating | as a forward-looking measure of future equity market volatility, the vix index has gained immense popularity in recent years to become a key measure of risk for market analysts and academics. we consider discrete reported intraday vix tick values as realisations of a collection of curves observed sequentially on equally spaced and dense grids over time and utilise functional data analysis techniques to produce one-day-ahead forecasts of these curves. the proposed method facilitates the investigation of dynamic changes in the index over very short time intervals as showcased using the 15-second high-frequency vix index values. with the help of dynamic updating techniques, our point and interval forecasts are shown to enjoy improved accuracy over conventional time series models. |
deep factors with gaussian processes for forecasting | a large collection of time series poses significant challenges for classical and neural forecasting approaches. classical time series models fail to fit data well and to scale to large problems, but succeed at providing uncertainty estimates. the converse is true for deep neural networks. in this paper, we propose a hybrid model that incorporates the benefits of both approaches. our new method is data-driven and scalable via a latent, global, deep component. it also handles uncertainty through a local classical gaussian process model. our experiments demonstrate that our method obtains higher accuracy than state-of-the-art methods. |
understanding unequal gender classification accuracy from face images | recent work shows unequal performance of commercial face classification services in the gender classification task across intersectional groups defined by skin type and gender. accuracy on dark-skinned females is significantly worse than on any other group. in this paper, we conduct several analyses to try to uncover the reason for this gap. the main finding, perhaps surprisingly, is that skin type is not the driver. this conclusion is reached via stability experiments that vary an image's skin type via color-theoretic methods, namely luminance mode-shift and optimal transport. a second suspect, hair length, is also shown not to be the driver via experiments on face images cropped to exclude the hair. finally, using contrastive post-hoc explanation techniques for neural networks, we bring forth evidence suggesting that differences in lip, eye and cheek structure across ethnicity lead to the differences. further, lip and eye makeup are seen as strong predictors for a female face, which is a troubling propagation of a gender stereotype. |
kernel based method for the $k$-sample problem | in this paper we deal with the problem of testing for the equality of $k$ probability distributions defined on $(\mathcal{x},\mathcal{b})$, where $\mathcal{x}$ is a metric space and $\mathcal{b}$ is the corresponding borel $\sigma$-field. we introduce a test statistic based on reproducing kernel hilbert space embeddings and derive its asymptotic distribution under the null hypothesis. simulations show that the introduced procedure outperforms known methods. |
explore-exploit: a framework for interactive and online learning | interactive user interfaces need to continuously evolve based on the interactions that a user has (or does not have) with the system. this may require constant exploration of various options that the system may have for the user and obtaining signals of user preferences on those. however, such an exploration, especially when the set of available options itself can change frequently, can lead to sub-optimal user experiences. we present explore-exploit: a framework designed to collect and utilize user feedback in an interactive and online setting that minimizes regressions in end-user experience. this framework provides a suite of online learning operators for various tasks such as personalization ranking, candidate selection and active learning. we demonstrate how to integrate this framework with run-time services to leverage online and interactive machine learning out-of-the-box. we also present results demonstrating the efficiencies that can be achieved using the explore-exploit framework. |
simple confidence intervals for mcmc without clts | this short note argues that 95% confidence intervals for mcmc estimates can be obtained even without establishing a clt, by multiplying their widths by 2.3. |
number of connected components in a graph: estimation via counting patterns | due to the limited resources and the scale of the graphs in modern datasets, we often get to observe a sampled subgraph of a larger original graph of interest, whether it is the worldwide web that has been crawled or social connections that have been surveyed. inferring a global property of the original graph from such a sampled subgraph is of a fundamental interest. in this work, we focus on estimating the number of connected components. it is a challenging problem and, for general graphs, little is known about the connection between the observed subgraph and the number of connected components of the original graph. in order to make this connection, we propose a highly redundant and large-dimensional representation of the subgraph, which at first glance seems counter-intuitive. a subgraph is represented by the counts of patterns, known as network motifs. this representation is crucial in introducing a novel estimator for the number of connected components for general graphs, under the knowledge of the spectral gap of the original graph. the connection is made precise via the schatten $k$-norms of the graph laplacian and the spectral representation of the number of connected components. we provide a guarantee on the resulting mean squared error that characterizes the bias variance tradeoff. experiments on synthetic and real-world graphs suggest that we improve upon competing algorithms for graphs with spectral gaps bounded away from zero. |
a dynamic network and representation learning approach for quantifying economic growth from satellite imagery | quantifying the improvement in human living standard, as well as the city growth in developing countries, is a challenging problem due to the lack of reliable economic data. therefore, there is a fundamental need for alternate, largely unsupervised, computational methods that can estimate the economic conditions in the developing regions. to this end, we propose a new network science- and representation learning-based approach that can quantify economic indicators and visualize the growth of various regions. more precisely, we first create a dynamic network drawn out of high-resolution nightlight satellite images. we then demonstrate that using representation learning to mine the resulting network, our proposed approach can accurately predict spatial gross economic expenditures over large regions. our method, which requires only nightlight images and limited survey data, can capture city-growth, as well as how people's living standard is changing; this can ultimately facilitate the decision makers' understanding of growth without heavily relying on expensive and time-consuming surveys. |
swishnet: a fast convolutional neural network for speech, music and noise classification and segmentation | speech, music and noise classification/segmentation is an important preprocessing step for audio processing/indexing. to this end, we propose a novel 1d convolutional neural network (cnn) - swishnet. it is a fast and lightweight architecture that operates on mfcc features, making it suitable to be added to the front-end of an audio processing pipeline. we showed that the performance of our network can be improved by distilling knowledge from a 2d cnn pretrained on imagenet. we investigated the performance of our network on the musan corpus - an openly available comprehensive collection of noise, music and speech samples, suitable for deep learning. the proposed network achieved high overall accuracy in clip (length of 0.5-2s) classification (>97% accuracy) and frame-wise segmentation (>93% accuracy) tasks, with even higher accuracy (>99%) in the speech/non-speech discrimination task. to verify the robustness of our model, we trained it on musan and evaluated it on a different corpus - gtzan - and found good accuracy with very little fine-tuning. we also demonstrated that our model is fast on both cpu and gpu, consumes a low amount of memory and is suitable for implementation in embedded systems. |
rank projection trees for multilevel neural network interpretation | a variety of methods have been proposed for interpreting nodes in deep neural networks, which typically involve scoring nodes at lower layers with respect to their effects on the output of higher-layer nodes (where lower and higher layers are closer to the input and output layers, respectively). however, we may be interested in picking out a prioritized collection of subsets of the inputs across a range of scales according to their importance for an output node, and not simply a prioritized ranking across the inputs as singletons. such a situation may arise in biological applications, for instance, where we are interested in epistatic effects between groups of genes in determining a trait of interest. here, we outline a flexible framework which may be used to generate multiscale network interpretations, using any previously defined scoring function. we demonstrate the ability of our method to pick out biologically important genes and gene sets in the domains of cancer and psychiatric genomics. |
a nonstationary designer space-time kernel | in spatial statistics, kriging models are often designed using a stationary covariance structure; this translation-invariance produces models which have numerous favorable properties. this assumption can be limiting, though, in circumstances where the dynamics of the model have a fundamental asymmetry, such as in modeling phenomena that evolve over time from a fixed initial profile. we propose a new nonstationary kernel which is only defined over the half-line to incorporate time more naturally in the modeling process. |
stochastic training of residual networks: a differential equation viewpoint | during the last few years, significant attention has been paid to the stochastic training of artificial neural networks, which is known as an effective regularization approach that helps improve the generalization capability of trained models. in this work, the method of modified equations is applied to show that the residual network and its variants with noise injection can be regarded as weak approximations of stochastic differential equations. such observations enable us to bridge the stochastic training processes with the optimal control of backward kolmogorov's equations. this not only offers a novel perspective on the effects of regularization from the loss landscape viewpoint but also sheds light on the design of more reliable and efficient stochastic training strategies. as an example, we propose a new way to utilize bernoulli dropout within the plain residual network architecture and conduct experiments on a real-world image classification task to substantiate our theoretical findings. |
a probabilistic model of cardiac physiology and electrocardiograms | an electrocardiogram (ekg) is a common, non-invasive test that measures the electrical activity of a patient's heart. ekgs contain useful diagnostic information about patient health that may be absent from other electronic health record (ehr) data. as multi-dimensional waveforms, they could be modeled using generic machine learning tools, such as a linear factor model or a variational autoencoder. we take a different approach:~we specify a model that directly represents the underlying electrophysiology of the heart and the ekg measurement process. we apply our model to two datasets, including a sample of emergency department ekg reports with missing data. we show that our model can more accurately reconstruct missing data (measured by test reconstruction error) than a standard baseline when there is significant missing data. more broadly, this physiological representation of heart function may be useful in a variety of settings, including prediction, causal analysis, and discovery. |
measuring the stability of ehr- and ekg-based predictive models | databases of electronic health records (ehrs) are increasingly used to inform clinical decisions. machine learning methods can find patterns in ehrs that are predictive of future adverse outcomes. however, statistical models may be built upon patterns of health-seeking behavior that vary across patient subpopulations, leading to poor predictive performance when training on one patient population and predicting on another. this note proposes two tests to better measure and understand model generalization. we use these tests to compare models derived from two data sources: (i) historical medical records, and (ii) electrocardiogram (ekg) waveforms. in a predictive task, we show that ekg-based models can be more stable than ehr-based models across different patient populations. |
improving robustness of classifiers by training against live traffic | deep learning models are known to be overconfident in their predictions on out of distribution inputs. this is a challenge when a model is trained on a particular input dataset, but receives out of sample data when deployed in practice. recently, there has been work on building classifiers that are robust to out of distribution samples by adding a regularization term that maximizes the entropy of the classifier output on out of distribution data. however, given the challenge that it is not always possible to obtain out of distribution samples, the authors suggest a gan-based alternative that is independent of specific knowledge of out of distribution samples. from this existing work, we also know that having access to the true out of sample distribution for regularization works significantly better than using samples from the gan. in this paper, we make the following observation: in practice, the out of distribution samples are contained in the traffic that hits a deployed classifier. however, the traffic will also contain an unknown proportion of in-distribution samples. if the entropy of all of the traffic data were to be naively maximized, this would hurt the classifier performance on in-distribution data. to effectively leverage this traffic data, we propose an adaptive regularization technique (based on the maximum predictive probability score of a sample) which penalizes out of distribution samples more heavily than in-distribution samples in the incoming traffic. this ensures that the overall performance of the classifier does not degrade on in-distribution data, while detection of out-of-distribution samples is significantly improved by leveraging the unlabeled traffic data. we show the effectiveness of our method via experiments on natural image datasets. |
building robust classifiers through generation of confident out of distribution examples | deep learning models are known to be overconfident in their predictions on out of distribution inputs. there have been several pieces of work to address this issue, including a number of approaches for building bayesian neural networks, as well as closely related work on detection of out of distribution samples. recently, there has been work on building classifiers that are robust to out of distribution samples by adding a regularization term that maximizes the entropy of the classifier output on out of distribution data. to approximate out of distribution samples (which are not known apriori), a gan was used for generation of samples at the edges of the training distribution. in this paper, we introduce an alternative gan based approach for building a robust classifier, where the idea is to use the gan to explicitly generate out of distribution samples that the classifier is confident on (low entropy), and have the classifier maximize the entropy for these samples. we showcase the effectiveness of our approach relative to state-of-the-art on hand-written characters as well as on a variety of natural image datasets. |
on compressing u-net using knowledge distillation | we study the use of knowledge distillation to compress the u-net architecture. we show that, while standard distillation is not sufficient to reliably train a compressed u-net, introducing other regularization methods, such as batch normalization and class re-weighting, in knowledge distillation significantly improves the training process. this allows us to compress a u-net by over 1000x, i.e., to 0.1% of its original number of parameters, at a negligible decrease in performance. |
a family-based graphical approach for testing hierarchically ordered families of hypotheses | in applications of clinical trials, tested hypotheses are often grouped as multiple hierarchically ordered families. to test such structured hypotheses, various gatekeeping strategies have been developed in the literature, such as series gatekeeping, parallel gatekeeping, tree-structured gatekeeping strategies, etc. however, these gatekeeping strategies are often either non-intuitive or less flexible when addressing increasingly complex logical relationships among families of hypotheses. in order to overcome the issue, in this paper, we develop a new family-based graphical approach, which can easily derive and visualize different gatekeeping strategies. in the proposed approach, a directed and weighted graph is used to represent the generated gatekeeping strategy where each node corresponds to a family of hypotheses and two simple updating rules are used for updating the critical value of each family and the transition coefficient between any two families. theoretically, we show that the proposed graphical approach strongly controls the overall familywise error rate at a pre-specified level. through some case studies and a real clinical example, we demonstrate simplicity and flexibility of the proposed approach. |
anythreat: an opportunistic knowledge discovery approach to insider threat detection | insider threat detection is receiving increased attention from academia, industry, and governments due to the growing number of malicious insider incidents. the existing approaches proposed for detecting insider threats still have a common shortcoming, which is the high number of false alarms (false positives). the challenge in these approaches is that it is essential to detect all anomalous behaviours which belong to a particular threat. to address this shortcoming, we propose an opportunistic knowledge discovery system, namely anythreat, with the aim of detecting any anomalous behaviour in all malicious insider threats. we design the anythreat system with four components. (1) a feature engineering component, which constructs community data sets from the activity logs of a group of users having the same role. (2) an oversampling component, where we propose a novel oversampling technique named artificial minority oversampling and trapper removal (amotre). amotre first removes the minority (anomalous) instances that have a high resemblance with normal (majority) instances to reduce the number of false alarms, then it synthetically oversamples the minority class by shielding the border of the majority class. (3) a class decomposition component, which is introduced to cluster the instances of the majority class into subclasses to weaken the effect of the majority class without information loss. (4) a classification component, which applies a classification method on the subclasses to achieve a better separation between the majority class(es) and the minority class(es). anythreat is evaluated on synthetic data sets generated by carnegie mellon university. it detects approximately 87.5% of malicious insider threats and achieves a false positive rate as low as 3.36%. |
a new approach for large scale multiple testing with application to fdr control for graphically structured hypotheses | in many large scale multiple testing applications, the hypotheses often have a known graphical structure, such as gene ontology in gene expression data. exploiting this graphical structure in multiple testing procedures can improve power as well as aid in interpretation. however, incorporating the structure into large scale testing procedures and proving that an error rate, such as the false discovery rate (fdr), is controlled can be challenging. in this paper, we introduce a new general approach for large scale multiple testing, which can aid in developing new procedures under various settings with proven control of desired error rates. this approach is particularly useful for developing fdr controlling procedures, which is simplified as the problem of developing per-family error rate (pfer) controlling procedures. specifically, for testing hypotheses with a directed acyclic graph (dag) structure, by using the general approach, under the assumption of independence, we first develop a specific pfer controlling procedure and based on this procedure, then develop a new fdr controlling procedure, which can preserve the desired dag structure among the rejected hypotheses. through a small simulation study and a real data analysis, we illustrate nice performance of the proposed fdr controlling procedure for dag-structured hypotheses. |
explainable genetic inheritance pattern prediction | diagnosing an inherited disease often requires identifying the pattern of inheritance in a patient's family. we represent family trees with genetic patterns of inheritance using hypergraphs and latent state space models to provide explainable inheritance pattern predictions. our approach allows for exact causal inference over a patient's possible genotypes given their relatives' phenotypes. by design, inference can be examined at a low level to provide explainable predictions. furthermore, we make use of human intuition by providing a method to assign hypothetical evidence to any inherited gene alleles. our analysis supports the application of latent state space models to improve patient care in cases of rare inherited diseases where access to genetic specialists is limited. |
towards gaussian bayesian network fusion | data sets are growing in complexity thanks to the increasing facilities we have nowadays to both generate and store data. this poses many challenges to machine learning that are leading to the proposal of new methods and paradigms, in order to be able to deal with what is nowadays referred to as big data. in this paper we propose a method for the aggregation of different bayesian network structures that have been learned from separate data sets, as a first step towards mining data sets that need to be partitioned in a horizontal way, i.e. with respect to the instances, in order to be processed. considerations that should be taken into account when dealing with this situation are discussed. scalable learning of bayesian networks is slowly emerging, and our method constitutes one of the first insights into gaussian bayesian network aggregation from different sources. tested on synthetic data, it obtains good results that surpass those from individual learning. future research will be focused on expanding the method and testing more diverse data sets. |
dynamic measurement scheduling for adverse event forecasting using deep rl | current clinical practice to monitor patients' health follows either regular or heuristic-based lab test (e.g. blood test) scheduling. such practice not only gives rise to redundant measurements accruing cost, but may even lead to unnecessary patient discomfort. from the computational perspective, heuristic-based test scheduling might lead to reduced accuracy of clinical forecasting models. computationally learning an optimal clinical test scheduling and measurement collection policy is likely to lead to both better predictive models and improved patient outcomes. we address the scheduling problem using deep reinforcement learning (rl) to achieve high predictive gain and low measurement cost by scheduling fewer, but strategically timed, tests. we first show that in simulation our policy outperforms heuristic-based measurement scheduling with higher predictive gain or lower cost measured by accumulated reward. we then learn a scheduling policy for mortality forecasting on the real-world clinical dataset (mimic3); the learned policy is able to provide useful clinical insights. to our knowledge, this is the first rl application to the multi-measurement scheduling problem in the clinical setting. |
cross-modulation networks for few-shot learning | a family of recent successful approaches to few-shot learning relies on learning an embedding space in which predictions are made by computing similarities between examples. this corresponds to combining information between support and query examples at a very late stage of the prediction pipeline. inspired by this observation, we hypothesize that there may be benefits to combining the information at various levels of abstraction along the pipeline. we present an architecture called cross-modulation networks which allows support and query examples to interact throughout the feature extraction process via a feature-wise modulation mechanism. we adapt the matching networks architecture to take advantage of these interactions and show encouraging initial results on miniimagenet in the 5-way, 1-shot setting, where we close the gap with state-of-the-art. |
interpretable graph convolutional neural networks for inference on noisy knowledge graphs | in this work, we provide a new formulation for graph convolutional neural networks (gcnns) for link prediction on graph data that addresses common challenges for biomedical knowledge graphs (kgs). we introduce a regularized attention mechanism to gcnns that not only improves performance on clean datasets, but also favorably accommodates noise in kgs, a pervasive issue in real-world applications. further, we explore new visualization methods for interpretable modelling and to illustrate how the learned representation can be exploited to automate dataset denoising. the results are demonstrated on a synthetic dataset, the common benchmark dataset fb15k-237, and a large biomedical knowledge graph derived from a combination of noisy and clean data sources. using these improvements, we visualize a learned model's representation of the disease cystic fibrosis and demonstrate how to interrogate a neural network to show the potential of pparg as a candidate therapeutic target for rheumatoid arthritis. |
in-silico risk analysis of personalized artificial pancreas controllers via rare-event simulation | modern treatments for type 1 diabetes (t1d) use devices known as artificial pancreata (aps), which combine an insulin pump with a continuous glucose monitor (cgm) operating in a closed-loop manner to control blood glucose levels. in practice, poor performance of aps (frequent hyper- or hypoglycemic events) is common enough at a population level that many t1d patients modify the algorithms on existing ap systems with unregulated open-source software. anecdotally, the patients in this group have shown superior outcomes compared with standard of care, yet we do not understand how safe any ap system is since adverse outcomes are rare. in this paper, we construct generative models of individual patients' physiological characteristics and eating behaviors. we then couple these models with a t1d simulator approved for pre-clinical trials by the fda. given the ability to simulate patient outcomes in-silico, we utilize techniques from rare-event simulation theory in order to efficiently quantify the performance of a device with respect to a particular patient. we show a 72,000$\times$ speedup in simulation speed over real-time and up to a 2-10 times increase in the frequency with which we are able to sample adverse conditions relative to standard monte carlo sampling. in practice our toolchain enables estimates of the likelihood of hypoglycemic events with approximately an order of magnitude fewer simulations. |
on variation of gradients of deep neural networks | we provide a theoretical explanation of the role of the number of nodes at each layer in deep neural networks. we prove that the largest variation of a deep neural network with relu activation function arises when the layer with the fewest nodes changes its activation pattern. an important implication is that a deep neural network is a useful tool for generating functions whose variation is mostly concentrated on a smaller area of the input space, near the boundaries corresponding to the layer with the fewest nodes. in turn, this property makes the function more invariant to input transformations. that is, our theoretical result gives a clue about how to design the architecture of a deep neural network to increase complexity and transformation invariance simultaneously.
efficiency and robustness in monte carlo sampling of 3-d geophysical inversions with obsidian v0.1.2: setting up for success | the rigorous quantification of uncertainty in geophysical inversions is a challenging problem. inversions are often ill-posed and the likelihood surface may be multi-modal; properties of any single mode become inadequate uncertainty measures, and sampling methods become inefficient for irregular posteriors or high-dimensional parameter spaces. we explore the influences of different choices made by the practitioner on the efficiency and accuracy of bayesian geophysical inversion methods that rely on markov chain monte carlo sampling to assess uncertainty, using a multi-sensor inversion of the three-dimensional structure and composition of a region in the cooper basin of south australia as a case study. the inversion is performed using an updated version of the obsidian distributed inversion software. we find that the posterior for this inversion has complex local covariance structure, hindering the efficiency of adaptive sampling methods that adjust the proposal based on the chain history. within the context of a parallel-tempered markov chain monte carlo scheme for exploring high-dimensional multi-modal posteriors, a preconditioned crank-nicolson proposal outperforms more conventional forms of random walk. aspects of the problem setup, such as priors on petrophysics or on 3-d geological structure, affect the shape and separation of posterior modes, influencing sampling performance as well as the inversion results. use of uninformative priors on sensor noise can improve inversion results by enabling optimal weighting among multiple sensors even if noise levels are uncertain. efficiency could be further increased by using posterior gradient information within proposals, which obsidian does not currently support, but which could be emulated using posterior surrogates.
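a minimal sketch of a preconditioned crank-nicolson (pcn) mcmc step for a zero-mean gaussian prior, whose acceptance ratio depends only on the likelihood; the toy likelihood and step size beta are illustrative and unrelated to the obsidian implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pcn_step(x, log_likelihood, prior_chol, beta=0.2):
    """One preconditioned Crank-Nicolson step for a zero-mean Gaussian prior
    with covariance C = L @ L.T. The acceptance ratio involves only the
    likelihood, which keeps acceptance rates stable as dimension grows."""
    xi = prior_chol @ rng.standard_normal(x.shape[0])      # draw from the prior
    proposal = np.sqrt(1.0 - beta**2) * x + beta * xi
    log_alpha = log_likelihood(proposal) - log_likelihood(x)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True
    return x, False

# toy example: Gaussian likelihood centred at 1 with an N(0, I) prior
dim = 50
prior_chol = np.eye(dim)
log_lik = lambda x: -0.5 * np.sum((x - 1.0) ** 2) / 0.5**2

x = np.zeros(dim)
accepts = 0
for _ in range(5000):
    x, accepted = pcn_step(x, log_lik, prior_chol, beta=0.2)
    accepts += accepted
print("acceptance rate:", accepts / 5000)
```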
gan-em: gan based em learning framework | the expectation maximization (em) algorithm finds maximum likelihood solutions for models with latent variables. a typical example is the gaussian mixture model (gmm), which requires a gaussian assumption; however, natural images are highly non-gaussian, so gmm cannot be applied to perform clustering in pixel space. to overcome this limitation, we propose a gan based em learning framework that can maximize the likelihood of images and estimate the latent variables with only the constraint of l-lipschitz continuity. we call this model gan-em; it is a framework for image clustering, semi-supervised classification and dimensionality reduction. in the m-step, we design a novel loss function for the discriminator of the gan to perform maximum likelihood estimation (mle) on data with soft class label assignments. specifically, a conditional generator captures the data distribution for $k$ classes, and a discriminator tells whether a sample is real or fake for each class. since our model is unsupervised, the class label of real data is regarded as a latent variable, which is estimated by an additional network (e-net) in the e-step. the proposed gan-em achieves state-of-the-art clustering and semi-supervised classification results on mnist, svhn and celeba, as well as generated images of comparable quality to other recently developed generative models.
analysis on gradient propagation in batch normalized residual networks | in this work, we conduct a mathematical analysis of the effect of batch normalization (bn) on gradient backpropagation in residual network training, where bn is believed to play a critical role in addressing the gradient vanishing/explosion problem. by analyzing the mean and variance behavior of the input and the gradient in the forward and backward passes through the bn and residual branches, respectively, we show that they work together to confine the gradient variance to a certain range across residual blocks in backpropagation. as a result, the gradient vanishing/explosion problem is avoided. we also show the relative importance of batch normalization w.r.t. the residual branches in residual networks.
quick best action identification in linear bandit problems | in this paper, we consider a best action identification problem in the stochastic linear bandit setup with a fixed confidence constraint. in the considered best action identification problem, instead of minimizing the cumulative regret as done in existing works, the learner aims to obtain an accurate estimate of the underlying parameter based on his action and reward sequences. to improve the estimation efficiency, the learner is allowed to select his action based on his historical information; hence the whole procedure is designed in a sequential adaptive manner. we first show that the existing algorithms designed to minimize the cumulative regret do not yield a consistent estimator and hence are not good policies for our problem. we then characterize a lower bound on the estimation error for any policy. we further design a simple policy and show that the estimation error of the designed policy achieves the same scaling order as that of the derived lower bound.
predicting inpatient discharge prioritization with electronic health records | identifying patients who will be discharged within 24 hours can improve hospital resource management and quality of care. we studied this problem using eight years of electronic health records (ehr) data from stanford hospital. we fit models to predict 24 hour discharge across the entire inpatient population. the best performing models achieved an area under the receiver operating characteristic curve (auroc) of 0.85 and an auprc of 0.53 on a held-out test set. this model was also well calibrated. finally, we analyzed the utility of this model in a decision-theoretic framework to identify regions of roc space in which using the model increases expected utility compared to the trivial always-negative or always-positive classifiers.
ensemble-based implicit sampling for bayesian inverse problems with non-gaussian priors | in this paper, we develop an ensemble-based implicit sampling method for bayesian inverse problems. for bayesian inference, the iterative ensemble smoother (ies) and implicit sampling are integrated to obtain importance ensemble samples, which build an importance density. the proposed method shares a similar idea with importance sampling. ies is used to approximate the mean and covariance of the posterior distribution, which provide the map point and the inverse of the hessian matrix needed to construct the implicit map in implicit sampling. the importance samples are generated by the implicit map, and the corresponding weights are the ratio between the importance density and the posterior density. in the proposed method, we use the ensemble samples of ies to find the optimizer of the likelihood function and the inverse of the hessian matrix. this approach avoids explicit computation of the jacobian and hessian matrices, which is very computationally expensive in high-dimensional spaces. to treat non-gaussian models, the discrete cosine transform and gaussian mixture models are used to characterize the non-gaussian priors. the ensemble-based implicit sampling method is thereby extended to non-gaussian priors for exploring the posterior of the unknowns in inverse problems, and is applied to each individual gaussian model in the gaussian mixture. the proposed approach substantially improves the applicability of the implicit sampling method. a few numerical examples are presented to demonstrate the efficacy of the proposed method with applications to inverse problems for subsurface flow and anomalous diffusion models in porous media.
investigating performance of neural networks and gradient boosting models approximating microscopic traffic simulations in traffic optimization tasks | we analyze the accuracy of traffic simulation metamodels based on neural networks and gradient boosting models (lightgbm), applied to traffic optimization as fitness functions of genetic algorithms. our metamodels approximate the outcomes of traffic simulations (the total time of waiting at a red signal) taking different traffic signal settings as input, in order to efficiently find (sub)optimal settings. their accuracy proved to be very good on randomly selected test sets, but it turned out that the accuracy may drop for settings expected (according to the genetic algorithms) to be close to local optima, which makes the traffic optimization process more difficult. in this work, we investigate 16 different metamodels and 20 settings of genetic algorithms, in order to understand the reasons for this phenomenon, its scale, how it can be mitigated, and what can potentially be done to design better real-time traffic optimization methods.
feature selection based on unique relevant information for health data | feature selection, which searches for the most representative features in observed data, is critical for health data analysis. unlike feature extraction, such as pca and autoencoder based methods, feature selection preserves interpretability, meaning that the selected features provide direct information about certain health conditions (i.e., the label). thus, feature selection allows domain experts, such as clinicians, to understand the predictions made by machine learning based systems, as well as improve their own diagnostic skills. mutual information is often used as a basis for feature selection since it measures dependencies between features and labels. in this paper, we introduce a novel mutual information based feature selection (mibfs) method called suri, which boosts features with high unique relevant information. we compare suri to existing mibfs methods using 3 different classifiers on 6 publicly available healthcare data sets. the results indicate that, in addition to preserving interpretability, suri selects more relevant feature subsets which lead to higher classification performance. more importantly, we explore the dynamics of mutual information on a public low-dimensional health data set via exhaustive search. the results suggest the important role of unique relevant information in feature selection and verify the principles behind suri. |
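for reference, a plain mutual-information ranking of features against the label (the kind of mibfs baseline suri builds on) can be written with scikit-learn as below; the unique-relevant-information boosting of suri itself is not reproduced here, and the dataset and number of selected features are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

# rank features by their estimated mutual information with the label
X, y = load_breast_cancer(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)

k = 10
top_k = np.argsort(mi)[::-1][:k]        # indices of the k most informative features
print("top-10 features by mutual information:", top_k)
X_selected = X[:, top_k]                # reduced feature matrix for a downstream classifier
```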
imputation of clinical covariates in time series | missing data is a common problem in real-world settings and particularly relevant in healthcare applications where researchers use electronic health records (ehr) and results of observational studies to apply analytics methods. this issue becomes even more prominent for longitudinal data sets, where multiple instances of the same individual correspond to different observations in time. standard imputation methods do not take into account patient specific information incorporated in multivariate panel data. we introduce the novel imputation algorithm medimpute that addresses this problem, extending the flexible framework of optimpute suggested by bertsimas et al. (2018). our algorithm provides imputations for data sets with missing continuous and categorical features, and we present the formulation and implement scalable first-order methods for a $k$-nn model. we test the performance of our algorithm on longitudinal data from the framingham heart study when data are missing completely at random (mcar). we demonstrate that medimpute leads to significant improvements in both imputation accuracy and downstream model auc compared to state-of-the-art methods. |
dual objective approach using a convolutional neural network for magnetic resonance elastography | traditionally, nonlinear inversion, direct inversion, or wave estimation methods have been used for reconstructing images from mre displacement data. in this work, we propose a convolutional neural network architecture that can map mre displacement data directly into elastograms, circumventing the costly and computationally intensive classical approaches. in addition to the mean squared error reconstruction objective, we also introduce a secondary loss inspired by the mre mechanical models for training the neural network. our network is demonstrated to be effective for generating mre images that compare well with equivalents from the nonlinear inversion method. |
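a hedged sketch of such a dual objective: a pixel-wise mse reconstruction loss plus a secondary penalty standing in for the mechanics-inspired term; the smoothness placeholder and its weighting are assumptions, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def dual_objective_loss(pred_elastogram, target_elastogram, weight=0.1):
    """MSE reconstruction loss plus a secondary penalty standing in for a
    mechanics-inspired regulariser (the actual MRE model term would replace
    `physics_term`)."""
    mse = F.mse_loss(pred_elastogram, target_elastogram)
    # placeholder physics-inspired term: penalise large spatial gradients
    dx = pred_elastogram[:, :, :, 1:] - pred_elastogram[:, :, :, :-1]
    dy = pred_elastogram[:, :, 1:, :] - pred_elastogram[:, :, :-1, :]
    physics_term = dx.pow(2).mean() + dy.pow(2).mean()
    return mse + weight * physics_term

# toy usage with (batch, channel, H, W) tensors
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
loss = dual_objective_loss(pred, target)
loss.backward()
```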
personalizing intervention probabilities by pooling | in many mobile health interventions, treatments should only be delivered in a particular context, for example when a user is currently stressed, walking or sedentary. even in an optimal context, concerns about user burden can restrict which treatments are sent. to diffuse the treatment delivery over times when a user is in a desired context, it is critical to predict the future number of times the context will occur. the focus of this paper is on whether personalization can improve predictions in these settings. though the variance between individuals' behavioral patterns suggest that personalization should be useful, the amount of individual-level data limits its capabilities. thus, we investigate several methods which pool data across users to overcome these deficiencies and find that pooling lowers the overall error rate relative to both personalized and batch approaches. |
improving clinical predictions through unsupervised time series representation learning | in this work, we investigate unsupervised representation learning on medical time series, which bears the promise of leveraging copious amounts of existing unlabeled data in order to eventually assist clinical decision making. by evaluating on the prediction of clinically relevant outcomes, we show that in a practical setting, unsupervised representation learning can offer clear performance benefits over end-to-end supervised architectures. we experiment with using sequence-to-sequence (seq2seq) models in two different ways, as an autoencoder and as a forecaster, and show that the best performance is achieved by a forecasting seq2seq model with an integrated attention mechanism, proposed here for the first time in the setting of unsupervised learning for medical time series. |
estimation in linear errors-in-variables models with unknown error distribution | parameter estimation in linear errors-in-variables models typically requires that the measurement error distribution be known (or estimable from replicate data). a generalized method of moments approach can be used to estimate model parameters in the absence of knowledge of the error distributions, but requires the existence of a large number of model moments. in this paper, parameter estimation based on the phase function, a normalized version of the characteristic function, is considered. this approach requires the model covariates to have asymmetric distributions, while the error distributions are symmetric. parameter estimation is then based on minimizing a distance function between the empirical phase functions of the noisy covariates and the outcome variable. no knowledge of the measurement error distribution is required to calculate this estimator. both the asymptotic and finite sample properties of the estimator are considered. the connection between the phase function approach and method of moments is also discussed. the estimation of standard errors is also considered and a modified bootstrap algorithm is proposed for fast computation. the newly proposed estimator is competitive when compared to generalized method of moments, even while making fewer model assumptions on the measurement error. finally, the proposed method is applied to a real dataset concerning the measurement of air pollution. |
using multitask learning to improve 12-lead electrocardiogram classification | we develop a multi-task convolutional neural network (cnn) to classify multiple diagnoses from 12-lead electrocardiograms (ecgs) using a dataset comprised of over 40,000 ecgs, with labels derived from cardiologist clinical interpretations. since many clinically important classes can occur in low frequencies, approaches are needed to improve performance on rare classes. we compare the performance of several single-class classifiers on rare classes to a multi-headed classifier across all available classes. we demonstrate that the addition of common classes can significantly improve cnn performance on rarer classes when compared to a model trained on the rarer class in isolation. using this method, we develop a model with high performance as measured by f1 score on multiple clinically relevant classes compared against the gold-standard cardiologist interpretation. |
knowledge-driven generative subspaces for modeling multi-view dependencies in medical data | early detection of alzheimer's disease (ad) and identification of potential risk/beneficial factors are important for planning and administering timely interventions or preventive measures. in this paper, we learn a disease model for ad that combines genotypic and phenotypic profiles, and cognitive health metrics of patients. we propose a probabilistic generative subspace that describes the correlative, complementary and domain-specific semantics of the dependencies in multi-view, multi-modality medical data. guided by domain knowledge and using the latent consensus between abstractions of multi-view data, we model the fusion as a data generating process. we show that our approach can potentially lead to i) explainable clinical predictions and ii) improved ad diagnoses. |
generalization in anti-causal learning | the ability to learn and act in novel situations is still a prerogative of animate intelligence, as current machine learning methods mostly fail when moving beyond the standard i.i.d. setting. what is the reason for this discrepancy? most machine learning tasks are anti-causal, i.e., we infer causes (labels) from effects (observations). typically, in supervised learning we build systems that try to directly invert causal mechanisms. instead, in this paper we argue that strong generalization capabilities crucially hinge on searching and validating meaningful hypotheses, requiring access to a causal model. in such a framework, we want to find a cause that leads to the observed effect. anti-causal models are used to drive this search, but a causal model is required for validation. we investigate the fundamental differences between causal and anti-causal tasks, discuss implications for topics ranging from adversarial attacks to disentangling factors of variation, and provide extensive evidence from the literature to substantiate our view. we advocate for incorporating causal models in supervised learning to shift the paradigm from inference only, to search and validation. |
modeling disease progression in longitudinal ehr data using continuous-time hidden markov models | modeling disease progression in healthcare administrative databases is complicated by the fact that patients are observed only at irregular intervals when they seek healthcare services. in a longitudinal cohort of 76,888 patients with chronic obstructive pulmonary disease (copd), we used a continuous-time hidden markov model with a generalized linear model to model healthcare utilization events. we found that the fitted model provides interpretable results suitable for summarization and hypothesis generation. |
modeling irregularly sampled clinical time series | while the volume of electronic health records (ehr) data continues to grow, it remains rare for hospital systems to capture dense physiological data streams, even in the data-rich intensive care unit setting. instead, typical ehr records consist of sparse and irregularly observed multivariate time series, which are well understood to present particularly challenging problems for machine learning methods. in this paper, we present a new deep learning architecture for addressing this problem based on the use of a semi-parametric interpolation network followed by the application of a prediction network. the interpolation network allows for information to be shared across multiple dimensions during the interpolation stage, while any standard deep learning model can be used for the prediction network. we investigate the performance of this architecture on the problems of mortality and length of stay prediction. |
large spectral density matrix estimation by thresholding | spectral density matrix estimation of multivariate time series is a classical problem in time series and signal processing. in modern neuroscience, spectral density based metrics are commonly used for analyzing functional connectivity among brain regions. in this paper, we develop a non-asymptotic theory for regularized estimation of high-dimensional spectral density matrices of gaussian and linear processes using thresholded versions of averaged periodograms. our theoretical analysis ensures that consistent estimation of spectral density matrix of a $p$-dimensional time series using $n$ samples is possible under high-dimensional regime $\log p / n \rightarrow 0$ as long as the true spectral density is approximately sparse. a key technical component of our analysis is a new concentration inequality of average periodogram around its expectation, which is of independent interest. our estimation consistency results complement existing results for shrinkage based estimators of multivariate spectral density, which require no assumption on sparsity but only ensure consistent estimation in a regime $p^2/n \rightarrow 0$. in addition, our proposed thresholding based estimators perform consistent and automatic edge selection when learning coherence networks among the components of a multivariate time series. we demonstrate the advantage of our estimators using simulation studies and a real data application on functional connectivity analysis with fmri data. |
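a minimal numpy sketch of the estimator family discussed here: average periodogram matrices over neighbouring fourier frequencies and hard-threshold the small off-diagonal entries; the bandwidth, threshold and toy data are illustrative.

```python
import numpy as np

def thresholded_spectral_density(X, freq_idx, bandwidth=5, threshold=0.1):
    """Estimate the p x p spectral density matrix at one Fourier frequency by
    averaging periodogram matrices over neighbouring frequencies, then hard-
    thresholding small off-diagonal entries. X has shape (n, p)."""
    n, p = X.shape
    d = np.fft.fft(X - X.mean(axis=0), axis=0)             # (n, p) discrete Fourier transform
    lo, hi = max(1, freq_idx - bandwidth), min(n - 1, freq_idx + bandwidth)
    S = np.zeros((p, p), dtype=complex)
    for j in range(lo, hi + 1):
        S += np.outer(d[j], d[j].conj())                   # cross-periodogram at frequency j
    S /= (2 * np.pi * n * (hi - lo + 1))
    # hard-threshold the off-diagonal entries, always keeping the diagonal
    mask = np.abs(S) >= threshold
    np.fill_diagonal(mask, True)
    return S * mask

# toy example: 3-dimensional white noise plus one shared component
rng = np.random.default_rng(0)
common = rng.standard_normal((512, 1))
X = 0.9 * common + rng.standard_normal((512, 3))
print(np.round(np.abs(thresholded_spectral_density(X, freq_idx=20)), 3))
```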
interpretable clustering via optimal trees | state-of-the-art clustering algorithms use heuristics to partition the feature space and provide little insight into the rationale for cluster membership, limiting their interpretability. in healthcare applications, the latter poses a barrier to the adoption of these methods since medical researchers are required to provide detailed explanations of their decisions in order to gain patient trust and limit liability. we present a new unsupervised learning algorithm that leverages mixed integer optimization techniques to generate interpretable tree-based clustering models. utilizing the flexible framework of optimal trees, our method approximates the globally optimal solution, leading to high quality partitions of the feature space. our algorithm can incorporate various internal validation metrics, naturally determines the optimal number of clusters, and is able to account for mixed numeric and categorical data. it achieves comparable or superior performance to k-means on both synthetic and real world datasets while offering significantly higher interpretability.
towards theoretical understanding of large batch training in stochastic gradient descent | stochastic gradient descent (sgd) is almost ubiquitously used for training non-convex optimization tasks. recently, the hypothesis proposed by keskar et al. [2017] that large batch methods tend to converge to sharp minimizers has received increasing attention. we theoretically justify this hypothesis by providing new properties of sgd in both finite-time and asymptotic regimes. in particular, we give an explicit escaping time of sgd from a local minimum in the finite-time regime and prove that sgd tends to converge to flatter minima in the asymptotic regime (although it may take exponential time to converge), regardless of the batch size. we also find that sgd with a larger ratio of learning rate to batch size tends to converge to a flat minimum faster; however, its generalization performance could be worse than that of sgd with a smaller ratio of learning rate to batch size. we include numerical experiments to corroborate these theoretical findings.
few-shot self reminder to overcome catastrophic forgetting | deep neural networks are known to suffer from the catastrophic forgetting problem: they tend to forget the knowledge from previous tasks when sequentially learning new tasks. such failure hinders the application of deep learning based vision systems in continual learning settings. in this work, we present a simple yet surprisingly effective way of preventing catastrophic forgetting. our method, called few-shot self reminder (fsr), regularizes the neural network against changing its learned behaviour by performing logit matching on selected samples kept in episodic memory from the old tasks. surprisingly, this simple approach only requires retraining on a small amount of data in order to outperform previous methods in knowledge retention. we demonstrate the superiority of our method over previous ones in two different continual learning settings on popular benchmarks, as well as on a new continual learning problem where tasks are designed to be more dissimilar.
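a minimal sketch of the logit-matching idea: cross-entropy on the new task plus an l2 penalty tying current logits on a few remembered samples to the logits stored when the old tasks were learned; the memory selection rule, model and weighting are placeholders.

```python
import torch
import torch.nn.functional as F

def fsr_loss(model, x_new, y_new, memory_x, memory_logits, reg_weight=1.0):
    """Cross-entropy on the new task plus logit matching on a few stored
    samples from old tasks (a distillation-style L2 penalty on the logits)."""
    task_loss = F.cross_entropy(model(x_new), y_new)
    # keep the network's outputs on remembered samples close to the logits
    # recorded when those tasks were originally learned
    reminder_loss = F.mse_loss(model(memory_x), memory_logits)
    return task_loss + reg_weight * reminder_loss

# toy usage with a linear model standing in for the network
model = torch.nn.Linear(784, 10)
x_new, y_new = torch.randn(32, 784), torch.randint(0, 10, (32,))
memory_x = torch.randn(8, 784)                    # a few samples kept per old task
memory_logits = torch.randn(8, 10)                # logits stored at training time
loss = fsr_loss(model, x_new, y_new, memory_x, memory_logits)
loss.backward()
```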
learning the progression and clinical subtypes of alzheimer's disease from longitudinal clinical data | alzheimer's disease (ad) is a degenerative brain disease impairing a person's ability to perform day-to-day activities. the clinical manifestations of alzheimer's disease are characterized by heterogeneity in age, disease span, progression rate, and impairment of memory and cognitive abilities. due to these variabilities, personalized care and treatment planning, as well as patient counseling about individual progression, are limited. recent developments in machine learning to detect hidden patterns in complex, multi-dimensional datasets provide significant opportunities to address this critical need. in this work, we use unsupervised and supervised machine learning approaches for subtype identification and prediction. we apply machine learning methods to the extensive clinical observations available in the alzheimer's disease neuroimaging initiative (adni) data set to identify patient subtypes and to predict disease progression. our analysis partitions the progression space for alzheimer's disease into low, moderate and high disease progression zones. the proposed work will enable early detection and characterization of distinct disease subtypes based on clinical heterogeneity. we anticipate that our models will enable patient counseling, clinical trial design, and ultimately individualized clinical care.
semi-supervised rare disease detection using generative adversarial network | rare diseases affect a relatively small number of people, which limits investment in research for treatments and cures. developing an efficient method for rare disease detection is a crucial first step towards subsequent clinical research. in this paper, we present a semi-supervised learning framework for rare disease detection using generative adversarial networks. our method takes advantage of the large amount of unlabeled data for disease detection and achieves the best results in terms of precision-recall score compared to baseline techniques. |
modeling treatment delays for patients using feature label pairs in a time series | pharmaceutical targeting is one of the key inputs for sales and marketing strategy planning. a targeting list is built by predicting a physician's sales potential for a certain type of patient. in this paper, we present a time-sensitive targeting framework leveraging a time series model to predict a patient's disease and treatment progression. we create time features by extracting the service history within a certain period, and record whether the event happens in a look-forward period. such feature-label pairs are examined across all time periods and all patients to train a model. this keeps the inherent order of services and evaluates features associated with the imminent future, which contributes to improved accuracy.
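a minimal pandas sketch of building such feature-label pairs from a patient's service history with a look-back window for features and a look-forward window for the label; all column names, windows and the target event are hypothetical.

```python
import pandas as pd

def make_feature_label_pairs(events, cutoffs, lookback=90, lookforward=30,
                             target_event="treatment_start"):
    """events: DataFrame with columns [patient_id, event, days] (days since index).
    For each (patient, cutoff day), count service events in the look-back window
    as features and record whether the target event occurs in the look-forward
    window as the label."""
    rows = []
    for pid, cutoff in cutoffs:
        hist = events[(events.patient_id == pid)
                      & (events.days > cutoff - lookback)
                      & (events.days <= cutoff)]
        future = events[(events.patient_id == pid)
                        & (events.days > cutoff)
                        & (events.days <= cutoff + lookforward)]
        feats = hist.event.value_counts().to_dict()
        feats.update(patient_id=pid, cutoff=cutoff,
                     label=int((future.event == target_event).any()))
        rows.append(feats)
    return pd.DataFrame(rows).fillna(0)

# toy usage with hypothetical service codes
events = pd.DataFrame({"patient_id": [1, 1, 1, 2],
                       "event": ["lab", "visit", "treatment_start", "visit"],
                       "days": [10, 40, 70, 15]})
print(make_feature_label_pairs(events, cutoffs=[(1, 60), (2, 60)]))
```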
twists and turns in the us-north korea dialogue: key figure dynamic network analysis using news articles | in this paper, we present a method for analyzing a dynamic network of key figures in the u.s.-north korea relations during the first two quarters of 2018. our method constructs key figure networks from u.s. news articles on north korean issues by taking co-occurrence of people's names in an article as a domain-relevant social link. we call a group of people that co-occur repeatedly in the same domain (news articles on north korean issues in our case) "key figures" and their social networks "key figure networks." we analyze block-structure changes of key figure networks in the u.s.-north korea relations using a bayesian hidden markov multilinear tensor model. the results of our analysis show that block structure changes in the key figure network in the u.s.-north korea relations predict important game-changing moments in the u.s.-north korea relations in the first two quarters of 2018. |
split learning for health: distributed deep learning without sharing raw patient data | can health entities collaboratively train deep learning models without sharing sensitive raw data? this paper proposes several configurations of a distributed deep learning method called splitnn to facilitate such collaborations. splitnn does not share raw data or model details with collaborating institutions. the proposed configurations of splitnn cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks and iii) learning without sharing labels. we compare performance and resource efficiency trade-offs of splitnn and other distributed deep learning methods like federated learning, large batch synchronous stochastic gradient descent and show highly encouraging results for splitnn. |
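a conceptual single-client sketch of the splitnn handoff at the cut layer, with no network transport or label protection: the client forwards raw data up to the cut, only activations cross the boundary, and the server returns the gradient at the cut for the client to finish backpropagation; layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# client holds the raw data and the lower part of the network; the server
# holds the upper part and the labels in this simplest configuration
client_net = nn.Sequential(nn.Linear(30, 64), nn.ReLU())        # up to the cut layer
server_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt_client = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 30), torch.randint(0, 2, (16,))

# client forward pass up to the cut layer; only the activations are "sent"
smashed = client_net(x)
sent = smashed.detach().requires_grad_()          # what crosses the boundary

# server completes the forward pass and backpropagates to the cut layer
opt_server.zero_grad()
opt_client.zero_grad()
loss = loss_fn(server_net(sent), y)
loss.backward()

# server "returns" the gradient at the cut layer; client finishes backprop
smashed.backward(sent.grad)
opt_server.step()
opt_client.step()
```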
rademacher complexity and generalization performance of multi-category margin classifiers | one of the main open problems in the theory of multi-category margin classification is the form of the optimal dependency of a guaranteed risk on the number c of categories, the sample size m and the margin parameter gamma. from a practical point of view, the theoretical analysis of generalization performance contributes to the development of new learning algorithms. in this paper, we focus only on the theoretical aspect of the question posed. more precisely, under minimal learnability assumptions, we derive a new risk bound for multi-category margin classifiers. we improve the dependency on c over the state of the art when the margin loss function considered satisfies the lipschitz condition. we start with the basic supremum inequality that involves a rademacher complexity as a capacity measure. this capacity measure is then linked to the metric entropy through the chaining method. in this context, our improvement is based on the introduction of a new combinatorial metric entropy bound. |
deep learning approach for predicting 30 day readmissions after coronary artery bypass graft surgery | hospital readmissions within 30 days after discharge following coronary artery bypass graft (cabg) surgery are substantial contributors to healthcare costs. many predictive models have been developed to identify risk factors for readmissions; however, the majority of existing models use statistical analysis techniques with data available at discharge. we propose an ensembled model to predict cabg readmissions using pre-discharge perioperative data and machine learning survival analysis techniques. first, we applied fifty-one potential readmission risk variables to univariate cox proportional hazards (cph) survival regression analysis. fourteen of them turned out to be significant (with p value < 0.05) contributors to readmissions. subsequently, we applied these 14 predictors to a multivariate cph model and to deepsurv, a deep neural network (nn) representation of the cph model. we validated this new ensembled model with 453 isolated adult cabg cases. nine of the fourteen perioperative risk variables were identified as the most significant, with hazard ratios (hr) greater than 1.0. the concordance index metrics for the cph, deepsurv, and ensembled models were then evaluated with training and validation datasets. our ensembled model yielded promising results in terms of c-statistics as we raised the number of iterations and data set sizes. 30 day all-cause readmissions among isolated cabg patients can be predicted more effectively with perioperative pre-discharge data using machine learning survival analysis techniques. prediction accuracy could be improved further with deep learning algorithms.
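a hedged sketch of the univariate cph screening step using the lifelines library, run on lifelines' bundled rossi recidivism dataset rather than cabg data; the multivariate cph, deepsurv and ensembling steps are omitted.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# univariate screening: fit one Cox proportional hazards model per candidate
# variable and keep those with p < 0.05 (illustrated on the rossi dataset;
# the paper uses perioperative CABG variables instead)
df = load_rossi()                 # columns: week (duration), arrest (event), covariates
candidates = ["fin", "age", "race", "mar", "prio"]

significant = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[["week", "arrest", var]], duration_col="week", event_col="arrest")
    if cph.summary.loc[var, "p"] < 0.05:
        significant.append(var)

# the retained variables would then feed a multivariate CPH model and a
# DeepSurv network before ensembling (omitted in this sketch)
print("univariately significant predictors:", significant)
```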
examining deep learning architectures for crime classification and prediction | in this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. we examine the effectiveness of deep learning algorithms in this domain and provide recommendations for designing and training deep learning systems for predicting crime areas, using open data from police reports. using time series of crime types per location as training data, we conduct a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations. in our experiments with five publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. moreover, we evaluate the effectiveness of different parameters in the deep learning architectures and give insights for configuring them in order to achieve improved performance in crime classification and, finally, crime prediction.
an improved fully nonparametric estimator of the marginal survival function based on case-control clustered data | a case-control family study is a study where individuals with a disease of interest (case probands) and individuals without the disease (control probands) are randomly sampled from a well-defined population. possibly right-censored age at onset and disease status are observed for both probands and their relatives. correlation among the outcomes within a family is induced by factors such as inherited genetic susceptibility, shared environment, and common behavior patterns. for this setting, we present a nonparametric estimator of the marginal survival function, based on local linear estimation of conditional survival functions. asymptotic theory for the estimator is provided, and simulation results are presented showing that the method performs well. the method is illustrated on data from a prostate cancer study. keywords: case-control; family study; multivariate survival; nonparametric estimator; local linear |
predicting blood pressure response to fluid bolus therapy using attention-based neural networks for clinical interpretability | determining whether hypotensive patients in intensive care units (icus) should receive fluid bolus therapy (fbt) has been an extremely challenging task for intensive care physicians, as the corresponding increase in blood pressure has been hard to predict. our study utilized regression models, attention-based recurrent neural network (rnn) algorithms, and a large-scale clinical information system database to build models that can predict a successful response to fbt among hypotensive patients in icus. we investigated both time-aggregated modeling using logistic regression algorithms with regularization and time-series modeling using the long short-term memory network (lstm) and the gated recurrent units network (gru) with an attention mechanism for clinical interpretability. among all modeling strategies, the stacked lstm with the attention mechanism yielded the best-performing model, with the highest accuracy of 0.852 and an area under the curve (auc) value of 0.925. the study results may help identify hypotensive patients in icus who will have sufficient blood pressure recovery after fbt.
enhancing perceptual attributes with bayesian style generation | deep learning has brought an unprecedented progress in computer vision and significant advances have been made in predicting subjective properties inherent to visual data (e.g., memorability, aesthetic quality, evoked emotions, etc.). recently, some research works have even proposed deep learning approaches to modify images such as to appropriately alter these properties. following this research line, this paper introduces a novel deep learning framework for synthesizing images in order to enhance a predefined perceptual attribute. our approach takes as input a natural image and exploits recent models for deep style transfer and generative adversarial networks to change its style in order to modify a specific high-level attribute. differently from previous works focusing on enhancing a specific property of a visual content, we propose a general framework and demonstrate its effectiveness in two use cases, i.e. increasing image memorability and generating scary pictures. we evaluate the proposed approach on publicly available benchmarks, demonstrating its advantages over state of the art methods. |
deep inverse optimization | given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. we cast inverse optimization as a form of deep learning. our method, called deep inverse optimization, unrolls an iterative optimization process and then uses backpropagation to learn parameters that generate the observations. we demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. with this approach, inverse optimization can leverage concepts and algorithms from deep learning.
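a toy sketch of the unrolling idea on a quadratically regularised problem solved by projected gradient descent, with gradients flowing back to the cost vector; this is not the paper's interior-point formulation, and all step counts and learning rates are illustrative.

```python
import torch

def unrolled_solver(c, steps=200, lr=0.05):
    """Approximately solve  min_x  c^T x + 0.5*||x||^2  s.t. x >= 0  by unrolled
    projected gradient descent, keeping the computation graph so that gradients
    can flow back to the cost vector c."""
    x = torch.zeros_like(c)
    for _ in range(steps):
        x = torch.clamp(x - lr * (c + x), min=0.0)   # gradient step + projection
    return x

# observed solution generated by a hidden cost vector
c_true = torch.tensor([-1.0, 2.0, -0.5])
x_obs = unrolled_solver(c_true).detach()

# learn a cost vector by backpropagating through the unrolled solver
c_hat = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([c_hat], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    loss = torch.sum((unrolled_solver(c_hat) - x_obs) ** 2)
    loss.backward()
    opt.step()
print("recovered cost vector (consistent with the observation):", c_hat.detach())
```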
towards solving text-based games by producing adaptive action spaces | to solve a text-based game, an agent needs to formulate valid text commands for a given context and find the ones that lead to success. recent attempts at solving text-based games with deep reinforcement learning have focused on the latter, i.e., learning to act optimally when valid actions are known in advance. in this work, we propose to tackle the first task and train a model that generates the set of all valid commands for a given context. we try three generative models on a dataset generated with textworld. the best model can generate valid commands which were unseen at training and achieve high $f_1$ score on the test set. |
thompson sampling for noncompliant bandits | thompson sampling, a bayesian method for balancing exploration and exploitation in bandit problems, has theoretical guarantees and exhibits strong empirical performance in many domains. traditional thompson sampling, however, assumes perfect compliance, where an agent's chosen action is treated as the implemented action. this article introduces a stochastic noncompliance model that relaxes this assumption. we prove that any noncompliance in a 2-armed bernoulli bandit increases existing regret bounds. with our noncompliance model, we derive thompson sampling variants that explicitly handle both observed and latent noncompliance. with extensive empirical analysis, we demonstrate that our algorithms either match or outperform traditional thompson sampling in both compliant and noncompliant environments. |
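a minimal sketch of bernoulli thompson sampling under a simple stochastic noncompliance model in which the implemented arm is observed; the compliance mechanism here is a stand-in for illustration, not the article's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.4, 0.7])          # Bernoulli reward probabilities per arm
compliance_prob = 0.8                       # chance the chosen arm is actually implemented

alpha = np.ones(2)                          # Beta posterior parameters per arm
beta = np.ones(2)

for t in range(5000):
    # Thompson sampling: sample a mean for each arm and pick the best
    chosen = int(np.argmax(rng.beta(alpha, beta)))
    # noncompliance: with some probability the other arm is implemented instead
    implemented = chosen if rng.uniform() < compliance_prob else 1 - chosen
    reward = rng.uniform() < true_means[implemented]
    # update the posterior of the arm that was actually implemented
    # (observed compliance); ignoring noncompliance would update `chosen`
    alpha[implemented] += reward
    beta[implemented] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
```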
automatic lesion boundary detection in dermoscopy | this manuscript addresses the problem of automatic lesion boundary detection in dermoscopy using deep neural networks. the approach is based on adapting the u-net convolutional neural network with skip connections to the lesion boundary segmentation task. i hope this paper can serve, to some extent, as an account of using deep convolutional networks in a biomedical segmentation task and as a guideline for the boundary detection benchmark, inspiring further attempts and research.
joint mapping and calibration via differentiable sensor fusion | we leverage automatic differentiation (ad) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. the entire algorithm is framed as nested stochastic variational inference. an inner loop solves a soft data association problem via loopy belief propagation; a middle loop performs soft em clustering using a regularized newton solver (leveraging an ad framework); an outer loop backpropagates through the inner loops to train global parameters. we place priors over sensor parameters for different traffic object types, and demonstrate improvements with richer priors incorporating knowledge of the environment. we test our algorithm on detections of road signs observed by cars with mounted cameras, though in practice this technique can be used for any geo-tagged images. the detections were extracted by neural image detectors and classifiers, and we independently triangulate each type of sign (e.g. stop, traffic light). we find that our model is more robust to dnn misclassifications than current methods, generalizes across sign types, and can use geometric information to increase precision. our algorithm outperforms our current production baseline based on k-means clustering. we show that variational inference training allows generalization by learning sign-specific parameters. |
relation networks for optic disc and fovea localization in retinal images | diabetic retinopathy is the leading cause of blindness in the world. at least 90\% of new cases can be reduced with proper treatment and monitoring of the eyes. however, scanning the entire population of patients is a difficult endeavor. computer-aided diagnosis tools in retinal image analysis can make the process scalable and efficient. in this work, we focus on the problem of localizing the centers of the optic disc and fovea, a task crucial to the analysis of retinal scans. current systems recognize the optic disc and fovea individually, without exploiting their relations during learning. we propose a novel approach to localizing the centers of the optic disc and fovea by simultaneously processing them and modeling their relative geometry and appearance. we show that our approach improves localization and recognition by incorporating object-object relations efficiently, and achieves highly competitive results. |
cluster-based learning from weakly labeled bags in digital pathology | to alleviate the burden of gathering detailed expert annotations when training deep neural networks, we propose a weakly supervised learning approach to recognize metastases in microscopic images of breast lymph nodes. we describe an alternative training loss which clusters weakly labeled bags in latent space to inform relevance of patch-instances during training of a convolutional neural network. we evaluate our method on the camelyon dataset which contains high-resolution digital slides of breast lymph nodes, where labels are provided at the image-level and only subsets of patches are made available during training. |
incorporating deep features in the analysis of tissue microarray images | tissue microarray (tma) images have been used increasingly often in cancer studies and the validation of biomarkers. tacoma---a cutting-edge automatic scoring algorithm for tma images---is comparable to pathologists in terms of accuracy and repeatability. here we consider how this algorithm may be further improved. inspired by the recent success of deep learning, we propose to incorporate representations learnable through computation. we explore representations of a group nature through unsupervised learning, e.g., hierarchical clustering and recursive space partition. information carried by clustering or spatial partitioning may be more concrete than the labels when the data are heterogeneous, or could help when the labels are noisy. the use of such information could be viewed as regularization in model fitting. it is motivated by major challenges in tma image scoring---heterogeneity and label noise, and the cluster assumption in semi-supervised learning. using this information on tma images of breast cancer, we have reduced the error rate of tacoma by about 6%. further simulations on synthetic data provide insights on when such representations would likely help. although we focus on tmas, learnable representations of this type are expected to be applicable in other settings. |
generating diverse programs with instruction conditioned reinforced adversarial learning | advances in deep reinforcement learning have led to agents that perform well across a variety of sensory-motor domains. in this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction. final goals are specified to our agent via images of the scenes. a symbolic instruction consistent with the goal images is used as the conditioning input for our policies. since a single instruction corresponds to a diverse set of different but still consistent end-goal images, the agent needs to learn to generate a distribution over programs given an instruction. we demonstrate that with simple changes to the reinforced adversarial learning objective, we can learn instruction conditioned policies to achieve the corresponding diverse set of goals. most importantly, our agent's stochastic policy is shown to more accurately capture the diversity in the goal distribution than a fixed pixel-based reward function baseline. we demonstrate the efficacy of our approach on two domains: (1) drawing mnist digits with a paint software conditioned on instructions and (2) constructing scenes in a 3d editor that satisfies a certain instruction. |
accelerating large scale knowledge distillation via dynamic importance sampling | knowledge distillation is an effective technique that transfers knowledge from a large teacher model to a shallow student. however, just as in massive classification, large scale knowledge distillation also imposes heavy computational costs on training deep neural network models, as the softmax activations at the last layer involve computing probabilities over numerous classes. in this work, we apply the idea of importance sampling, which is often used in neural machine translation, to large scale knowledge distillation. we present a method called dynamic importance sampling, where ranked classes are sampled from a dynamic distribution derived from the interaction between the teacher and student in full distillation. we highlight the utility of our proposal prior, which helps the student capture the main information in the loss function. our approach manages to reduce the computational cost at training time while maintaining competitive performance on the cifar-100 and market-1501 person re-identification datasets.
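a hedged sketch of distilling over a sampled subset of classes with the standard sampled-softmax logit correction; the dynamic, teacher-student-derived proposal of the paper is replaced here by a fixed uniform proposal (for which the correction is a constant, though it matters once the proposal is non-uniform).

```python
import math
import torch
import torch.nn.functional as F

def sampled_distillation_loss(student_logits, teacher_logits, sample_idx, log_q, T=2.0):
    """KL distillation computed only over a sampled subset of classes, with the
    standard sampled-softmax correction (subtract the log proposal probability)."""
    s = student_logits[:, sample_idx] / T - log_q
    t = teacher_logits[:, sample_idx] / T - log_q
    return F.kl_div(F.log_softmax(s, dim=1), F.softmax(t, dim=1),
                    reduction="batchmean") * T * T

# toy usage: 10,000 classes, distill over 256 uniformly sampled ones
num_classes, m = 10_000, 256
student_logits = torch.randn(8, num_classes, requires_grad=True)
teacher_logits = torch.randn(8, num_classes)
sample_idx = torch.randint(0, num_classes, (m,))
log_q = torch.full((m,), math.log(1.0 / num_classes))     # uniform proposal here
loss = sampled_distillation_loss(student_logits, teacher_logits, sample_idx, log_q)
loss.backward()
```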
multi-agent deep reinforcement learning with extremely noisy observations | multi-agent reinforcement learning systems aim to provide interacting agents with the ability to collaboratively learn and adapt to the behaviour of other agents. in many real-world applications, the agents can only acquire a partial view of the world. here we consider a setting whereby most agents' observations are also extremely noisy, hence only weakly correlated to the true state of the environment. under these circumstances, learning an optimal policy becomes particularly challenging, even in the unrealistic case that an agent's policy can be made conditional upon all other agents' observations. to overcome these difficulties, we propose a multi-agent deep deterministic policy gradient algorithm enhanced by a communication medium (maddpg-m), which implements a two-level, concurrent learning mechanism. an agent's policy depends on its own private observations as well as those explicitly shared by others through a communication medium. at any given point in time, an agent must decide whether its private observations are sufficiently informative to be shared with others. however, our environments provide no explicit feedback informing an agent whether a communication action is beneficial, rather the communication policies must also be learned through experience concurrently to the main policies. our experimental results demonstrate that the algorithm performs well in six highly non-stationary environments of progressively higher complexity, and offers substantial performance gains compared to the baselines. |
generative adversarial self-imitation learning | this paper explores a simple regularizer for reinforcement learning by proposing generative adversarial self-imitation learning (gasil), which encourages the agent to imitate past good trajectories via generative adversarial imitation learning framework. instead of directly maximizing rewards, gasil focuses on reproducing past good trajectories, which can potentially make long-term credit assignment easier when rewards are sparse and delayed. gasil can be easily combined with any policy gradient objective by using gasil as a learned shaped reward function. our experimental results show that gasil improves the performance of proximal policy optimization on 2d point mass and mujoco environments with delayed reward and stochastic dynamics. |