text (string, lengths 29–3.31k) | label (sequence, lengths 1–11) |
---|---|
Generative adversarial networks (GANs) have achieved great success in image
translation and manipulation. However, high-fidelity image generation with
faithful style control remains a grand challenge in computer vision. This paper
presents a versatile image translation and manipulation framework that achieves
accurate semantic and style guidance in image generation by explicitly building
a correspondence. To handle the quadratic complexity incurred by building the
dense correspondences, we introduce a bi-level feature alignment strategy that
adopts a top-$k$ operation to rank block-wise features, followed by dense attention between block features, which reduces the memory cost substantially. As
the top-$k$ operation involves index swapping which precludes the gradient
propagation, we propose to approximate the non-differentiable top-$k$ operation
with a regularized earth mover's problem so that its gradient can be
effectively back-propagated. In addition, we design a novel semantic position encoding mechanism that builds up a coordinate system for each individual semantic
region to preserve texture structures while building correspondences. Further,
we design a novel confidence feature injection module that mitigates the mismatch problem by fusing features adaptively according to the reliability of the built
correspondences. Extensive experiments show that our method achieves superior
performance qualitatively and quantitatively as compared with the
state-of-the-art. The code is available at
\href{https://github.com/fnzhan/RABIT}{https://github.com/fnzhan/RABIT}. | [
"cs.CV"
] |
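The abstract above relaxes the non-differentiable top-$k$ selection into a regularized earth mover's (optimal transport) problem. The following NumPy sketch illustrates that general idea, not the paper's implementation: scores are softly assigned to a "selected" and a "not selected" bin via Sinkhorn iterations, producing a relaxed top-$k$ mask that varies smoothly with the scores. The anchor choice, regularization strength, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_top_k(scores, k, eps=0.1, n_iters=200):
    """Soft top-k indicator via entropy-regularized optimal transport, in the
    spirit of casting top-k as a regularized earth mover's problem.
    Returns weights in [0, 1] that sum to approximately k."""
    n = scores.shape[0]
    # Two anchor values: points transported to the larger anchor are (softly)
    # selected as top-k, points transported to the smaller anchor are not.
    anchors = np.array([scores.min(), scores.max()])
    cost = (scores[:, None] - anchors[None, :]) ** 2      # (n, 2) cost matrix
    mu = np.full(n, 1.0 / n)                               # source marginal
    nu = np.array([(n - k) / n, k / n])                    # target marginal
    K = np.exp(-cost / eps)                                # Gibbs kernel
    u, v = np.ones(n), np.ones(2)
    for _ in range(n_iters):                               # Sinkhorn iterations
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    plan = u[:, None] * K * v[None, :]                     # transport plan
    return plan[:, 1] * n                                  # relaxed top-k mask

scores = np.array([0.1, 2.0, -0.5, 1.5, 0.3])
print(soft_top_k(scores, k=2))   # entries near 1 for the two largest scores
```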
Anomaly detection is typically regarded as an unsupervised learning task, as anomalies stem from adversarial or unlikely events with unknown distributions.
However, the predictive performance of purely unsupervised anomaly detection
often fails to match the required detection rates in many tasks and there
exists a need for labeled data to guide the model generation. Our first
contribution shows that classical semi-supervised approaches, originating from
a supervised classifier, are inappropriate and hardly detect new and unknown
anomalies. We argue that semi-supervised anomaly detection needs to be grounded in the unsupervised learning paradigm, and we devise a novel algorithm that meets this requirement. Although the problem is intrinsically non-convex, we further show that the
optimization problem has a convex equivalent under relatively mild assumptions.
Additionally, we propose an active learning strategy to automatically filter
candidates for labeling. In an empirical study on network intrusion detection
data, we observe that the proposed learning methodology requires much less
labeled data than the state-of-the-art, while achieving higher detection
accuracies. | [
"cs.LG"
] |
The data drawn from biological, economic, and social systems are often
confounded due to the presence of unmeasured variables. Prior work in causal
discovery has focused on discrete search procedures for selecting acyclic
directed mixed graphs (ADMGs), specifically ancestral ADMGs, that encode
ordinary conditional independence constraints among the observed variables of
the system. However, confounded systems also exhibit more general equality
restrictions that cannot be represented via these graphs, placing a limit on
the kinds of structures that can be learned using ancestral ADMGs. In this
work, we derive differentiable algebraic constraints that fully characterize
the space of ancestral ADMGs, as well as more general classes of ADMGs, arid
ADMGs and bow-free ADMGs, that capture all equality restrictions on the
observed variables. We use these constraints to cast causal discovery as a
continuous optimization problem and design differentiable procedures to find
the best fitting ADMG when the data comes from a confounded linear system of
equations with correlated errors. We demonstrate the efficacy of our method
through simulations and application to a protein expression dataset. Code
implementing our methods is open-source and publicly available at
https://gitlab.com/rbhatta8/dcd and will be incorporated into the Ananke
package. | [
"cs.LG",
"stat.ML",
"G.3; J.3; F.2.2"
] |
This paper concerns dictionary learning, i.e., sparse coding, a fundamental
representation learning problem. We show that a subgradient descent algorithm,
with random initialization, can provably recover orthogonal dictionaries on a
natural nonsmooth, nonconvex $\ell_1$ minimization formulation of the problem,
under mild statistical assumptions on the data. This is in contrast to previous
provable methods that require either expensive computation or delicate
initialization schemes. Our analysis develops several tools for characterizing
landscapes of nonsmooth functions, which might be of independent interest for
provable training of deep networks with nonsmooth activations (e.g., ReLU),
among numerous other applications. Preliminary experiments corroborate our
analysis and show that our algorithm works well empirically in recovering
orthogonal dictionaries. | [
"cs.LG",
"cs.IT",
"math.IT",
"math.OC",
"stat.ML"
] |
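As a rough illustration of the abstract above, the sketch below runs Riemannian subgradient descent on the nonsmooth, nonconvex objective $\min_{\|q\|=1} \|q^\top Y\|_1$ from a random initialization. The data-generation choices (identity dictionary, Bernoulli-Gaussian sparse codes) and the step-size schedule are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def recover_one_dictionary_direction(Y, steps=500, lr=0.1):
    """Subgradient descent over the unit sphere for the nonsmooth, nonconvex
    l1 formulation: minimize ||q^T Y||_1 subject to ||q|| = 1, from a random
    start. With an orthogonal dictionary and sparse codes, q tends toward
    (a sign flip of) one dictionary column."""
    rng = np.random.default_rng(0)
    q = rng.standard_normal(Y.shape[0])
    q /= np.linalg.norm(q)
    for t in range(steps):
        g = Y @ np.sign(Y.T @ q)                 # subgradient of ||q^T Y||_1
        g -= (g @ q) * q                         # project onto tangent space of sphere
        q -= (lr / np.sqrt(t + 1)) * g           # diminishing step size
        q /= np.linalg.norm(q)                   # retract back to the sphere
    return q

# Toy data: orthogonal dictionary (identity) times Bernoulli-Gaussian sparse codes.
X = (np.random.rand(8, 2000) < 0.1) * np.random.randn(8, 2000)
print(np.round(recover_one_dictionary_direction(X), 2))  # ~ a signed standard basis vector
```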
Biometric authentication involves various technologies to identify
individuals by exploiting their unique, measurable physiological and behavioral
characteristics. However, traditional biometric authentication systems (e.g.,
face recognition, iris, retina, voice, and fingerprint) are facing an
increasing risk of being tricked by biometric tools such as anti-surveillance
masks, contact lenses, vocoders, or fingerprint films. In this paper, we design a multimodal biometric authentication system named DeepKey, which uses both Electroencephalography (EEG) and gait signals to better protect against such risks. DeepKey consists of two key components: an Invalid ID Filter Model to
block unauthorized subjects and an identification model based on
an attention-based Recurrent Neural Network (RNN) to identify a subject's EEG IDs and gait IDs in parallel. A subject is granted access only when all the components produce consistent evidence matching the user's proclaimed identity. We implement DeepKey with a live deployment at our university and conduct
extensive empirical experiments to study its technical feasibility in practice.
DeepKey achieves a False Acceptance Rate (FAR) of 0 and a False Rejection Rate (FRR) of 1.0%. The preliminary results demonstrate that DeepKey is feasible, shows consistently superior performance compared to a set of methods, and has the potential to be applied to authentication deployments in real-world settings. | [
"cs.LG"
] |
The Exploration-Exploitation tradeoff arises in Reinforcement Learning when
one cannot tell if a policy is optimal. Then, there is a constant need to
explore new actions instead of exploiting past experience. In practice, it is
common to resolve the tradeoff by using a fixed exploration mechanism, such as
$\epsilon$-greedy exploration or by adding Gaussian noise, while still trying
to learn an optimal policy. In this work, we take a different approach and
study exploration-conscious criteria, that result in optimal policies with
respect to the exploration mechanism. Solving these criteria, as we establish,
amounts to solving a surrogate Markov Decision Process. We continue and analyze
properties of exploration-conscious optimal policies and characterize two
general approaches to solve such criteria. Building on the approaches, we apply
simple changes in existing tabular and deep Reinforcement Learning algorithms
and empirically demonstrate superior performance relative to their
non-exploration-conscious counterparts, both for discrete and continuous action
spaces. | [
"cs.LG",
"stat.ML"
] |
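One reading of the abstract above is that the exploration-conscious criterion can be solved as a surrogate MDP whose backup uses the value of the exploring ($\epsilon$-greedy) policy instead of the greedy maximum. The toy tabular value-iteration sketch below illustrates that idea under assumed inputs; it is not the paper's algorithm.

```python
import numpy as np

def exploration_conscious_value_iteration(P, R, gamma=0.95, eps=0.1, iters=500):
    """Value iteration for an epsilon-greedy-aware criterion: instead of backing
    up max_a Q(s', a), back up the value of the epsilon-greedy policy at s',
    so the resulting policy is optimal w.r.t. the exploration mechanism.
    P: (S, A, S) transition tensor, R: (S, A) rewards (hypothetical toy inputs)."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        greedy_v = Q.max(axis=1)
        uniform_v = Q.mean(axis=1)
        v_eps = (1 - eps) * greedy_v + eps * uniform_v   # value under eps-greedy
        Q = R + gamma * np.einsum('sat,t->sa', P, v_eps)
    return Q

# Toy usage with random, normalized transition probabilities.
S, A = 4, 2
rng = np.random.default_rng(0)
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((S, A))
print(exploration_conscious_value_iteration(P, R).round(2))
```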
State-of-the-art temporal action detectors to date are based on two-stream
input including RGB frames and optical flow. Although combining RGB frames and
optical flow boosts performance significantly, optical flow is a hand-designed
representation that not only requires heavy computation, but also makes two-stream methods methodologically unsatisfactory, since they are often not learned end-to-end jointly with the flow. In this paper, we argue that optical flow is
dispensable in high-accuracy temporal action detection and image level data
augmentation (ILDA) is the key solution to avoid performance degradation when
optical flow is removed. To evaluate the effectiveness of ILDA, we design a
simple yet efficient one-stage temporal action detector based on single RGB
stream named DaoTAD. Our results show that when trained with ILDA, DaoTAD has
comparable accuracy with all existing state-of-the-art two-stream detectors
while surpassing the inference speed of previous methods by a large margin, reaching an astounding 6668 fps on a GeForce GTX 1080 Ti. Code is
available at \url{https://github.com/Media-Smart/vedatad}. | [
"cs.CV",
"cs.AI"
] |
Event forecasting is a challenging, yet important task, as humans seek to
constantly plan for the future. Existing automated forecasting studies rely
mostly on structured data, such as time-series or event-based knowledge graphs,
to help predict future events. In this work, we aim to formulate a task,
construct a dataset, and provide benchmarks for developing methods for event
forecasting with large volumes of unstructured text data. To simulate the
forecasting scenario on temporal news documents, we formulate the problem as a
restricted-domain, multiple-choice, question-answering (QA) task. Unlike
existing QA tasks, our task limits accessible information, and thus a model has
to make a forecasting judgement. To showcase the usefulness of this task
formulation, we introduce ForecastQA, a question-answering dataset consisting
of 10,392 event forecasting questions, which have been collected and verified
via crowdsourcing efforts. We present our experiments on ForecastQA using
BERT-based models and find that our best model achieves 60.1% accuracy on the
dataset, which still lags behind human performance by about 19%. We hope
ForecastQA will support future research efforts in bridging this gap. | [
"cs.LG",
"stat.ML"
] |
In this work we propose a new computational framework, based on generative
deep models, for synthesis of photo-realistic food meal images from textual
descriptions of their ingredients. Previous works on synthesis of images from
text typically rely on pre-trained text models to extract text features,
followed by a generative adversarial network (GAN) aimed at generating realistic
images conditioned on the text features. These works mainly focus on generating
spatially compact and well-defined categories of objects, such as birds or
flowers. In contrast, meal images are significantly more complex, consisting of
multiple ingredients whose appearance and spatial qualities are further
modified by cooking methods. We propose a method that first builds an
attention-based ingredients-image association model, which is then used to
condition a generative neural network tasked with synthesizing meal images.
Furthermore, a cycle-consistent constraint is added to further improve image
quality and control appearance. Extensive experiments show our model is able to
generate meal images corresponding to the ingredients, which could be used to augment existing datasets for solving other computational food analysis
problems. | [
"cs.CV",
"cs.GR",
"cs.LG",
"stat.ML"
] |
Conventional image retrieval techniques for Structure-from-Motion (SfM)
suffer from a limited ability to effectively recognize repetitive patterns and cannot guarantee the creation of just enough match pairs with high precision and high
recall. In this paper, we present a novel retrieval method based on Graph
Convolutional Network (GCN) to generate accurate pairwise matches without
costly redundancy. We formulate the image retrieval task as a binary node classification problem on graph data: a node is marked as positive if it shares scene overlap with the query image. The key idea is that the
local context in feature space around a query image contains rich information
about the matchable relation between this image and its neighbors. By
constructing a subgraph surrounding the query image as input data, we adopt a
learnable GCN to infer whether nodes in the subgraph have overlapping regions
with the query photograph. Experiments demonstrate that our method performs
remarkably well on the challenging dataset of highly ambiguous and duplicated
scenes. Besides, compared with state-of-the-art matchable retrieval methods,
the proposed approach significantly reduces useless attempted matches without
sacrificing the accuracy and completeness of reconstruction. | [
"cs.CV"
] |
Despite the success of Generative Adversarial Networks (GANs), mode collapse
remains a serious issue during GAN training. To date, little work has focused
on understanding and quantifying which modes have been dropped by a model. In
this work, we visualize mode collapse at both the distribution level and the
instance level. First, we deploy a semantic segmentation network to compare the
distribution of segmented objects in the generated images with the target
distribution in the training set. Differences in statistics reveal object
classes that are omitted by a GAN. Second, given the identified omitted object
classes, we visualize the GAN's omissions directly. In particular, we compare
specific differences between individual photos and their approximate inversions
by a GAN. To this end, we relax the problem of inversion and solve the
tractable problem of inverting a GAN layer instead of the entire generator.
Finally, we use this framework to analyze several recent GANs trained on
multiple datasets and identify their typical failure cases. | [
"cs.CV",
"cs.GR",
"cs.LG",
"eess.IV"
] |
Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown
distinct advantages, e.g., solving memory-dependent tasks and meta-learning.
However, little effort has been spent on improving RNN architectures and on
understanding the underlying neural mechanisms for performance gain. In this
paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical
results show that the network can autonomously learn to abstract sub-goals and
can self-develop an action hierarchy using internal dynamics in a challenging
continuous control task. Furthermore, we show that the self-developed
compositionality of the network enables faster re-learning when adapting to a new task that is a re-composition of previously learned sub-goals, compared to starting from scratch. We also find that improved performance can be achieved
when neural activities are subject to stochastic rather than deterministic
dynamics. | [
"cs.LG",
"stat.ML"
] |
Many problems at the intersection of combinatorics and computer science
require solving for a permutation that optimally matches, ranks, or sorts some
data. These problems usually have a task-specific, often non-differentiable
objective function that data-driven algorithms can use as a learning signal. In
this paper, we propose the Sinkhorn Policy Gradient (SPG) algorithm for
learning policies on permutation matrices. The actor-critic neural network
architecture we introduce for SPG uniquely decouples representation learning of
the state space from the highly-structured action space of permutations with a
temperature-controlled Sinkhorn layer. The Sinkhorn layer produces continuous
relaxations of permutation matrices so that the actor-critic architecture can
be trained end-to-end. Our empirical results show that agents trained with SPG
can perform competitively on sorting, the Euclidean TSP, and matching tasks. We
also observe that SPG is significantly more data efficient at the matching task
than the baseline methods, which indicates that SPG is conducive to learning
representations that are useful for reasoning about permutations. | [
"cs.LG",
"stat.ML"
] |
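The core relaxation in the abstract above is a temperature-controlled Sinkhorn layer that turns a square score matrix into a doubly-stochastic (relaxed permutation) matrix. A minimal NumPy sketch of such a layer follows, with normalization done in log space for numerical stability; the temperature and iteration count are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_layer(logits, tau=0.5, n_iters=20):
    """Temperature-controlled Sinkhorn layer (sketch): maps an (n, n) score
    matrix to an approximately doubly-stochastic matrix, a continuous relaxation
    of a permutation matrix, by alternating row and column normalization in log
    space. A lower temperature tau pushes the output toward a hard permutation."""
    log_alpha = logits / tau
    for _ in range(n_iters):
        log_alpha = log_alpha - logsumexp(log_alpha, axis=1, keepdims=True)  # rows sum to 1
        log_alpha = log_alpha - logsumexp(log_alpha, axis=0, keepdims=True)  # cols sum to 1
    return np.exp(log_alpha)

M = np.random.randn(4, 4)
P = sinkhorn_layer(M, tau=0.1)
print(P.round(2))
print(P.sum(axis=0).round(2), P.sum(axis=1).round(2))  # both close to all-ones
```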
One obstacle that has so far prevented the adoption of machine learning models, particularly in critical areas, is the lack of explainability. In this work, a practical approach for gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is
presented. Simply fitting a decision tree to a trained NN usually leads to
unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal
regularization during training, however, preserves the accuracy of the NN,
while it can be closely approximated by small decision trees. Tests with
different data sets confirm that L1-orthogonal regularization yields models of
lower complexity and at the same time higher fidelity compared to other
regularizers. | [
"cs.LG",
"stat.ML"
] |
Hypergraphs generalize graphs to model higher-order correlations among entities and have been successfully adopted in various
research domains. Meanwhile, HyperGraph Neural Network (HGNN) is currently the
de-facto method for hypergraph representation learning. However, HGNN aims at
single hypergraph learning and uses a pre-concatenation approach when
confronting multi-modal datasets, which leads to sub-optimal exploitation of
the inter-correlations of multi-modal hypergraphs. HGNN also suffers from the
over-smoothing issue, that is, its performance drops significantly when layers
are stacked up. To resolve these issues, we propose the Residual enhanced
Multi-Hypergraph Neural Network, which can not only fuse multi-modal
information from each hypergraph effectively, but also circumvent the
over-smoothing issue associated with HGNN. We conduct experiments on two 3D
benchmarks, the NTU and the ModelNet40 datasets, and compare against multiple
state-of-the-art methods. Experimental results demonstrate that both the
residual hypergraph convolutions and the multi-fusion architecture can improve
the performance of the base model and the combined model achieves a new
state-of-the-art. Code is available at
\url{https://github.com/OneForward/ResMHGNN}. | [
"cs.CV"
] |
Deep learning has ushered in a new era of machine learning, in computer vision and beyond. Convolutional neural networks have been implemented in image
classification, segmentation and object detection. Despite recent advancements,
we are still in the very early stages and have yet to settle on best practices for network architectures that are deep in design, small in size, and quick to train. In this work, we propose a very deep neural network comprised of
16 Convolutional layers compressed with the Fire Module adapted from the
SqueezeNet model. We also add residual connections to help
suppress degradation. This model can be implemented on almost every neural
network model with fully incorporated residual learning. The proposed model, Residual-Squeeze-VGG16 (ResSquVGG16), was trained on the large-scale MIT Places365-Standard scene dataset. In our tests, the model performed with
accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation
accuracy while also enjoying a 23.86% reduction in training time and an 88.4%
reduction in size. In our tests, this model was trained from scratch. | [
"cs.CV"
] |
Image segmentation is the process of partitioning an image into different regions or groups based on characteristics such as color, texture, motion, or shape. Active contours is a popular variational method for object
segmentation in images, in which the user initializes a contour which evolves
in order to optimize an objective function designed such that the desired
object boundary is the optimal solution. Recently, imaging modalities that
produce manifold-valued images have emerged, for example, DT-MRI images and vector fields. The traditional active contour model does not work on such images. In this paper, we generalize the active contour model to work on manifold-valued images. As expected, our algorithm detects regions with similar manifold values in the image. Our algorithm also produces the expected results on usual gray-scale images, since these are simply trivial examples of manifold-valued images.
As another application of our general active contour model, we perform texture
segmentation on gray-scale images by first creating an appropriate manifold-valued image. We demonstrate segmentation results for manifold-valued images
and texture images. | [
"cs.CV"
] |
Multi-kernel learning (MKL) has been widely used in function approximation
tasks. The key problem of MKL is to combine kernels in a prescribed dictionary.
Inclusion of irrelevant kernels in the dictionary can deteriorate accuracy of
MKL, and increase the computational complexity. To improve the accuracy of
function approximation and reduce the computational complexity, the present
paper studies data-driven selection of kernels from the dictionary that provide
satisfactory function approximations. Specifically, based on the similarities
among kernels, the novel framework constructs and refines a graph to assist
choosing a subset of kernels. In addition, random feature approximation is
utilized to enable online implementation for sequentially obtained data.
Theoretical analysis shows that our proposed algorithms enjoy a tighter sub-linear regret bound compared with state-of-the-art graph-based online MKL
alternatives. Experiments on a number of real datasets also showcase the
advantages of our novel graph-aided framework. | [
"cs.LG"
] |
Recent saliency models extensively explore to incorporate multi-scale
contextual information from Convolutional Neural Networks (CNNs). Besides
direct fusion strategies, many approaches introduce message-passing to enhance
CNN features or predictions. However, the messages are mainly transmitted in
two ways, by feature-to-feature passing, and by prediction-to-prediction
passing. In this paper, we add message-passing between features and predictions
and propose a deep unified CRF saliency model. We design a novel cascaded CRF
architecture with CNN to jointly refine deep features and predictions at each
scale and progressively compute a final refined saliency map. We formulate the
CRF graphical model that involves message-passing of feature-feature,
feature-prediction, and prediction-prediction, from the coarse scale to the
finer scale, to update the features and the corresponding predictions. Also, we
formulate the mean-field updates for joint end-to-end model training with CNN
through backpropagation. The proposed deep unified CRF saliency model is
evaluated over six datasets and shows highly competitive performance compared with the state of the art. | [
"cs.CV"
] |
Almost all of the current top-performing object detection networks employ
region proposals to guide the search for object instances. State-of-the-art
region proposal methods usually need several thousand proposals to get high
recall, thus hurting the detection efficiency. Although the latest Region
Proposal Network method gets promising detection accuracy with several hundred
proposals, it still struggles in small-size object detection and precise
localization (e.g., large IoU thresholds), mainly due to the coarseness of its
feature maps. In this paper, we present a deep hierarchical network, namely
HyperNet, for handling region proposal generation and object detection jointly.
Our HyperNet is primarily based on an elaborately designed Hyper Feature which
aggregates hierarchical feature maps first and then compresses them into a
uniform space. The Hyper Features well incorporate deep but highly semantic,
intermediate but really complementary, and shallow but naturally
high-resolution features of the image, thus enabling us to construct HyperNet
by sharing them both in generating proposals and detecting objects via an
end-to-end joint training strategy. For the deep VGG16 model, our method
achieves leading recall and state-of-the-art object detection
accuracy on PASCAL VOC 2007 and 2012 using only 100 proposals per image. It
runs with a speed of 5 fps (including all steps) on a GPU, thus having the
potential for real-time processing. | [
"cs.CV"
] |
Sensitive inferences and user re-identification are major threats to privacy
when raw sensor data from wearable or portable devices are shared with
cloud-assisted applications. To mitigate these threats, we propose mechanisms
to transform sensor data before sharing them with applications running on
users' devices. These transformations aim at eliminating patterns that can be
used for user re-identification or for inferring potentially sensitive
activities, while introducing a minor utility loss for the target application
(or task). We show that, on gesture and activity recognition tasks, we can
prevent inference of potentially sensitive activities while keeping the
reduction in recognition accuracy of non-sensitive activities to less than 5
percentage points. We also show that we can reduce the accuracy of user
re-identification and of the potential inference of gender to the level of a
random guess, while keeping the accuracy of activity recognition comparable to
that obtained on the original data. | [
"cs.LG",
"cs.HC",
"eess.SP",
"stat.ML"
] |
Point cloud semantic segmentation often requires large-scale annotated training data, but point-wise labels are clearly too tedious to prepare. While
some recent methods propose to train a 3D network with small percentages of
point labels, we take the approach to an extreme and propose "One Thing One
Click," meaning that the annotator only needs to label one point per object. To
leverage these extremely sparse labels in network training, we design a novel
self-training approach, in which we iteratively conduct the training and label
propagation, facilitated by a graph propagation module. Also, we adopt a
relation network to generate per-category prototypes and explicitly model the
similarity among graph nodes to generate pseudo labels to guide the iterative
training. Experimental results on both ScanNet-v2 and S3DIS show that our
self-training approach, with extremely-sparse annotations, outperforms all
existing weakly supervised methods for 3D semantic segmentation by a large
margin, and our results are also comparable to those of the fully supervised
counterparts. | [
"cs.CV"
] |
Designing deep networks robust to adversarial examples remains an open
problem. Likewise, recent zeroth order hard-label attacks on image
classification models have shown comparable performance to their first-order,
gradient-level alternatives. It was recently shown in the gradient-level
setting that regular adversarial examples leave the data manifold, while their
on-manifold counterparts are in fact generalization errors. In this paper, we
argue that query efficiency in the zeroth-order setting is connected to an
adversary's traversal through the data manifold. To explain this behavior, we
propose an information-theoretic argument based on a noisy manifold distance
oracle, which leaks manifold information through the adversary's gradient
estimate. Through numerical experiments of manifold-gradient mutual
information, we show this behavior acts as a function of the effective problem
dimensionality and number of training points. On real-world datasets and
multiple zeroth-order attacks using dimension-reduction, we observe the same
universal behavior to produce samples closer to the data manifold. This results
in up to a two-fold decrease in the manifold distance measure, regardless of the
model robustness. Our results suggest that taking the manifold-gradient mutual
information into account can thus inform better robust model design in the
future, and avoid leakage of the sensitive data manifold. | [
"cs.LG"
] |
Bayesian interpretations of neural networks have a long history, dating back to early work in the 1990s, and have recently regained attention because of
their desirable properties like uncertainty estimation, model robustness and
regularisation. We want to discuss here the application of Bayesian models to
knowledge sharing between neural networks. Knowledge sharing comes in different
facets, such as transfer learning, model distillation and shared embeddings.
All of these tasks have in common that learned "features" ought to be shared
across different networks. Theoretically rooted in the concepts of Bayesian
neural networks, this work has widespread application to general deep learning. | [
"stat.ML",
"cs.LG"
] |
This paper explores the use of the Learning Automata (LA) algorithm to
compute threshold selection for image segmentation as it is a critical
preprocessing step for image analysis, pattern recognition and computer vision.
LA is a heuristic method which is able to solve complex optimization problems
with interesting results in parameter estimation. Unlike other techniques that commonly search through the parameter map, LA explores the probability space, providing appropriate convergence properties and robustness. The segmentation
task is therefore considered as an optimization problem and the LA is used to
generate the image multi-threshold separation. In this approach, one 1D
histogram of a given image is approximated through a Gaussian mixture model
whose parameters are calculated using the LA algorithm. Each Gaussian function
approximating the histogram represents a pixel class and therefore a threshold
point. The method shows fast convergence, avoiding the typical sensitivity to initial conditions of the Expectation Maximization (EM) algorithm and the complex, time-consuming computations commonly found in gradient methods.
Experimental results demonstrate the algorithm's ability to perform automatic multi-threshold selection and show interesting advantages when compared to other algorithms solving the same task. | [
"cs.CV"
] |
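The abstract above approximates a gray-level histogram with a Gaussian mixture and derives one threshold per pixel class. The sketch below only illustrates the mapping from mixture parameters to threshold points; for brevity it fits the mixture with scikit-learn's standard EM-based GaussianMixture and takes the simple midpoint between adjacent component means, whereas the paper's contribution is performing the fit with Learning Automata instead.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def multi_threshold(pixels, n_classes=3):
    """Fit a Gaussian mixture to 1-D pixel intensities and derive one threshold
    between each pair of adjacent pixel classes (here simply the midpoint
    between neighbouring component means)."""
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    gmm.fit(pixels.reshape(-1, 1))
    means = np.sort(gmm.means_.ravel())          # sorted component means
    return (means[:-1] + means[1:]) / 2.0        # thresholds between classes

# Toy "image": three intensity populations around 40, 120, and 210.
pixels = np.concatenate([40 + 10 * np.random.randn(1000),
                         120 + 10 * np.random.randn(1000),
                         210 + 10 * np.random.randn(1000)])
print(multi_threshold(pixels, n_classes=3))      # thresholds near 80 and 165
```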
Considering the inherent stochasticity and uncertainty, predicting future
video frames is exceptionally challenging. In this work, we study the problem
of video prediction by combining interpretability of stochastic state space
models and representation learning of deep neural networks. Our model builds
upon a variational encoder which transforms the input video into a latent
feature space and a Luenberger-type observer which captures the dynamic
evolution of the latent features. This enables the decomposition of videos into
static features and dynamics in an unsupervised manner. By deriving the
stability theory of the nonlinear Luenberger-type observer, the hidden states
in the feature space become insensitive with respect to the initial values,
which improves the robustness of the overall model. Furthermore, the
variational lower bound on the data log-likelihood can be derived to obtain the
tractable posterior prediction distribution based on the variational principle.
Finally, experiments on the Bouncing Balls and Pendulum datasets demonstrate that the proposed model outperforms concurrent works. | [
"cs.CV"
] |
In this paper we investigate the use of model-based reinforcement learning to
assist people with Type 1 Diabetes with insulin dose decisions. The proposed
architecture consists of multiple Echo State Networks to predict blood glucose
levels combined with a Model Predictive Controller for planning. Echo State Networks are a variant of recurrent neural networks that allow us to learn long-term dependencies in time-series input data in an online manner.
Additionally, we address the quantification of uncertainty for a more robust
control. Here, we used ensembles of Echo State Networks to capture model
(epistemic) uncertainty. We evaluated the approach with the FDA-approved
UVa/Padova Type 1 Diabetes simulator and compared the results against baseline
algorithms such as Basal-Bolus controller and Deep Q-learning. The results
suggest that the model-based reinforcement learning algorithm can perform
equally or better than the baseline algorithms for the majority of virtual Type
1 Diabetes person profiles tested. | [
"cs.LG"
] |
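A minimal Echo State Network sketch, loosely following the description above: a fixed random recurrent reservoir with a trainable linear readout fit by ridge regression. The reservoir size, spectral radius, and the sine-wave stand-in for glucose measurements are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal Echo State Network: fixed random reservoir, ridge-regression readout."""
    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9, ridge=1e-4):
        rng = np.random.default_rng(0)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale recurrent weights to the desired spectral radius (echo state property).
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.ridge = ridge

    def _states(self, U):
        x = np.zeros(self.W.shape[0])
        states = []
        for u in U:                                  # drive the reservoir with inputs
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, U, y):
        X = self._states(U)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.w_out = np.linalg.solve(A, X.T @ y)     # ridge-regression readout
        return self

    def predict(self, U):
        return self._states(U) @ self.w_out

# Toy usage: one-step-ahead prediction of a sine wave (a stand-in for glucose data).
t = np.linspace(0, 20, 400)
series = np.sin(t)
esn = EchoStateNetwork(n_inputs=1).fit(series[:-1, None], series[1:])
print(esn.predict(series[:-1, None])[:5])
```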
Arbitrary shape text detection is a challenging task due to the high
complexity and variety of scene texts. In this work, we propose a novel
adaptive boundary proposal network for arbitrary shape text detection, which
can learn to directly produce accurate boundaries for arbitrary shape text
without any post-processing. Our method mainly consists of a boundary proposal
model and an innovative adaptive boundary deformation model. The boundary
proposal model constructed by multi-layer dilated convolutions is adopted to
produce prior information (including classification map, distance field, and
direction field) and coarse boundary proposals. The adaptive boundary
deformation model is an encoder-decoder network, in which the encoder mainly
consists of a Graph Convolutional Network (GCN) and a Recurrent Neural Network
(RNN). It aims to perform boundary deformation in an iterative way for
obtaining the text instance shape, guided by prior information from the boundary
proposal model. In this way, our method can directly and efficiently generate
accurate text boundaries without complex post-processing. Extensive experiments
on publicly available datasets demonstrate the state-of-the-art performance of
our method. | [
"cs.CV"
] |
Learning nonlinear dynamics from aggregate data is a challenging problem
because the full trajectory of each individual is not available, namely, the
individual observed at one time may not be observed at the next time point, or
the identity of the individual is unavailable. This is in sharp contrast to
learning dynamics with full trajectory data, on which the majority of existing
methods are based. We propose a novel method using the weak form of Fokker
Planck Equation (FPE) -- a partial differential equation -- to describe the
density evolution of data in a sampled form, which is then combined with
Wasserstein generative adversarial network (WGAN) in the training process. In
such a sample-based framework we are able to learn the nonlinear dynamics from
aggregate data without explicitly solving the partial differential equation
(PDE) FPE. We demonstrate our approach in the context of a series of synthetic
and real-world data sets. | [
"cs.LG",
"math.AP",
"stat.ML"
] |
Retrieval-based place recognition is an efficient and effective solution for
enabling re-localization within a pre-built map or global data association for
Simultaneous Localization and Mapping (SLAM). The accuracy of such an approach
is heavily dependent on the quality of the extracted scene-level
representation. While end-to-end solutions, which learn a global descriptor
from input point clouds, have demonstrated promising results, such approaches
are limited in their ability to enforce desirable properties at the local
feature level. In this paper, we demonstrate that the inclusion of an
additional training signal (local consistency loss) can guide the network to
learn local features that are consistent across revisits, hence leading to
more repeatable global descriptors resulting in an overall improvement in place
recognition performance. We formulate our approach in an end-to-end trainable
architecture called LoGG3D-Net. Experiments on two large-scale public
benchmarks (KITTI and MulRan) show that our method achieves mean $F1_{max}$
scores of $0.939$ and $0.968$ on KITTI and MulRan, respectively while operating
in near real-time. | [
"cs.CV",
"cs.RO"
] |
The success of machine learning applications often needs a large quantity of
data. Recently, federated learning (FL) has been attracting increasing attention due
to the demand for data privacy and security, especially in the medical field.
However, the performance of existing FL approaches often deteriorates when
there exist domain shifts among clients, and few previous works focus on
personalization in healthcare. In this article, we propose FedHealth 2, an
extension of FedHealth \cite{chen2020fedhealth} to tackle domain shifts and get
personalized models for local clients. FedHealth 2 obtains the client
similarities via a pretrained model, and then averages all weighted models while preserving local batch normalization. Wearable activity recognition and
COVID-19 auxiliary diagnosis experiments show that FedHealth 2 can
achieve better accuracy (10%+ improvement for activity recognition) and
personalized healthcare without compromising privacy and security. | [
"cs.LG",
"cs.AI"
] |
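The abstract above describes averaging client models with similarity weights while preserving local batch normalization. The sketch below shows one plausible reading of that scheme with plain NumPy arrays; the parameter-naming convention, similarity matrix, and BN prefix are hypothetical and are not FedHealth 2's actual implementation.

```python
import numpy as np

def weighted_federated_average(client_models, weights, keep_local_prefixes=("bn",)):
    """Similarity-weighted model averaging that preserves local batch normalization:
    parameters whose names start with a BN prefix are kept local (copied from each
    client), while all other parameters are averaged with per-client weights.
    `client_models` is a list of {name: ndarray} dicts (hypothetical layout)."""
    personalized = []
    for i, own in enumerate(client_models):
        merged = {}
        for name in own:
            if name.startswith(keep_local_prefixes):
                merged[name] = own[name]                       # keep local BN statistics
            else:
                merged[name] = sum(w * m[name] for w, m in
                                   zip(weights[i], client_models))
        personalized.append(merged)
    return personalized

clients = [{"conv.w": np.ones((2, 2)) * i, "bn.mean": np.full(2, i)} for i in range(3)]
sim = np.array([[0.6, 0.3, 0.1], [0.3, 0.5, 0.2], [0.1, 0.2, 0.7]])  # client similarities
print(weighted_federated_average(clients, sim)[0])
```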
In this paper, we propose to learn an Unsupervised Single Object Tracker
(USOT) from scratch. We identify that three major challenges, i.e., moving
object discovery, rich temporal variation exploitation, and online update, are
the central causes of the performance bottleneck of existing unsupervised
trackers. To narrow the gap between unsupervised trackers and supervised
counterparts, we propose an effective unsupervised learning approach composed
of three stages. First, we sample sequentially moving objects with unsupervised
optical flow and dynamic programming, instead of random cropping. Second, we
train a naive Siamese tracker from scratch using single-frame pairs. Third, we
continue training the tracker with a novel cycle memory learning scheme, which
is conducted in longer temporal spans and also enables our tracker to update
online. Extensive experiments show that the proposed USOT learned from
unlabeled videos outperforms state-of-the-art unsupervised trackers by large margins, and performs on par with recent supervised deep trackers. Code is
available at https://github.com/VISION-SJTU/USOT. | [
"cs.CV"
] |
One of the key challenges of visual perception is to extract abstract models
of 3D objects and object categories from visual measurements, which are
affected by complex nuisance factors such as viewpoint, occlusion, motion, and
deformations. Starting from the recent idea of viewpoint factorization, we
propose a new approach that, given a large number of images of an object and no
other supervision, can extract a dense object-centric coordinate frame. This
coordinate frame is invariant to deformations of the images and comes with a
dense equivariant labelling neural network that can map image pixels to their
corresponding object coordinates. We demonstrate the applicability of this
method to simple articulated objects and deformable objects such as human
faces, learning embeddings from random synthetic transformations or optical
flow correspondences, all without any manual supervision. | [
"cs.CV",
"stat.ML"
] |
In this paper, we investigate the conversion of a Twitter corpus into
geo-referenced raster cells holding the probability of the associated
geographical areas of being flooded. We describe a baseline approach that
combines a density ratio function, aggregation using a spatio-temporal Gaussian
kernel function, and TFIDF textual features. The features are transformed to
probabilities using a logistic regression model. The described method is
evaluated on a corpus collected after the floods that followed Hurricane Harvey
in the Houston urban area in August-September 2017. The baseline reaches an F1
score of 68%. We highlight research directions likely to improve these initial
results. | [
"cs.LG"
] |
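The textual part of the baseline above (TF-IDF features fed into a logistic regression that outputs flood probabilities) can be sketched with scikit-learn. The toy tweets and labels below are purely illustrative and are not drawn from the Hurricane Harvey corpus; the spatio-temporal kernel aggregation step is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy tweets with binary flood labels.
tweets = ["street under water near downtown", "sunny day at the park",
          "flooding reported on main street", "traffic jam on highway"]
flooded = [1, 0, 1, 0]

# TF-IDF features -> logistic regression -> probability of the area being flooded.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, flooded)
print(model.predict_proba(["water rising fast on my street"])[:, 1])
```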
Data-driven graph learning models a network by determining the strength of
connections between its nodes. The data refers to a graph signal which
associates a value with each graph node. Existing graph learning methods either
use simplified models for the graph signal, or they are prohibitively expensive
in terms of computational and memory requirements. This is particularly true
when the number of nodes is high or there are temporal changes in the network.
In order to consider richer models with a reasonable computational
tractability, we introduce a graph learning method based on representation
learning on graphs. Representation learning generates an embedding for each
graph node, taking the information from neighbouring nodes into account. Our
graph learning method further modifies the embeddings to compute the graph
similarity matrix. In this work, graph learning is used to examine brain
networks for brain state identification. We infer time-varying brain graphs
from an extensive dataset of intracranial electroencephalographic (iEEG)
signals from ten patients. We then apply the graphs as input to a classifier to
distinguish seizure vs. non-seizure brain states. Using the binary
classification metric of area under the receiver operating characteristic curve
(AUC), this approach yields an average of 9.13 percent improvement when
compared to two widely used brain network modeling methods. | [
"cs.LG",
"stat.AP",
"stat.ML"
] |
Plant disease detection is a significant problem that often requires professional help. This research focuses on creating a deep learning model that detects the type of disease affecting a plant from images of its leaves. The deep learning is done with a Convolutional Neural Network using transfer learning. The model is built and evaluated with both ResNet-34 and ResNet-50 to demonstrate that discriminative learning gives better results. This method achieved state-of-the-art results for the dataset used. The main goal is to reduce the need for professional help in detecting plant diseases and to make this model accessible to as many people as possible. | [
"cs.CV",
"eess.IV"
] |
Transfer learning aims to exploit pre-trained models for more efficient
follow-up training on a wide range of downstream tasks and datasets, enabling
successful training also on small data. Recently, strong improvement was shown
for transfer learning and model generalization when increasing model, data and
compute budget scale in the pre-training. To compare the effect of scale both in
intra- and inter-domain full and few-shot transfer, in this study we combine
for the first time large openly available medical X-Ray chest imaging datasets
to reach a dataset scale comparable to ImageNet-1k. We then conduct
pre-training and transfer to different natural or medical targets while varying
network size and source data scale and domain, being either large natural
(ImageNet-1k/21k) or large medical chest X-Ray datasets. We observe strong
improvement due to larger pre-training scale for intra-domain natural-natural
and medical-medical transfer. For inter-domain natural-medical transfer, we
find improvements due to larger pre-training scale on larger X-Ray targets in
full shot regime, while for smaller targets and for few-shot regime the
improvement is not visible. Remarkably, large networks pre-trained on very
large natural ImageNet-21k are as good or better than networks pre-trained on
largest available medical X-Ray data when performing transfer to large X-Ray
targets. We conclude that high-quality models for inter-domain transfer can also be obtained by substantially increasing the scale of the model and of the generic natural source data, removing the necessity for large domain-specific medical source data in the pre-training. Code is available at: \url{https://github.com/SLAMPAI/large-scale-pretraining-transfer} | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Combinatorial optimization is frequently used in computer vision. For
instance, in applications like semantic segmentation, human pose estimation and
action recognition, programs are formulated for solving inference in
Conditional Random Fields (CRFs) to produce a structured output that is
consistent with visual features of the image. However, solving inference in
CRFs is in general intractable, and approximation methods are computationally
demanding and limited to unary, pairwise and hand-crafted forms of higher order
potentials. In this paper, we show that we can learn program heuristics, i.e.,
policies, for solving inference in higher order CRFs for the task of semantic
segmentation, using reinforcement learning. Our method solves inference tasks
efficiently without imposing any constraints on the form of the potentials. We
show compelling results on the Pascal VOC and MOTS datasets. | [
"cs.CV",
"I.4.6, I.2.6"
] |
While learning models are typically studied for inputs in the form of a fixed
dimensional feature vector, real world data is rarely found in this form. In
order to meet the basic requirement of traditional learning models, structural
data generally have to be converted into fixed-length vectors in a handcrafted
manner, which is tedious and may even incur information loss. A common form of
structured data is what we term "semantic tree-structures", corresponding to
data where rich semantic information is encoded in a compositional manner, such
as those expressed in JavaScript Object Notation (JSON) and eXtensible Markup
Language (XML). For tree-structured data, several learning models have been
studied to allow for working directly on raw tree-structured data. However, such learning models are limited to either a specific tree topology or a specific
tree-structured data format, e.g., synthetic parse trees. In this paper, we
propose a novel framework for end-to-end learning on generic semantic
tree-structured data of arbitrary topology and heterogeneous data types, such
as data expressed in JSON, XML and so on. Motivated by the works in recursive
and recurrent neural networks, we develop exemplar neural implementations of
our framework for the JSON format. We evaluate our approach on several UCI
benchmark datasets, including ablation and data-efficiency studies, and on a
toy reinforcement learning task. Experimental results suggest that our
framework yields performance comparable to standard models with dedicated feature vectors in general, and even exceeds baseline performance in cases where the compositional nature of the data is particularly important.
The source code for a JSON-based implementation of our framework along with
experiments can be downloaded at https://github.com/EndingCredits/json2vec. | [
"cs.LG",
"stat.ML"
] |
Even though it is well known that for most relevant computational problems
different algorithms may perform better on different classes of problem
instances, most researchers still focus on determining a single best
algorithmic configuration based on aggregate results such as the average. In
this paper, we propose Integer Programming based approaches to build decision
trees for the Algorithm Selection Problem. These techniques allow automating three crucial decisions: (i) discerning the most important problem features to determine problem classes; (ii) grouping the problems into classes; and (iii) selecting the best algorithm configuration for each class. To evaluate this new
approach, extensive computational experiments were executed using the linear
programming algorithms implemented in the COIN-OR Branch & Cut solver across a
comprehensive set of instances, including all MIPLIB benchmark instances. The
results exceeded our expectations. While selecting the single best parameter
setting across all instances decreased the total running time by 22%, our
approach decreased the total running time by 40% on average across 10-fold
cross validation experiments. These results indicate that our method
generalizes quite well and does not overfit. | [
"cs.LG",
"cs.DM",
"cs.DS",
"90Cxx, 90C05",
"G.2.1; G.2.3; G.4"
] |
We describe a simple pre-training approach for point clouds. It works in
three steps: 1. Mask all points occluded in a camera view; 2. Learn an
encoder-decoder model to reconstruct the occluded points; 3. Use the encoder
weights as initialisation for downstream point cloud tasks. We find that even
when we construct a single pre-training dataset (from ModelNet40), this
pre-training method improves accuracy across different datasets and encoders,
on a wide range of downstream tasks. Specifically, we show that our method
outperforms previous pre-training methods in object classification, and both
part-based and semantic segmentation tasks. We study the pre-trained features
and find that they lead to wide downstream minima, have high transformation
invariance, and have activations that are highly correlated with part labels.
Code and data are available at: https://github.com/hansen7/OcCo | [
"cs.CV",
"cs.LG"
] |
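Step 1 of the recipe above (mask all points occluded in a camera view) can be approximated with a simple z-buffer: project the cloud onto an image grid and keep only the nearest point per cell. The grid resolution and the fixed +z viewing direction in the sketch below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mask_occluded_points(points, grid=32):
    """View the cloud along +z and keep only the nearest point falling into each
    (x, y) grid cell; everything behind it is treated as occluded and masked out."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    cells = np.floor((xy - lo) / (hi - lo + 1e-9) * grid).astype(int)
    keys = cells[:, 0] * grid + cells[:, 1]          # flatten cell index per point
    visible = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        visible[idx[np.argmin(points[idx, 2])]] = True   # nearest point wins the cell
    return points[visible], points[~visible]

pts = np.random.rand(2048, 3)
vis, occ = mask_occluded_points(pts)
print(vis.shape, occ.shape)   # visible points to keep, occluded points to reconstruct
```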
In the past few years, numerous deep learning methods have been proposed to
address the task of segmenting salient objects from RGB images. However, these
approaches, which depend on a single modality, fail to achieve state-of-the-art
performance on widely used light field salient object detection (SOD) datasets,
which collect large-scale natural images and provide multiple modalities such
as multi-view, micro-lens images and depth maps. Most recently proposed light
field SOD methods have improved detection accuracy, yet they still predict rough object structures and suffer from slow inference speed. To this end, we
propose CMA-Net, which consists of two novel cascaded mutual attention modules
aiming at fusing the high level features from the modalities of all-in-focus
and depth. Our proposed CMA-Net outperforms 30 SOD methods (by a large margin)
on two widely applied light field benchmark datasets. Besides, the proposed
CMA-Net can run at a speed of 53 fps, thus being four times faster than the
state-of-the-art multi-modal SOD methods. Extensive quantitative and
qualitative experiments illustrate both the effectiveness and efficiency of our
CMA-Net, inspiring future development of multi-modal learning for both the
RGB-D and light field SOD. | [
"cs.CV"
] |
Batch Normalization (BN) is essential to effectively train state-of-the-art
deep Convolutional Neural Networks (CNN). It normalizes inputs to the layers
during training using the statistics of each mini-batch. In this work, we study
BN from the viewpoint of Fisher kernels. We show that assuming samples within a
mini-batch are from the same probability density function, then BN is identical
to the Fisher vector of a Gaussian distribution. That means BN can be explained
in terms of kernels that naturally emerge from the probability density function
of the underlying data distribution. However, given the rectifying
non-linearities employed in CNN architectures, the distribution of inputs to the layers shows heavy-tailed and asymmetric characteristics. Therefore, we propose
approximating the underlying data distribution not with one, but with a mixture of Gaussian densities. Deriving the Fisher vector for a Gaussian Mixture Model (GMM) reveals that BN can be improved by independently normalizing with respect to
the statistics of disentangled sub-populations. We refer to our proposed soft
piecewise version of BN as Mixture Normalization (MN). Through an extensive set of experiments on CIFAR-10 and CIFAR-100, we show that MN not only effectively accelerates the training of image classification models and Generative Adversarial Networks, but also reaches higher-quality models. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
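A rough sketch of the Mixture Normalization idea above for a 1-D batch of activations: fit a GMM to the batch, normalize each sample with respect to each component's statistics, and combine the results weighted by the posterior responsibilities. It uses scikit-learn's EM-based GaussianMixture for the fit and omits the learnable scale/shift and running statistics of a full normalization layer.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_normalize(x, n_components=2, eps=1e-5):
    """Normalize each sample w.r.t. the disentangled sub-populations of the batch:
    per-component normalization weighted by posterior responsibilities."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(x.reshape(-1, 1))
    resp = gmm.predict_proba(x.reshape(-1, 1))             # (N, K) responsibilities
    mu = gmm.means_.ravel()                                # component means
    var = gmm.covariances_.ravel()                         # component variances
    normalized = (x[:, None] - mu[None, :]) / np.sqrt(var[None, :] + eps)
    return (resp * normalized).sum(axis=1)

# Bimodal batch of activations: two sub-populations around -3 and +3.
x = np.concatenate([np.random.randn(256) - 3, np.random.randn(256) + 3])
print(mixture_normalize(x).std())   # roughly unit scale despite the bimodality
```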
The quadratic computational and memory complexities of the Transformer's
attention mechanism have limited its scalability for modeling long sequences.
In this paper, we propose Luna, a linear unified nested attention mechanism
that approximates softmax attention with two nested linear attention functions,
yielding only linear (as opposed to quadratic) time and space complexity.
Specifically, with the first attention function, Luna packs the input sequence
into a sequence of fixed length. Then, the packed sequence is unpacked using
the second attention function. As compared to a more traditional attention
mechanism, Luna introduces an additional sequence with a fixed length as input
and an additional corresponding output, which allows Luna to perform attention
operation linearly, while also storing adequate contextual information. We
perform extensive evaluations on three benchmarks of sequence modeling tasks:
long-context sequence modeling, neural machine translation and masked language
modeling for large-scale pretraining. Competitive or even better experimental
results demonstrate both the effectiveness and efficiency of Luna compared to a
variety of strong baseline methods. | [
"cs.LG",
"cs.CL"
] |
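The pack-and-unpack structure described above can be sketched in a few lines: an extra fixed-length sequence attends over the input to produce a packed summary, and the input then attends over that summary, so no $n \times n$ attention matrix is ever formed. The NumPy sketch below is single-head and omits projections and residual/normalization layers, purely to show the shape of the computation.

```python
import numpy as np

def attention(q, k, v):
    """Plain softmax attention: (Lq, d) queries over (Lk, d) keys/values."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def luna_attention(x, p):
    """Nested linear attention sketch: p (fixed length m) packs the input x
    (length n), then x unpacks by attending to the packed sequence, giving
    O(n * m) cost instead of O(n^2)."""
    packed = attention(p, x, x)              # (m, d): pack x into fixed length m
    unpacked = attention(x, packed, packed)  # (n, d): unpack back to length n
    return unpacked, packed

n, m, d = 128, 16, 32
x, p = np.random.randn(n, d), np.random.randn(m, d)
y, new_p = luna_attention(x, p)
print(y.shape, new_p.shape)   # (128, 32) (16, 32)
```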
Deep Convolutional Neural Networks (DCNNs) have recently shown state of the
art performance in high level vision tasks, such as image classification and
object detection. This work brings together methods from DCNNs and
probabilistic graphical models for addressing the task of pixel-level
classification (also called "semantic image segmentation"). We show that
responses at the final layer of DCNNs are not sufficiently localized for
accurate object segmentation. This is due to the very invariance properties
that make DCNNs good for high level tasks. We overcome this poor localization
property of deep networks by combining the responses at the final DCNN layer
with a fully connected Conditional Random Field (CRF). Qualitatively, our
"DeepLab" system is able to localize segment boundaries at a level of accuracy
which is beyond previous methods. Quantitatively, our method sets the new
state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching
71.6% IOU accuracy in the test set. We show how these results can be obtained
efficiently: Careful network re-purposing and a novel application of the 'hole'
algorithm from the wavelet community allow dense computation of neural net
responses at 8 frames per second on a modern GPU. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
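The 'hole' (atrous, i.e., dilated) algorithm mentioned above evaluates a filter with its taps spread out by a dilation rate, enlarging the receptive field without adding weights or reducing resolution. A toy 1-D version is sketched below purely to illustrate the indexing; DeepLab-style models apply the same idea to 2-D convolutions inside the network.

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate=2):
    """Toy 1-D atrous (dilated) convolution: kernel taps are spaced `rate`
    samples apart, enlarging the receptive field without extra weights."""
    k = len(kernel)
    span = (k - 1) * rate + 1                       # effective receptive field
    out = np.zeros(len(signal) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float) ** 2
# Second difference of a quadratic at dilation 2 is a constant (8).
print(atrous_conv1d(x, np.array([1.0, -2.0, 1.0]), rate=2))
```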
Using touch devices to navigate in virtual 3D environments such as computer
assisted design (CAD) models or geographical information systems (GIS) is
inherently difficult for humans, as the 3D operations have to be performed by
the user on a 2D touch surface. This ill-posed problem is classically solved
with a fixed and handcrafted interaction protocol, which must be learned by the
user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement
learning (RL). A fundamental problem of RL methods is the vast amount of
interactions often required, which are difficult to come by when humans are
involved. To overcome this limitation, we make use of two collaborative agents.
The first agent models the human by learning to perform the 2D finger
trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set
of collected human gestures by first performing state representation learning,
prior to reinforcement learning. This state representation learning is
addressed by projecting the gestures into a latent space learned by a
variational autoencoder (VAE). | [
"cs.LG",
"cs.AI",
"cs.HC"
] |
Existing works on semantic segmentation typically consider a small number of
labels, ranging from tens to a few hundreds. With a large number of labels,
training and evaluation of such task become extremely challenging due to
correlation between labels and lack of datasets with complete annotations. We
formulate semantic segmentation as a problem of image segmentation given a
semantic concept, and propose a novel system which can potentially handle an
unlimited number of concepts, including objects, parts, stuff, and attributes.
We achieve this using a weakly and semi-supervised framework leveraging
multiple datasets with different levels of supervision. We first train a deep
neural network on a 6M stock image dataset with only image-level labels to
learn visual-semantic embedding on 18K concepts. Then, we refine and extend the
embedding network to predict an attention map, using a curated dataset with
bounding box annotations on 750 concepts. Finally, we train an attention-driven
class agnostic segmentation network using an 80-category fully annotated
dataset. We perform extensive experiments to validate that the proposed system
performs competitively to the state of the art on fully supervised concepts,
and is capable of producing accurate segmentations for weakly learned and
unseen concepts. | [
"cs.CV"
] |
We propose a robust in-time predictor for an in-hospital COVID-19 patient's
probability of requiring mechanical ventilation. A challenge in the risk
prediction for COVID-19 patients lies in the great variability and irregular
sampling of patients' vitals and labs observed in the clinical setting.
Existing methods have strong limitations in handling time-dependent features'
complex dynamics, either oversimplifying temporal data with summary statistics
that lose information or over-engineering features that lead to less robust
outcomes. We propose a novel in-time risk trajectory predictive model to handle
the irregular sampling rate in the data, which follows the dynamics of risk of
performing mechanical ventilation for individual patients. The model
incorporates the Multi-task Gaussian Process using observed values to learn the
posterior joint multivariate conditional probability and infer the missing
values on a unified time grid. The temporal imputed data is fed into a
multi-objective self-attention network for the prediction task. A novel
positional encoding layer is proposed and added to the network for producing
in-time predictions. The positional layer outputs a risk score at each
user-defined time point during the entire hospital stay of an inpatient. We
frame the prediction task into a multi-objective learning framework, and the
risk scores at all time points are optimized altogether, which adds robustness
and consistency to the risk score trajectory prediction. Our experimental
evaluation on a large database of nationwide in-hospital patients with
COVID-19 demonstrates that it improves on state-of-the-art performance in
terms of AUC (Area Under the receiver operating characteristic Curve) and AUPRC
(Area Under the Precision-Recall Curve) performance metrics, especially at
early times after hospital admission. | [
"cs.LG",
"stat.AP"
] |
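The positional encoding layer is not specified in detail in the abstract above; as an assumption-laden sketch, the snippet below evaluates a standard transformer-style sinusoidal encoding at arbitrary, user-defined time points (e.g., hours since admission), which is one way a self-attention model can emit a risk score at each requested time on a unified grid.

```python
# Sinusoidal encoding of continuous time stamps (illustrative, not the paper's layer).
import numpy as np

def time_encoding(t_hours, d_model=64):
    """t_hours: array of shape (T,) of continuous time stamps."""
    t = np.asarray(t_hours, dtype=float)[:, None]        # (T, 1)
    i = np.arange(d_model // 2)[None, :]                  # (1, d_model/2)
    freqs = 1.0 / (10000.0 ** (2 * i / d_model))
    enc = np.empty((t.shape[0], d_model))
    enc[:, 0::2] = np.sin(t * freqs)
    enc[:, 1::2] = np.cos(t * freqs)
    return enc

grid = time_encoding([0.5, 4.0, 12.0, 24.0])   # encodings at user-defined time points
```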
Texts from scene images typically consist of several characters and exhibit a
characteristic sequence structure. Existing methods capture the structure with
sequence-to-sequence models, using an encoder to extract visual
representations and a decoder to translate the features into the label
sequence. In this paper, we study a text recognition framework that considers the
long-term temporal dependencies in the encoder stage. We demonstrate that the
proposed Temporal Convolutional Encoder with increased sequential extents
improves the accuracy of text recognition. We also study the impact of
different attention modules in convolutional blocks for learning accurate text
representations. We conduct comparisons on seven datasets and the experiments
demonstrate the effectiveness of our proposed approach. | [
"cs.CV"
] |
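A minimal sketch of a temporal convolutional encoder with a growing sequential extent, in PyTorch; channel sizes, depth, and dilation schedule are illustrative assumptions rather than the configuration used in the paper above.

```python
# Stacked dilated 1D convolutions over column-wise visual features of a text image.
import torch
import torch.nn as nn

class TemporalConvEncoder(nn.Module):
    def __init__(self, in_ch=512, hid=256, num_layers=3, kernel=3):
        super().__init__()
        layers, ch = [], in_ch
        for l in range(num_layers):
            dilation = 2 ** l                        # receptive field grows exponentially
            pad = (kernel - 1) // 2 * dilation
            layers += [nn.Conv1d(ch, hid, kernel, padding=pad, dilation=dilation),
                       nn.ReLU()]
            ch = hid
        self.net = nn.Sequential(*layers)

    def forward(self, feats):                        # feats: (batch, in_ch, width)
        return self.net(feats)                       # (batch, hid, width)

encoder = TemporalConvEncoder()
out = encoder(torch.randn(4, 512, 40))               # 40 feature columns per image
```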
Embedding static graphs in low-dimensional vector spaces plays a key role in
network analytics and inference, supporting applications like node
classification, link prediction, and graph visualization. However, many
real-world networks present dynamic behavior, including topological evolution,
feature evolution, and diffusion. Therefore, several methods for embedding
dynamic graphs have been proposed to learn network representations over time,
facing novel challenges, such as time-domain modeling, temporal features to be
captured, and the temporal granularity to be embedded. In this survey, we
overview dynamic graph embedding, discussing its fundamentals and the recent
advances developed so far. We introduce the formal definition of dynamic graph
embedding, focusing on the problem setting and introducing a novel taxonomy for
dynamic graph embedding input and output. We further explore different dynamic
behaviors that may be encompassed by embeddings, classifying by topological
evolution, feature evolution, and processes on networks. Afterward, we describe
existing techniques and propose a taxonomy for dynamic graph embedding
techniques based on algorithmic approaches, from matrix and tensor
factorization to deep learning, random walks, and temporal point processes. We
also elucidate main applications, including dynamic link prediction, anomaly
detection, and diffusion prediction, and we further state some promising
research directions in the area. | [
"cs.LG",
"cs.AI",
"37E25 (Primary) 68T30, 05C62, 58D10 (Secondary)",
"A.1; I.2.6"
] |
The field of DNA nanotechnology has made it possible to assemble, with high
yields, different structures that have actionable properties. For example,
researchers have created components that can be actuated. An exciting next step
is to combine these components into multifunctional nanorobots that could,
potentially, perform complex tasks like swimming to a target location in the
human body, detecting an adverse reaction, and then releasing a drug load to stop it.
However, as we start to assemble more complex nanorobots, the yield of the
desired nanorobot begins to decrease as the number of possible component
combinations increases. Therefore, the ultimate goal of this work is to develop
a predictive model to maximize yield. However, training predictive models
typically requires a large dataset. For the nanorobots we are interested in
assembling, this will be difficult to collect. This is because high-fidelity
data, which allows us to characterize the shape and size of individual
structures, is very time-consuming to collect, whereas low-fidelity data is
readily available but only captures bulk statistics for different processes.
Therefore, this work combines low- and high-fidelity data to train a generative
model using a two-step process. We first use a relatively small, high-fidelity
dataset to train a generative model. At run time, the model takes low-fidelity
data and uses it to approximate the high-fidelity content. We do this by
biasing the model towards samples with specific properties as measured by
low-fidelity data. In this work we bias our distribution towards a desired node
degree of a graphical model that we take as a surrogate representation of the
nanorobots that this work will ultimately focus on. We have not yet accumulated
a high-fidelity dataset of nanorobots, so we leverage the MolGAN architecture
[1] and the QM9 small molecule dataset [2-3] to demonstrate our approach. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
We describe a simple and general neural network weight compression approach,
in which the network parameters (weights and biases) are represented in a
"latent" space, amounting to a reparameterization. This space is equipped with
a learned probability model, which is used to impose an entropy penalty on the
parameter representation during training, and to compress the representation
using a simple arithmetic coder after training. Classification accuracy and
model compressibility are maximized jointly, with the bitrate--accuracy
trade-off specified by a hyperparameter. We evaluate the method on the MNIST,
CIFAR-10 and ImageNet classification benchmarks using six distinct model
architectures. Our results show that state-of-the-art model compression can be
achieved in a scalable and general way without requiring complex procedures
such as multi-stage training. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
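To make the entropy-penalty idea concrete, the sketch below shows one simplified way to reparameterize a layer's weights in a latent space, score them with a learned (here, factorized Gaussian) probability model, and add the resulting rate term to the task loss; the actual method uses a richer probability model and arithmetic coding after training, and the penalty weight is the bitrate-accuracy hyperparameter.

```python
# Simplified latent weight parameterization with an entropy (rate) penalty.
import math
import torch
import torch.nn as nn

class LatentLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(d_out, d_in) * 0.05)  # reparameterized weights
        self.bias = nn.Parameter(torch.zeros(d_out))
        self.log_scale = nn.Parameter(torch.zeros(1))                 # learned prior scale

    def forward(self, x):
        return x @ self.latent.t() + self.bias

    def rate(self):
        # negative log-likelihood of the latents under a zero-mean Gaussian prior
        var = torch.exp(2 * self.log_scale)
        return (0.5 * (self.latent ** 2 / var + 2 * self.log_scale
                       + math.log(2 * math.pi))).sum()

layer = LatentLinear(784, 10)
x = torch.randn(32, 784)
logits = layer(x)
task_loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
loss = task_loss + 1e-4 * layer.rate()   # trade off accuracy against compressibility
```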
Classification of time series is a growing problem in different disciplines
due to the progressive digitalization of the world. Currently, the state of the
art in time series classification is dominated by the Collective of
Transformation-Based Ensembles. This algorithm is composed of several
classifiers of diverse nature that are combined according to their results in
an internal cross validation procedure. Its high complexity prevents it from
being applied to large datasets. One Nearest Neighbour with Dynamic Time
Warping remains the base classifier in any time series classification problem
for its simplicity and good results. Despite their good performance, both
approaches share a weakness: they are not interpretable. In the field of time
series classification, there is a tradeoff between accuracy and
interpretability. In this work, we propose a set of characteristics capable of
extracting information of the structure of the time series in order to face
time series classification problems. The use of these characteristics allows
the use of traditional classification algorithms in time series problems. The
experimental results demonstrate a statistically significant improvement in the
accuracy of the results obtained by our proposal with respect to the original
time series. Apart from the improvement in accuracy, our proposal is able to
offer interpretable results based on the set of characteristics proposed. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
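A minimal, hedged sketch of the overall recipe: compute simple structural characteristics of each series (the paper's exact feature set may differ) and feed them to a standard, interpretable classifier from scikit-learn.

```python
# Hand-crafted time-series characteristics fed to a traditional classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def characteristics(x):
    x = np.asarray(x, dtype=float)
    diffs = np.diff(x)
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]            # lag-1 autocorrelation
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]     # linear trend
    return np.array([x.mean(), x.std(), x.min(), x.max(),
                     diffs.std(), lag1, slope])

# Toy data: 20 noisy sine waves with random binary labels.
X_series = [np.sin(np.linspace(0, 6, 100)) + 0.1 * np.random.randn(100) for _ in range(20)]
y = np.random.randint(0, 2, size=20)
X = np.stack([characteristics(s) for s in X_series])
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```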
Due to the lack of large-scale datasets, the prevailing approach in visual
sentiment analysis is to leverage models trained for object classification in
large datasets like ImageNet. However, objects are sentiment neutral which
hinders the expected gain of transfer learning for such tasks. In this work, we
propose to overcome this problem by learning a novel sentiment-aligned image
embedding that is better suited for subsequent visual sentiment analysis. Our
embedding leverages the intricate relation between emojis and images in
large-scale and readily available data from social media. Emojis are
language-agnostic, consistent, and carry a clear sentiment signal which make
them an excellent proxy to learn a sentiment aligned embedding. Hence, we
construct a novel dataset of 4 million images collected from Twitter with their
associated emojis. We train a deep neural model for image embedding using emoji
prediction task as a proxy. Our evaluation demonstrates that the proposed
embedding outperforms the popular object-based counterpart consistently across
several sentiment analysis benchmarks. Furthermore, without bells and whistles,
our compact, effective and simple embedding outperforms the more elaborate and
customized state-of-the-art deep models on these public benchmarks.
Additionally, we introduce a novel emoji representation based on their visual
emotional response which supports a deeper understanding of the emoji modality
and their usage on social media. | [
"cs.CV"
] |
The graph Laplacian is a standard tool in data science, machine learning, and
image processing. The corresponding matrix inherits the complex structure of
the underlying network and is in certain applications densely populated. This
makes computations, in particular matrix-vector products, with the graph
Laplacian a hard task. A typical application is the computation of a number of
its eigenvalues and eigenvectors. Standard methods become infeasible as the
number of nodes in the graph is too large. We propose the use of the fast
summation based on the nonequispaced fast Fourier transform (NFFT) to perform
the dense matrix-vector product with the graph Laplacian fast without ever
forming the whole matrix. The enormous flexibility of the NFFT algorithm allows
us to embed the accelerated multiplication into Lanczos-based eigenvalue
routines or iterative linear system solvers and even to consider kernels other
than the standard Gaussian. We illustrate the feasibility of our approach on a
number of test problems from image segmentation to semi-supervised learning
based on graph-based PDEs. In particular, we compare our approach with the
Nystr\"om method. Moreover, we present and test an enhanced, hybrid version of
the Nystr\"om method, which internally uses the NFFT. | [
"cs.LG",
"math.NA",
"stat.ML",
"68R10, 05C50, 65F15, 65T50, 68T05, 62H30"
] |
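The pattern of plugging a fast, matrix-free Laplacian product into a Lanczos-based eigensolver can be sketched with SciPy as below; the placeholder matvec applies a path-graph Laplacian, standing in for the NFFT-based fast summation, so that the dense matrix is never formed.

```python
# Matrix-free eigenvalue computation with a fast Laplacian matvec.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 2000

def laplacian_matvec(v):
    # Placeholder fast product L @ v (here: a path-graph Laplacian applied
    # without assembling L); in the paper this product would be realized by
    # the NFFT-based fast summation for a Gaussian kernel.
    out = 2.0 * v
    out[0] = v[0]
    out[-1] = v[-1]
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

L_op = LinearOperator((n, n), matvec=laplacian_matvec, dtype=float)
eigenvalues, eigenvectors = eigsh(L_op, k=6, which='LM')   # Lanczos needs only matvecs
```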
Animals exhibit an innate ability to learn regularities of the world through
interaction. By performing experiments in their environment, they are able to
discern the causal factors of variation and infer how they affect the world's
dynamics. Inspired by this, we attempt to equip reinforcement learning agents
with the ability to perform experiments that facilitate a categorization of the
rolled-out trajectories, and to subsequently infer the causal factors of the
environment in a hierarchical manner. We introduce {\em causal curiosity}, a
novel intrinsic reward, and show that it allows our agents to learn optimal
sequences of actions and discover causal factors in the dynamics of the
environment. The learned behavior allows the agents to infer a binary quantized
representation for the ground-truth causal factors in every environment.
Additionally, we find that these experimental behaviors are semantically
meaningful (e.g., our agents learn to lift blocks to categorize them by
weight), and are learnt in a self-supervised manner with approximately 2.5
times less data than conventional supervised planners. We show that these
behaviors can be re-purposed and fine-tuned (e.g., from lifting to pushing or
other downstream tasks). Finally, we show that the knowledge of causal factor
representations aids zero-shot learning for more complex tasks. Visit
https://sites.google.com/usc.edu/causal-curiosity/home for website. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
Learning curves provide insight into the dependence of a learner's
generalization performance on the training set size. This important tool can be
used for model selection, to predict the effect of more training data, and to
reduce the computational complexity of model training and hyperparameter
tuning. This review recounts the origins of the term, provides a formal
definition of the learning curve, and briefly covers basics such as its
estimation. Our main contribution is a comprehensive overview of the literature
regarding the shape of learning curves. We discuss empirical and theoretical
evidence that supports well-behaved curves that often have the shape of a power
law or an exponential. We consider the learning curves of Gaussian processes,
the complex shapes they can display, and the factors influencing them. We draw
specific attention to examples of learning curves that are ill-behaved, showing
worse learning performance with more training data. To wrap up, we point out
various open problems that warrant deeper empirical and theoretical
investigation. All in all, our review underscores that learning curves are
surprisingly diverse and no universal model can be identified. | [
"cs.LG"
] |
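As a small illustration of the parametric curve models discussed above, the snippet fits a power law E(n) = a n^{-b} + c to an observed error-versus-training-set-size curve and extrapolates; the data values are made up for the example.

```python
# Fitting and extrapolating a power-law learning curve.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** (-b) + c

n_train = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
test_err = np.array([0.41, 0.33, 0.27, 0.23, 0.21, 0.195])    # illustrative values

params, _ = curve_fit(power_law, n_train, test_err, p0=(1.0, 0.5, 0.1))
predicted_err_at_10k = power_law(10000, *params)               # predict effect of more data
```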
Electronic Health Records often suffer from missing data, which poses a major
problem in clinical practice and clinical studies. A novel approach for dealing
with missing data is Generative Adversarial Nets (GANs), which have been
generating huge research interest in image generation and transformation.
Recently, researchers have attempted to apply GANs to missing data generation
and imputation for EHR data: a major challenge here is the categorical nature
of the data. State-of-the-art solutions to the GAN-based generation of
categorical data involve either reinforcement learning, or learning a
bidirectional mapping between the categorical and the real latent feature
space, so that the GANs only need to generate real-valued features. However,
these methods are designed to generate complete feature vectors instead of
imputing only the subsets of missing features. In this paper we propose a
simple and yet effective approach that is based on previous work on GANs for
data imputation. We first motivate our solution by discussing the reason why
adversarial training often fails in case of categorical features. Then we
derive a novel way to re-code the categorical features to stabilize the
adversarial training. Based on experiments on two real-world EHR datasets with
multiple settings, we show that our imputation approach largely improves the
prediction accuracy, compared to more traditional data imputation approaches. | [
"cs.LG"
] |
Generalization to out-of-distribution (OOD) data is a capability natural to
humans yet challenging for machines to reproduce. This is because most learning
algorithms strongly rely on the i.i.d.~assumption on source/target data, which
is often violated in practice due to domain shift. Domain generalization (DG)
aims to achieve OOD generalization by using only source data for model
learning. Since first introduced in 2011, research in DG has made great
progress. In particular, intensive research in this topic has led to a broad
spectrum of methodologies, e.g., those based on domain alignment,
meta-learning, data augmentation, or ensemble learning, just to name a few; and
has covered various vision applications such as object recognition,
segmentation, action recognition, and person re-identification. In this paper,
for the first time a comprehensive literature review is provided to summarize
the developments in DG for computer vision over the past decade. Specifically,
we first cover the background by formally defining DG and relating it to other
research fields like domain adaptation and transfer learning. Second, we
conduct a thorough review into existing methods and present a categorization
based on their methodologies and motivations. Finally, we conclude this survey
with insights and discussions on future research directions. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
We propose the use of a proportional-derivative (PD) control based policy
learned via reinforcement learning (RL) to estimate and forecast 3D human pose
from egocentric videos. The method learns directly from unsegmented egocentric
videos and motion capture data consisting of various complex human motions
(e.g., crouching, hopping, bending, and motion transitions). We propose a
video-conditioned recurrent control technique to forecast physically-valid and
stable future motions of arbitrary length. We also introduce a value function
based fail-safe mechanism which enables our method to run as a single pass
algorithm over the video data. Experiments with both controlled and in-the-wild
data show that our approach outperforms previous art in both quantitative
metrics and visual quality of the motions, and is also robust enough to
transfer directly to real-world scenarios. Additionally, our time analysis
shows that the combined use of our pose estimation and forecasting can run at
30 FPS, making it suitable for real-time applications. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering
tool based on the mixture of multivariate normal distributions model. MBIS
supports multi-channel bias field correction based on a B-spline model. A
second methodological novelty is the inclusion of graph-cuts optimization for
the stationary anisotropic hidden Markov random field model. Along with MBIS,
we release an evaluation framework that contains three different experiments on
multi-site data. We first validate the accuracy of segmentation and the
estimated bias field for each channel. MBIS outperforms a widely used
segmentation tool in a cross-comparison evaluation. The second experiment
demonstrates the robustness of results on atlas-free segmentation of two image
sets from scan-rescan protocols on 21 healthy subjects. Multivariate
segmentation is more replicable than the monospectral counterpart on
T1-weighted images. Finally, we provide a third experiment to illustrate how
MBIS can be used in a large-scale study of tissue volume change with increasing
age in 584 healthy subjects. This last result is meaningful as multivariate
segmentation performs robustly without the need for prior knowledge. | [
"cs.CV",
"62P10, 62F15"
] |
Graph representation learning nowadays becomes fundamental in analyzing
graph-structured data. Inspired by recent success of contrastive methods, in
this paper, we propose a novel framework for unsupervised graph representation
learning by leveraging a contrastive objective at the node level. Specifically,
we generate two graph views by corruption and learn node representations by
maximizing the agreement of node representations in these two views. To provide
diverse node contexts for the contrastive objective, we propose a hybrid scheme
for generating graph views on both structure and attribute levels. Besides, we
provide theoretical justification behind our motivation from two perspectives,
mutual information and the classical triplet loss. We perform empirical
experiments on both transductive and inductive learning tasks using a variety
of real-world datasets. The experimental results demonstrate that despite its
simplicity, our proposed method consistently outperforms existing
state-of-the-art methods by large margins. Moreover, our unsupervised method
even surpasses its supervised counterparts on transductive tasks, demonstrating
its great potential in real-world applications. | [
"cs.LG",
"stat.ML"
] |
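A minimal sketch of the node-level contrastive setup described above: corrupt the graph into two views by edge dropping and feature masking, then maximize cross-view agreement with an InfoNCE-style loss; the encoder, corruption rates, and exact loss form are simplified assumptions.

```python
# Two graph views plus a node-level contrastive loss (simplified).
import torch
import torch.nn.functional as F

def corrupt(x, edge_index, drop_edge=0.2, mask_feat=0.2):
    keep = torch.rand(edge_index.size(1)) > drop_edge           # drop a fraction of edges
    ei = edge_index[:, keep]
    mask = (torch.rand(1, x.size(1)) > mask_feat).float()       # mask feature dimensions
    return x * mask, ei

def contrastive_loss(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                                      # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0))                            # positives on the diagonal
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))

x = torch.randn(100, 16)                                         # toy node features
edge_index = torch.randint(0, 100, (2, 400))                     # toy edge list
x_view, ei_view = corrupt(x, edge_index)                         # one corrupted view
z1, z2 = torch.randn(100, 32), torch.randn(100, 32)              # embeddings from a shared encoder
loss = contrastive_loss(z1, z2)
```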
The adoption of machine learning in health care hinges on the transparency of
the used algorithms, necessitating the need for explanation methods. However,
despite a growing literature on explaining neural networks, no consensus has
been reached on how to evaluate those explanation methods. We propose IROF, a
new approach to evaluating explanation methods that circumvents the need for
manual evaluation. Compared to other recent work, our approach requires several
orders of magnitude less computational resources and no human input, making it
accessible to lower resource groups and robust to human bias. | [
"cs.CV"
] |
In order to keep track of the operational state of the power grid, one of the
world's largest sensor systems, the smart grid, was built by deploying hundreds
of millions of smart meters. Such a system makes it possible to discover and make a quick
response to any hidden threat to the entire power grid. Non-technical losses
(NTLs) have always been a major concern for its consequent security risks as
well as immeasurable revenue loss. However, various causes of NTL may have
different characteristics reflected in the data. Accurately capturing these
anomalies faced with such large scale of collected data records is rather
tricky as a result. In this paper, we propose a new methodology for detecting
abnormal electricity consumption. We transform the collected
time-series data into an image representation that can well
reflect users' relatively long term consumption behaviors. Inspired by the
excellent neural network architecture used for objective detection in computer
vision domain, we design our deep learning model to take the transformed
images as input and yield joint features inferred from the multiple aspects
the input provides. Considering the limited labeled samples, especially the
abnormal ones, we use our model in a semi-supervised fashion, following
approaches introduced in recent years. The model is tested on samples which are verified by
on-field inspections and our method showed significant improvement. | [
"cs.LG",
"stat.ML"
] |
Convolutional networks have marked their place over the last few years as the
best performing model for various visual tasks. They are, however, most suited
for supervised learning from large amounts of labeled data. Previous attempts
have been made to use unlabeled data to improve model performance by applying
unsupervised techniques. These attempts require different architectures and
training methods. In this work we present a novel approach for unsupervised
training of Convolutional networks that is based on contrasting between spatial
regions within images. This criterion can be employed within conventional
neural networks and trained using standard techniques such as SGD and
back-propagation, thus complementing supervised methods. | [
"stat.ML",
"cs.LG"
] |
Accurate real-time traffic forecasting is a core technological problem
in the implementation of intelligent transportation systems. However,
it remains challenging considering the complex spatial and temporal
dependencies among traffic flows. In the spatial dimension, due to the
connectivity of the road network, the traffic flows between linked roads are
closely related. In terms of the temporal factor, although there exists a
tendency among adjacent time points in general, the importance of distant past
points is not necessarily smaller than that of recent past points since traffic
flows are also affected by external factors. In this study, an attention
temporal graph convolutional network (A3T-GCN) traffic forecasting method was
proposed to simultaneously capture global temporal dynamics and spatial
correlations. The A3T-GCN model learns the short-time trend in time series by
using the gated recurrent units and learns the spatial dependence based on the
topology of the road network through the graph convolutional network. Moreover,
the attention mechanism was introduced to adjust the importance of different
time points and assemble global temporal information to improve prediction
accuracy. Experimental results on real-world datasets demonstrate the
effectiveness and robustness of the proposed A3T-GCN. The source code can be
visited at https://github.com/lehaifeng/T-GCN/A3T. | [
"cs.LG",
"stat.ML"
] |
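A hedged sketch of the attention mechanism over GRU hidden states at different time points; layer sizes and the scoring function are illustrative, not the A3T-GCN specifics.

```python
# Soft attention that re-weights GRU hidden states across historical time points.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, h_seq):                  # h_seq: (batch, T, hidden) from a GRU
        alpha = torch.softmax(self.score(h_seq).squeeze(-1), dim=1)   # (batch, T)
        context = (alpha.unsqueeze(-1) * h_seq).sum(dim=1)            # (batch, hidden)
        return context, alpha

gru = nn.GRU(input_size=16, hidden_size=64, batch_first=True)
h_seq, _ = gru(torch.randn(8, 12, 16))         # 12 historical time points
context, weights = TemporalAttention(64)(h_seq)
```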
In inductive transfer learning, fine-tuning pre-trained convolutional
networks substantially outperforms training from scratch. When using
fine-tuning, the underlying assumption is that the pre-trained model extracts
generic features, which are at least partially relevant for solving the target
task, but would be difficult to extract from the limited amount of data
available on the target task. However, besides the initialization with the
pre-trained model and the early stopping, there is no mechanism in fine-tuning
for retaining the features learned on the source task. In this paper, we
investigate several regularization schemes that explicitly promote the
similarity of the final solution with the initial model. We show the benefit of
having an explicit inductive bias towards the initial model, and we eventually
recommend a simple $L^2$ penalty, with the pre-trained model as the reference,
as the baseline penalty for transfer learning tasks. | [
"cs.LG"
] |
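The recommended penalty is easy to state in code: penalize the distance to the pre-trained reference instead of to zero. A minimal PyTorch sketch follows, with a toy linear layer standing in for the pre-trained backbone and an illustrative penalty weight.

```python
# L2 penalty toward the pre-trained starting point ("-SP" style regularization).
import torch
import torch.nn as nn

def l2_sp_penalty(model, pretrained_state, beta=0.01):
    # distance between current parameters and their pre-trained values
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in pretrained_state:
            penalty = penalty + ((p - pretrained_state[name]) ** 2).sum()
    return beta * penalty

model = nn.Linear(128, 10)                                     # stands in for a pre-trained model
pretrained_state = {k: v.clone().detach() for k, v in model.named_parameters()}

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y) + l2_sp_penalty(model, pretrained_state)
loss.backward()
```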
Image demosaicing - one of the most important early stages in digital camera
pipelines - addresses the problem of reconstructing a full-resolution image
from so-called color-filter-arrays. Despite tremendous progress made in the
past decade, a fundamental issue that remains to be addressed is how to assure
the visual quality of reconstructed images especially in the presence of noise
corruption. Inspired by recent advances in generative adversarial networks
(GAN), we present a novel deep learning approach toward joint demosaicing and
denoising (JDD) with perceptual optimization in order to ensure the visual
quality of reconstructed images. The key contributions of this work include: 1)
we have developed a GAN-based approach toward image demosaicing in which a
discriminator network with both perceptual and adversarial loss functions is
used for quality assurance; 2) we propose to optimize the perceptual quality of
reconstructed images by the proposed GAN in an end-to-end manner. Such
end-to-end optimization of GAN is particularly effective for jointly exploiting
the gain brought by each modular component (e.g., residue learning in the
generative network and perceptual loss in the discriminator network). Our
extensive experimental results have shown convincingly improved performance
over existing state-of-the-art methods in terms of both subjective and
objective quality metrics with a comparable computational cost. | [
"cs.CV"
] |
Networks have been widely used to represent the relations between objects
such as academic networks and social networks, and learning embedding for
networks has thus garnered plenty of research attention. Self-supervised
network representation learning aims at extracting node embedding without
external supervision. Recently, maximizing the mutual information between the
local node embedding and the global summary (e.g. Deep Graph Infomax, or DGI
for short) has shown promising results on many downstream tasks such as node
classification. However, there are two major limitations of DGI. Firstly, DGI
merely considers the extrinsic supervision signal (i.e., the mutual information
between node embedding and global summary) while ignoring the intrinsic signal
(i.e., the mutual dependence between node embedding and node attributes).
Secondly, nodes in a real-world network are usually connected by multiple edges
with different relations, while DGI does not fully explore the various
relations among nodes. To address the above-mentioned problems, we propose a
novel framework, called High-order Deep Multiplex Infomax (HDMI), for learning
node embedding on multiplex networks in a self-supervised way. To be more
specific, we first design a joint supervision signal containing both extrinsic
and intrinsic mutual information by high-order mutual information, and we
propose a High-order Deep Infomax (HDI) to optimize the proposed supervision
signal. Then we propose an attention based fusion module to combine node
embedding from different layers of the multiplex network. Finally, we evaluate
the proposed HDMI on various downstream tasks such as unsupervised clustering
and supervised classification. The experimental results show that HDMI achieves
state-of-the-art performance on these tasks. | [
"cs.LG",
"cs.IT",
"cs.SI",
"math.IT"
] |
Graph neural networks (GNNs) have been popularly used in analyzing
graph-structured data, showing promising results in various applications such
as node classification, link prediction and network recommendation. In this
paper, we present a new graph attention neural network, namely GIPA, for
attributed graph data learning. GIPA consists of three key components:
attention, feature propagation and aggregation. Specifically, the attention
component introduces a new multi-layer perceptron based multi-head mechanism to generate
better non-linear feature mapping and representation than conventional
implementations such as dot-product. The propagation component considers not
only node features but also edge features, which differs from existing GNNs
that merely consider node features. The aggregation component uses a residual
connection to generate the final embedding. We evaluate the performance of GIPA
using the Open Graph Benchmark proteins (ogbn-proteins for short) dataset. The
experimental results reveal that GIPA can beat the state-of-the-art models in
terms of prediction accuracy, e.g., GIPA achieves an average test ROC-AUC of
$0.8700\pm 0.0010$ and outperforms all the previous methods listed in the
ogbn-proteins leaderboard. | [
"cs.LG"
] |
Group re-identification (G-ReID) is an important yet less-studied task. Its
challenges not only lie in appearance changes of individuals which have been
well-investigated in general person re-identification (ReID), but also derive
from group layout and membership changes. So the key task of G-ReID is to learn
representations robust to such changes. To address this issue, we propose a
Transferred Single and Couple Representation Learning Network (TSCN). Its
merits are two aspects: 1) Due to the lack of labelled training samples,
existing G-ReID methods mainly rely on unsatisfactory hand-crafted features. To
gain the superiority of deep learning models, we treat a group as multiple
persons and transfer the domain of a labeled ReID dataset to a G-ReID target
dataset style to learn single representations. 2) Taking into account the
neighborhood relationship in a group, we further propose learning a novel
couple representation between two group members, that achieves more
discriminative power in G-ReID tasks. In addition, an unsupervised weight
learning method is exploited to adaptively fuse the results of different views
together according to result patterns. Extensive experimental results
demonstrate the effectiveness of our approach that significantly outperforms
state-of-the-art methods by 11.7\% CMC-1 on the Road Group dataset and by
39.0\% CMC-1 on the DukeMCMT dataset. | [
"cs.CV",
"cs.MM"
] |
We consider distributions arising from a mixture of causal models, where each
model is represented by a directed acyclic graph (DAG). We provide a graphical
representation of such mixture distributions and prove that this representation
encodes the conditional independence relations of the mixture distribution. We
then consider the problem of structure learning based on samples from such
distributions. Since the mixing variable is latent, we consider causal
structure discovery algorithms such as FCI that can deal with latent variables.
We show that such algorithms recover a "union" of the component DAGs and can
identify variables whose conditional distribution across the component DAGs
vary. We demonstrate our results on synthetic and real data showing that the
inferred graph identifies nodes that vary between the different mixture
components. As an immediate application, we demonstrate how retrieval of this
causal information can be used to cluster samples according to each mixture
component. | [
"stat.ML",
"cs.LG"
] |
In this paper, we propose a neuro-symbolic framework called weighted Signal
Temporal Logic Neural Network (wSTL-NN) that combines the characteristics of
neural networks and temporal logics. Weighted Signal Temporal Logic (wSTL)
formulas are recursively composed of subformulas that are combined using
logical and temporal operators. The quantitative semantics of wSTL is defined
such that the quantitative satisfaction of subformulas with higher weights has
more influence on the quantitative satisfaction of the overall wSTL formula. In
the wSTL-NN, each neuron corresponds to a wSTL subformula, and its output
corresponds to the quantitative satisfaction of the formula. We use wSTL-NN to
represent wSTL formulas as features to classify time series data. STL features
are more explainable than those used in classical methods. The wSTL-NN is
end-to-end differentiable, which allows learning of wSTL formulas to be done
using back-propagation. To reduce the number of weights, we introduce two
techniques to sparsify the wSTL-NN. We apply our framework to an occupancy
detection time-series dataset to learn a classifier that predicts the occupancy
status of an office room. | [
"cs.LG",
"cs.NE"
] |
Every year physicians face an increasing demand of image-based diagnosis from
patients, a problem that can be addressed with recent artificial intelligence
methods. In this context, we survey works in the area of automatic report
generation from medical images, with emphasis on methods using deep neural
networks, with respect to: (1) Datasets, (2) Architecture Design, (3)
Explainability and (4) Evaluation Metrics. Our survey identifies interesting
developments, but also remaining challenges. Among them, the current evaluation
of generated reports is especially weak, since it mostly relies on traditional
Natural Language Processing (NLP) metrics, which do not accurately capture
medical correctness. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
The complex problem of image retrieval for diagram images has yet to be
solved. Deep learning methods continue to excel in the fields of
object detection and image classification applied to natural imagery. However,
the application of such methodologies to binary imagery remains limited
due to the lack of crucial features such as texture, color, and intensity
information. This paper presents a deep learning based method for image-based
search for binary patent images by taking advantage of existing large natural
image repositories for image search and sketch-based methods (Sketches are not
identical to diagrams, but they do share some characteristics; for example,
both imagery types are gray scale (binary), composed of contours, and are
lacking in texture).
We begin by using deep learning to generate sketches from natural images for
image retrieval and then train a second deep learning model on the sketches. We
then use our small set of manually labeled patent diagram images via transfer
learning to adapt the image search from sketches of natural images to diagrams.
Our experiment results show the effectiveness of deep learning with transfer
learning for detecting near-identical copies in patent images and querying
similar images based on content. | [
"cs.CV"
] |
Feature representations from pre-trained deep neural networks have been known
to exhibit excellent generalization and utility across a variety of related
tasks. Fine-tuning is by far the simplest and most widely used approach that
seeks to exploit and adapt these feature representations to novel tasks with
limited data. Despite the effectiveness of fine-tuning, it is often sub-optimal
and requires very careful optimization to prevent severe over-fitting to small
datasets. The problem of sub-optimality and over-fitting is due in part to the
large number of parameters used in a typical deep convolutional neural network.
To address these problems, we propose a simple yet effective regularization
method for fine-tuning pre-trained deep networks for the task of k-shot
learning. To prevent overfitting, our key strategy is to cluster the model
parameters while ensuring intra-cluster similarity and inter-cluster diversity
of the parameters, effectively regularizing the dimensionality of the parameter
search space. In particular, we identify groups of neurons within each layer of
a deep network that share similar activation patterns. When the network is to
be fine-tuned for a classification task using only k examples, we propagate a
single gradient to all of the neuron parameters that belong to the same group.
The grouping of neurons is non-trivial as neuron activations depend on the
distribution of the input data. To efficiently search for optimal groupings
conditioned on the input data, we propose a reinforcement learning search
strategy using recurrent networks to learn the optimal group assignments for
each network layer. Experimental results show that our method can be easily
applied to several popular convolutional neural networks and improve upon other
state-of-the-art fine-tuning based k-shot learning strategies by more than 10%. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
The development of practical applications, such as autonomous driving and
robotics, has brought increasing attention to 3D point cloud understanding.
While deep learning has achieved remarkable success on image-based tasks, there
are many unique challenges faced by deep neural networks in processing massive,
unstructured and noisy 3D points. To demonstrate the latest progress of deep
learning for 3D point cloud understanding, this paper summarizes recent
remarkable research contributions in this area from several different
directions (classification, segmentation, detection, tracking, flow estimation,
registration, augmentation and completion), together with commonly used
datasets, metrics and state-of-the-art performances. More information regarding
this survey can be found at:
https://github.com/SHI-Labs/3D-Point-Cloud-Learning. | [
"cs.CV",
"cs.LG"
] |
The rapid development and wide utilization of object detection techniques
have drawn attention to both the accuracy and speed of object detectors. However,
the current state-of-the-art object detection works are either
accuracy-oriented using a large model but leading to high latency or
speed-oriented using a lightweight model but sacrificing accuracy. In this
work, we propose YOLObile framework, a real-time object detection on mobile
devices via compression-compilation co-design. A novel block-punched pruning
scheme is proposed for any kernel size. To improve computational efficiency on
mobile devices, a GPU-CPU collaborative scheme is adopted along with advanced
compiler-assisted optimizations. Experimental results indicate that our pruning
scheme achieves 14$\times$ compression rate of YOLOv4 with 49.0 mAP. Under our
YOLObile framework, we achieve 17 FPS inference speed using GPU on Samsung
Galaxy S20. By incorporating our proposed GPU-CPU collaborative scheme, the
inference speed is increased to 19.1 FPS, and outperforms the original YOLOv4
by 5$\times$ speedup. Source code is at:
\url{https://github.com/nightsnack/YOLObile}. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Registration is a fundamental task in medical robotics and is often a crucial
step for many downstream tasks such as motion analysis, intra-operative
tracking and image segmentation. Popular registration methods such as ANTs and
NiftyReg optimize objective functions for each pair of images from scratch,
which are time-consuming for 3D and sequential images with complex
deformations. Recently, deep learning-based registration approaches such as
VoxelMorph have been emerging and achieve competitive performance. In this
work, we construct a test-time training for deep deformable image registration
to improve the generalization ability of conventional learning-based
registration model. We design multi-scale deep networks to consecutively model
the residual deformations, which is effective for high variational
deformations. Extensive experiments validate the effectiveness of multi-scale
deep registration with test-time training based on Dice coefficient for image
segmentation and mean square error (MSE), normalized local cross-correlation
(NLCC) for tissue dense tracking tasks. Two videos are in
https://www.youtube.com/watch?v=NvLrCaqCiAE and
https://www.youtube.com/watch?v=pEA6ZmtTNuQ | [
"cs.CV",
"cs.LG",
"cs.NE",
"cs.RO",
"eess.IV"
] |
Due to the lack of enough generalization in the state-space, common methods
in Reinforcement Learning (RL) suffer from slow learning speed especially in
the early learning trials. This paper introduces a model-based method in
discrete state-spaces for increasing learning speed in terms of required
experience (but not required computational time) by exploiting generalization
in the experiences of the subspaces. A subspace is formed by choosing a subset
of features in the original state representation (full-space). Generalization
and faster learning in a subspace are due to many-to-one mapping of experiences
from the full-space to each state in the subspace. Nevertheless, due to
inherent perceptual aliasing in the subspaces, the policy suggested by each
subspace does not generally converge to the optimal policy. Our approach,
called Model Based Learning with Subspaces (MoBLeS), calculates confidence
intervals of the estimated Q-values in the full-space and in the subspaces.
These confidence intervals are used in the decision making, such that the agent
benefits the most from the possible generalization while avoiding the
detriment of the perceptual aliasing in the subspaces. Convergence of MoBLeS to
the optimal policy is theoretically investigated. Additionally, we show through
several experiments that MoBLeS improves the learning speed in the early
trials. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
Electronic Health Records (EHRs) provide vital contextual information to
radiologists and other physicians when making a diagnosis. Unfortunately,
because a given patient's record may contain hundreds of notes and reports,
identifying relevant information within these in the short time typically
allotted to a case is very difficult. We propose and evaluate models that
extract relevant text snippets from patient records to provide a rough case
summary intended to aid physicians considering one or more diagnoses. This is
hard because direct supervision (i.e., physician annotations of snippets
relevant to specific diagnoses in medical records) is prohibitively expensive
to collect at scale. We propose a distantly supervised strategy in which we use
groups of International Classification of Diseases (ICD) codes observed in
'future' records as noisy proxies for 'downstream' diagnoses. Using this we
train a transformer-based neural model to perform extractive summarization
conditioned on potential diagnoses. This model defines an attention mechanism
that is conditioned on potential diagnoses (queries) provided by the diagnosing
physician. We train (via distant supervision) and evaluate variants of this
model on EHR data from Brigham and Women's Hospital in Boston and MIMIC-III
(the latter to facilitate reproducibility). Evaluations performed by
radiologists demonstrate that these distantly supervised models yield better
extractive summaries than do unsupervised approaches. Such models may aid
diagnosis by identifying sentences in past patient reports that are clinically
relevant to a potential diagnosis. | [
"cs.LG",
"stat.ML"
] |
Knowledge Graph Embeddings (KGEs) have shown promising performance on link
prediction tasks by mapping the entities and relations from a knowledge graph
into a geometric space (usually a vector space). Ultimately, the plausibility
of the predicted links is measured by using a scoring function over the learned
embeddings (vectors). Therefore, the capability in preserving graph
characteristics including structural aspects and semantics highly depends on
the design of the KGE, as well as the inherited abilities from the underlying
geometry. Many KGEs use the flat geometry which renders them incapable of
preserving complex structures and consequently causes wrong inferences by the
models. To address this problem, we propose a neuro differential KGE that
embeds nodes of a KG on the trajectories of Ordinary Differential Equations
(ODEs). To this end, we represent each relation (edge) in a KG as a vector
field on a smooth Riemannian manifold. We specifically parameterize ODEs by a
neural network to represent various complex shape manifolds and more
importantly complex shape vector fields on the manifold. Therefore, the
underlying embedding space is capable of getting various geometric forms to
encode complexity in subgraph structures with different motifs. Experiments on
synthetic and benchmark datasets as well as social network KGs justify the ODE
trajectories as a means to structure preservation and consequently avoiding
wrong inferences over state-of-the-art KGE models. | [
"cs.LG",
"cs.AI"
] |
This paper proposes the idea of using a generative adversarial network (GAN)
to assist a novice user in designing real-world shapes with a simple interface.
The user edits a voxel grid with a painting interface (like Minecraft). Yet, at
any time, he/she can execute a SNAP command, which projects the current voxel
grid onto a latent shape manifold with a learned projection operator and then
generates a similar, but more realistic, shape using a learned generator
network. Then the user can edit the resulting shape and snap again until he/she
is satisfied with the result. The main advantage of this approach is that the
projection and generation operators assist novice users to create 3D models
characteristic of a background distribution of object shapes, but without
having to specify all the details. The core new research idea is to use a GAN
to support this application. 3D GANs have previously been used for shape
generation, interpolation, and completion, but never for interactive modeling.
The new challenge for this application is to learn a projection operator that
takes an arbitrary 3D voxel model and produces a latent vector on the shape
manifold from which a similar and realistic shape can be generated. We develop
algorithms for this and other steps of the SNAP processing pipeline and
integrate them into a simple modeling tool. Experiments with these algorithms
and tool suggest that GANs provide a promising approach to computer-assisted
interactive modeling. | [
"cs.CV",
"cs.GR"
] |
The extension of image generation to video generation turns out to be a very
difficult task, since the temporal dimension of videos introduces an extra
challenge during the generation process. Besides, due to the limitation of
memory and training stability, the generation becomes increasingly challenging
with the increase of the resolution/duration of videos. In this work, we
exploit the idea of progressive growing of Generative Adversarial Networks
(GANs) for higher resolution video generation. In particular, we begin to
produce video samples of low-resolution and short-duration, and then
progressively increase resolution and duration separately (or jointly) by
adding new spatiotemporal convolutional layers to the current networks.
Starting from the learning on a very raw-level spatial appearance and temporal
movement of the video distribution, the proposed progressive method learns
spatiotemporal information incrementally to generate higher resolution videos.
Furthermore, we introduce a sliced version of Wasserstein GAN (SWGAN) loss to
improve the distribution learning on the video data of high-dimension and
mixed-spatiotemporal distribution. SWGAN loss replaces the distance between
joint distributions by that of one-dimensional marginal distributions, making
the loss easier to compute. We evaluate the proposed model on our collected
face video dataset of 10,900 videos to generate photorealistic face videos of
256x256x32 resolution. In addition, our model also reaches a record inception
score of 14.57 in unsupervised action recognition dataset UCF-101. | [
"cs.CV",
"cs.AI",
"stat.ML"
] |
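The sliced Wasserstein idea can be sketched directly: project both sample sets onto random one-dimensional directions, sort, and compare the resulting marginals. The projection count and feature dimensionality below are illustrative, not the SWGAN configuration.

```python
# Sliced Wasserstein distance via random 1D projections.
import torch

def sliced_wasserstein(real, fake, num_projections=64):
    d = real.size(1)
    theta = torch.randn(d, num_projections)
    theta = theta / theta.norm(dim=0, keepdim=True)       # random unit directions
    proj_r = (real @ theta).sort(dim=0).values             # sorted 1D marginals
    proj_f = (fake @ theta).sort(dim=0).values
    return ((proj_r - proj_f) ** 2).mean()

real = torch.randn(256, 512)     # e.g. flattened features of real video samples
fake = torch.randn(256, 512)     # features of generated samples (same batch size)
loss = sliced_wasserstein(real, fake)
```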
Neural Architecture Search (NAS) was first proposed to achieve
state-of-the-art performance through the discovery of new architecture
patterns, without human intervention. An over-reliance on expert knowledge in
the search space design has however led to increased performance (local optima)
without significant architectural breakthroughs, thus preventing truly novel
solutions from being reached. In this work we 1) are the first to investigate
casting NAS as a problem of finding the optimal network generator and 2) we
propose a new, hierarchical and graph-based search space capable of
representing an extremely large variety of network types, yet only requiring
few continuous hyper-parameters. This greatly reduces the dimensionality of the
problem, enabling the effective use of Bayesian Optimisation as a search
strategy. At the same time, we expand the range of valid architectures,
motivating a multi-objective learning approach. We demonstrate the
effectiveness of this strategy on six benchmark datasets and show that our
search space generates extremely lightweight yet highly competitive models. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Virtually all aspects of modern life depend on space technology. Thanks to
the great advancement of computer vision in general and deep learning-based
techniques in particular, over the decades, the world witnessed the growing use
of deep learning in solving problems for space applications, such as
self-driving robot, tracers, insect-like robot on cosmos and health monitoring
of spacecraft. These are just some prominent examples that has advanced space
industry with the help of deep learning. However, the success of deep learning
models requires a lot of training data in order to have decent performance,
while on the other hand, there are very limited amount of publicly available
space datasets for the training of deep learning models. Currently, there is no
public datasets for space-based object detection or instance segmentation,
partly because manually annotating object segmentation masks is very time
consuming as they require pixel-level labelling, not to mention the challenge
of obtaining images from space. In this paper, we aim to fill this gap by
releasing a dataset for spacecraft detection, instance segmentation and part
recognition. The main contribution of this work is the development of the
dataset using images of space stations and satellites, with rich annotations
including bounding boxes of spacecrafts and masks to the level of object parts,
which are obtained with a mixture of automatic processes and manual efforts. We
also provide evaluations with state-of-the-art methods in object detection and
instance segmentation as a benchmark for the dataset. The link for downloading
the proposed dataset can be found on
https://github.com/Yurushia1998/SatelliteDataset. | [
"cs.CV"
] |
Attribute image manipulation has been a very active topic since the
introduction of Generative Adversarial Networks (GANs). Exploring the
disentangled attribute space within a transformation is a very challenging task
due to the multiple and mutually-inclusive nature of the facial images, where
different labels (eyeglasses, hats, hair, identity, etc.) can co-exist at the
same time. Several works address this issue either by exploiting the modality
of each domain/attribute using a conditional random vector noise, or extracting
the modality from an exemplary image. However, existing methods cannot handle
both random and reference transformations for multiple attributes, which limits
the generality of the solutions. In this paper, we successfully exploit a
multimodal representation that handles all attributes, be it guided by random
noise or exemplar images, while only using the underlying domain information of
the target domain. We present extensive qualitative and quantitative results
for facial datasets and several different attributes that show the superiority
of our method. Additionally, our method is capable of adding, removing or
changing either fine-grained or coarse attributes by using an image as a
reference or by exploring the style distribution space, and it can be easily
extended to head-swapping and face-reenactment applications without being
trained on videos. | [
"cs.CV"
] |
This work proposes a new method to accurately complete sparse LiDAR maps
guided by RGB images. For autonomous vehicles and robotics the use of LiDAR is
indispensable in order to achieve precise depth predictions. A multitude of
applications depend on the awareness of their surroundings, and use depth cues
to reason and react accordingly. On the one hand, monocular depth prediction
methods fail to generate absolute and precise depth maps. On the other hand,
stereoscopic approaches are still significantly outperformed by LiDAR based
approaches. The goal of the depth completion task is to generate dense depth
predictions from sparse and irregular point clouds which are mapped to a 2D
plane. We propose a new framework which extracts both global and local
information in order to produce proper depth maps. We argue that simple depth
completion does not require a deep network. However, we additionally propose a
fusion method with RGB guidance from a monocular camera in order to leverage
object information and to correct mistakes in the sparse input. This improves
the accuracy significantly. Moreover, confidence masks are exploited in order
to take into account the uncertainty in the depth predictions from each
modality. This fusion method outperforms the state-of-the-art and ranks first
on the KITTI depth completion benchmark. Our code with visualizations is
available. | [
"cs.CV"
] |
Video Visual Relation Detection (VidVRD), has received significant attention
of our community over recent years. In this paper, we apply the
state-of-the-art video object tracklet detection pipeline MEGA and deepSORT to
generate tracklet proposals. Then we perform VidVRD in a tracklet-based manner
without any pre-cutting operations. Specifically, we design a tracklet-based
visual Transformer. It contains a temporal-aware decoder which performs feature
interactions between the tracklets and learnable predicate query embeddings,
and finally predicts the relations. Experimental results strongly demonstrate
the superiority of our method, which outperforms other methods by a large
margin on the Video Relation Understanding (VRU) Grand Challenge in ACM
Multimedia 2021. Codes are released at
https://github.com/Dawn-LX/VidVRD-tracklets. | [
"cs.CV"
] |
Deep learning models with attention mechanisms have achieved exceptional
results for many tasks, including language tasks and recommendation systems.
Whereas previous studies have emphasized allocation of phone agents, we focused
on inbound call prediction for customer service. A common method of analyzing
user history behaviors is to extract all types of aggregated features over time,
but that method may fail to detect users' behavioral sequences. Therefore, we
created a new approach, ET-USB, that incorporates users' sequential and
nonsequential features; we apply the powerful Transformer encoder, a
self-attention network model, to capture the information underlying user
behavior sequences. ET-USB is helpful in various business scenarios at Cathay
Financial Holdings. We conducted experiments to test the proposed network
structure's ability to process various dimensions of behavior data; the results
suggest that ET-USB delivers results superior to those delivered by other
deep-learning models. | [
"cs.LG"
] |
We present a method for reconstructing images viewed by observers based only
on their eye movements. By exploring the relationships between gaze patterns
and image stimuli, the "What Are You Looking At?" (WAYLA) system learns to
synthesize photo-realistic images that are similar to the original pictures
being viewed. The WAYLA approach is based on the Conditional Generative
Adversarial Network (Conditional GAN) image-to-image translation technique of
Isola et al. We consider two specific applications - the first, of
reconstructing newspaper images from gaze heat maps, and the second, of
detailed reconstruction of images containing only text. The newspaper image
reconstruction process is divided into two image-to-image translation
operations, the first mapping gaze heat maps into image segmentations, and the
second mapping the generated segmentation into a newspaper image. We validate
the performance of our approach using various evaluation metrics, along with
human visual inspection. All results confirm the ability of our network to
perform image generation tasks using eye tracking data. | [
"cs.CV"
] |
The principle of Photo Response Non Uniformity (PRNU) is often exploited to
deduce the identity of the smartphone device whose camera or sensor was used to
acquire a certain image. In this work, we design an algorithm that perturbs a
face image acquired using a smartphone camera such that (a) sensor-specific
details pertaining to the smartphone camera are suppressed (sensor
anonymization); (b) the sensor pattern of a different device is incorporated
(sensor spoofing); and (c) biometric matching using the perturbed image is not
affected (biometric utility). We employ a simple approach utilizing the Discrete
Cosine Transform to achieve the aforementioned objectives. Experiments
conducted on the MICHE-I and OULU-NPU datasets, which contain periocular and
facial data acquired using 12 smartphone cameras, demonstrate the efficacy of
the proposed de-identification algorithm on three different PRNU-based sensor
identification schemes. This work has application in sensor forensics and
personal privacy. | [
"cs.CV",
"eess.IV"
] |
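As a purely illustrative sketch of DCT-domain perturbation, the code below damps high-frequency coefficients (where PRNU-like sensor noise tends to reside) and optionally injects another device's noise residual. The frequency cutoff and scaling factor are assumptions, not the parameters used in the paper.

```python
# Illustrative only: attenuate high-frequency DCT coefficients
# (anonymization) and optionally add a target device's residual (spoofing).
import numpy as np
from scipy.fft import dctn, idctn

def perturb_prnu(img, spoof_residual=None, cutoff=0.6, alpha=0.05):
    """img: (H, W) grayscale float image in [0, 1]."""
    h, w = img.shape
    coeffs = dctn(img, norm="ortho")
    yy, xx = np.mgrid[0:h, 0:w]
    high_freq = (yy / h + xx / w) > cutoff   # crude high-frequency mask (assumed)
    coeffs[high_freq] *= 0.5                 # damp sensor-noise-dominated band
    out = idctn(coeffs, norm="ortho")
    if spoof_residual is not None:           # inject another device's pattern
        out = out + alpha * spoof_residual
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
face = rng.uniform(0, 1, (128, 128))
print(perturb_prnu(face, spoof_residual=rng.normal(0, 1, (128, 128))).shape)
```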
Transfer learning, which aims at utilizing knowledge learned from one problem
(source domain) to solve a different but related problem (target domain), has
attracted wide research attention. However, current transfer learning methods
are mostly uninterpretable, especially to people without ML expertise. In this
extended abstract, we briefly introduce two knowledge graph (KG) based
frameworks for human-understandable transfer learning explanation. The
first one explains the transferability of features learned by Convolutional
Neural Network (CNN) from one domain to another through pre-training and
fine-tuning, while the second justifies the model of a target domain predicted
by models from multiple source domains in zero-shot learning (ZSL). Both
methods utilize a KG and its reasoning capability to provide rich,
human-understandable explanations of the transfer procedure. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Object recognition has become a crucial part of machine learning and computer
vision recently. The current approach to object recognition involves Deep
Learning and uses Convolutional Neural Networks to learn the pixel patterns of
the objects implicitly through backpropagation. However, CNNs require thousands
of examples in order to generalize successfully and often require heavy
computing resources for training. This is considered rather sluggish when
compared to the human ability to generalize and learn new categories given just
a single example. Additionally, CNNs make it difficult to programmatically
modify or intuitively interpret their learned representations.
We propose a computational model that can successfully learn an object
category from as few as one example and allows its learning style to be
tailored explicitly to a scenario. Our model decomposes each image into two
attributes: shape and color distribution. We then use a Bayesian criterion to
probabilistically determine the likelihood of each category. The model takes
each factor into account based on importance and calculates the conditional
probability of the object belonging to each learned category. Our model is not
only applicable to visual scenarios; it can also be applied in broader, more
practical settings such as Natural Language Processing and any other domain
where individual attributes can be retrieved and constructed. Because the only
requirement of our model is the ability to retrieve and construct individual
attributes such as shape and color, it can be applied to essentially any class
of visual objects. | [
"cs.CV"
] |
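The decision rule described above can be sketched as a weighted combination of per-attribute likelihoods around one-shot prototypes. The Gaussian likelihood and the importance weights below are assumptions chosen for illustration, not the paper's exact formulation.

```python
# Sketch: score each stored category by a weighted sum of per-attribute
# log-likelihoods (shape, colour) around its single stored example.
import numpy as np

def attribute_log_likelihood(x, prototype, sigma=0.1):
    # Isotropic Gaussian around the one-shot prototype (assumed form).
    return -np.sum((x - prototype) ** 2) / (2 * sigma ** 2)

def classify(shape, color, categories, w_shape=0.6, w_color=0.4):
    """categories: {name: (shape_prototype, color_prototype)}, one example each."""
    scores = {
        name: w_shape * attribute_log_likelihood(shape, s_proto)
            + w_color * attribute_log_likelihood(color, c_proto)
        for name, (s_proto, c_proto) in categories.items()
    }
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
cats = {"cup": (rng.random(16), rng.random(8)),
        "ball": (rng.random(16), rng.random(8))}
query_shape = cats["cup"][0] + 0.01 * rng.normal(size=16)
query_color = cats["cup"][1] + 0.01 * rng.normal(size=8)
print(classify(query_shape, query_color, cats))  # "cup"
```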
Visual perception is critically influenced by the focus of attention. Due to
limited resources, it is well known that neural representations are biased in
favor of attended locations. Using concurrent eye-tracking and functional
Magnetic Resonance Imaging (fMRI) recordings from a large cohort of human
subjects watching movies, we first demonstrate that leveraging gaze
information, in the form of attentional masking, can significantly improve
brain response prediction accuracy in a neural encoding model. Next, we propose
a novel approach to neural encoding by including a trainable soft-attention
module. Using our new approach, we demonstrate that it is possible to learn
visual attention policies by end-to-end learning merely on fMRI response data,
and without relying on any eye-tracking. Interestingly, we find that attention
locations estimated by the model on independent data agree well with the
corresponding eye fixation patterns, despite no explicit supervision to do so.
Together, these findings suggest that attention modules can be instrumental in
neural encoding models of visual stimuli. | [
"cs.CV",
"cs.LG",
"q-bio.NC"
] |
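A minimal sketch of a trainable soft-attention module for neural encoding, in the spirit of the approach above: a small convolution predicts a spatial attention map, the stimulus features are re-weighted by it, and a linear readout predicts voxel responses. Shapes and names are illustrative assumptions.

```python
# Sketch: attention-weighted pooling of visual features followed by a linear
# readout to predicted voxel responses (illustrative, not the paper's model).
import torch
import torch.nn as nn

class SoftAttentionEncoder(nn.Module):
    def __init__(self, in_ch=64, n_voxels=1000):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(in_ch, 1, 1), nn.Flatten(2))
        self.readout = nn.Linear(in_ch, n_voxels)

    def forward(self, feats):
        # feats: (B, C, H, W) visual features of a movie frame
        a = torch.softmax(self.attn(feats), dim=-1)   # (B, 1, H*W) attention map
        pooled = (feats.flatten(2) * a).sum(-1)       # attention-weighted pooling
        return self.readout(pooled)                   # (B, n_voxels)

frames = torch.randn(2, 64, 7, 7)
print(SoftAttentionEncoder()(frames).shape)  # torch.Size([2, 1000])
```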
Model-based reinforcement learning (RL) is appealing because (i) it enables
planning and thus more strategic exploration, and (ii) by decoupling dynamics
from rewards, it enables fast transfer to new reward functions. However,
learning an accurate Markov Decision Process (MDP) over high-dimensional states
(e.g., raw pixels) is extremely challenging because it requires function
approximation, which leads to compounding errors. Instead, to avoid compounding
errors, we propose learning an abstract MDP over abstract states:
low-dimensional coarse representations of the state (e.g., capturing agent
position, ignoring other objects). We assume access to an abstraction function
that maps the concrete states to abstract states. In our approach, we construct
an abstract MDP, which grows through strategic exploration via planning.
Similar to hierarchical RL approaches, the abstract actions of the abstract MDP
are backed by learned subpolicies that navigate between abstract states. Our
approach achieves strong results on three of the hardest Arcade Learning
Environment games (Montezuma's Revenge, Pitfall!, and Private Eye), including
superhuman performance on Pitfall! without demonstrations. After training on
one task, we can reuse the learned abstract MDP for new reward functions,
achieving higher reward in 1000x fewer samples than model-free methods trained
from scratch. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
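The abstract MDP above can be pictured as a growing transition graph over abstract states, with planning steering exploration towards states that still have untried abstract actions. The toy sketch below illustrates only that data structure; it omits the learned subpolicies and is not the full algorithm.

```python
# Toy sketch: record abstract transitions and plan (BFS) to a frontier state
# that still has unexplored abstract actions.
from collections import defaultdict, deque

class AbstractMDP:
    def __init__(self, actions):
        self.actions = actions
        self.transitions = defaultdict(dict)   # s -> {a: s'}

    def record(self, s, a, s_next):
        self.transitions[s][a] = s_next

    def plan_to_frontier(self, start):
        """Return a shortest action sequence to a state with an untried action."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            s, path = queue.popleft()
            if set(self.actions) - set(self.transitions[s]):
                return path                     # s still has unexplored actions
            for a, s_next in self.transitions[s].items():
                if s_next not in seen:
                    seen.add(s_next)
                    queue.append((s_next, path + [a]))
        return []

mdp = AbstractMDP(actions=["left", "right", "down"])
mdp.record((0, 0), "right", (1, 0))
mdp.record((0, 0), "left", (-1, 0))
mdp.record((0, 0), "down", (0, -1))
print(mdp.plan_to_frontier((0, 0)))  # e.g. ['right'] -- first frontier found
```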
Facial expression recognition has been an active research area over the past
few decades, and it is still challenging due to the high intra-class variation.
Traditional approaches for this problem rely on hand-crafted features such as
SIFT, HOG and LBP, followed by a classifier trained on a database of images or
videos.
Most of these works perform reasonably well on datasets of images captured in
controlled conditions, but fail to perform as well on more challenging datasets
with more image variation and partial faces.
In recent years, several works proposed an end-to-end framework for facial
expression recognition, using deep learning models.
Despite the better performance of these works, there still seems to be
considerable room for improvement.
In this work, we propose a deep learning approach based on attentional
convolutional network, which is able to focus on important parts of the face,
and achieves significant improvement over previous models on multiple datasets,
including FER-2013, CK+, FERG, and JAFFE.
We also use a visualization technique which is able to find important face
regions for detecting different emotions, based on the classifier's output.
Through experimental results, we show that different emotions seem to be
sensitive to different parts of the face. | [
"cs.CV"
] |
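A rough sketch (not the paper's exact architecture) of the attentional idea above: a spatial attention map gates convolutional features before pooling and classification into emotion categories. Layer sizes and input resolution are assumptions.

```python
# Sketch: CNN backbone + sigmoid spatial attention gating the feature map,
# then attention-weighted global pooling and an emotion classifier.
import torch
import torch.nn as nn

class AttentionalCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        f = self.backbone(x)               # (B, 64, H/4, W/4)
        a = self.attention(f)              # (B, 1, H/4, W/4), salient face regions
        g = (f * a).mean(dim=(2, 3))       # attention-gated global pooling
        return self.classifier(g)

faces = torch.randn(8, 1, 48, 48)          # grayscale crops, FER-2013-sized
print(AttentionalCNN()(faces).shape)        # torch.Size([8, 7])
```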
Transformers provide promising accuracy and have become popular in various
domains such as natural language processing and computer vision.
However, due to their massive number of model parameters, memory and
computation requirements, they are not suitable for resource-constrained
low-power devices. Even with high-performance and specialized devices, the
memory bandwidth can become a performance-limiting bottleneck. In this paper,
we present a performance analysis of state-of-the-art vision transformers on
several devices. We propose to reduce the overall memory footprint and memory
transfers by clustering the model parameters. We show that by using only 64
clusters to represent model parameters, it is possible to reduce the data
transfer from the main memory by more than 4x, achieve up to 22% speedup and
39% energy savings on mobile devices with less than 0.1% accuracy loss. | [
"cs.LG",
"cs.CV"
] |
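The parameter-clustering idea above can be sketched as k-means quantization of a weight tensor into a 64-entry codebook, so that only 8-bit indices plus a tiny codebook need to be moved from memory. The cluster count follows the description; the layer choice and library below are illustrative assumptions.

```python
# Sketch: quantize a weight tensor to a 64-entry codebook (uint8 indices),
# then reconstruct it by codebook lookup.
import numpy as np
from sklearn.cluster import KMeans

def cluster_weights(w, n_clusters=64):
    flat = w.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(flat)
    indices = km.labels_.astype(np.uint8)               # 1 byte per parameter
    codebook = km.cluster_centers_.astype(np.float32).ravel()
    return indices.reshape(w.shape), codebook

def reconstruct(indices, codebook):
    return codebook[indices]

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, (256, 256)).astype(np.float32)
idx, book = cluster_weights(w)
w_hat = reconstruct(idx, book)
print(idx.dtype, book.shape, float(np.abs(w - w_hat).mean()))
# float32 weights -> uint8 indices + 64-entry codebook: ~4x smaller transfers
```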
Vision-based sign language recognition aims at helping deaf people to
communicate with others. However, most existing sign language datasets are
limited to a small number of words. Due to the limited vocabulary size, models
learned from those datasets cannot be applied in practice. In this paper, we
introduce a new large-scale Word-Level American Sign Language (WLASL) video
dataset, containing more than 2000 words performed by over 100 signers. This
dataset will be made publicly available to the research community. To our
knowledge, it is by far the largest public ASL dataset to facilitate word-level
sign recognition research.
Based on this new large-scale dataset, we are able to experiment with several
deep learning methods for word-level sign recognition and evaluate their
performances in large-scale scenarios. Specifically, we implement and compare
two different models, i.e., (i) a holistic visual appearance-based approach and
(ii) a 2D human pose-based approach. Both models are valuable baselines that will
benefit the community for method benchmarking. Moreover, we also propose a
novel pose-based temporal graph convolution networks (Pose-TGCN) that models
spatial and temporal dependencies in human pose trajectories simultaneously,
which has further boosted the performance of the pose-based method. Our results
show that pose-based and appearance-based models achieve comparable
performances up to 66% at top-10 accuracy on 2,000 words/glosses, demonstrating
the validity and challenges of our dataset. Our dataset and baseline deep
models are available at \url{https://dxli94.github.io/WLASL/}. | [
"cs.CV",
"cs.HC",
"cs.MM",
"cs.NE"
] |
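A compact sketch of a pose-based spatial-temporal graph convolution block over keypoint trajectories, in the spirit of the pose-based baselines above; the adjacency matrix, layer sizes, and single-block design are illustrative assumptions, not Pose-TGCN itself.

```python
# Sketch: a spatial graph convolution over joints followed by a temporal
# convolution over frames (ST-GCN style block).
import torch
import torch.nn as nn

class SpatialTemporalGraphConv(nn.Module):
    def __init__(self, in_ch, out_ch, adjacency, t_kernel=9):
        super().__init__()
        self.register_buffer("A", adjacency)           # (V, V) joint graph
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, (t_kernel, 1),
                                  padding=(t_kernel // 2, 0))

    def forward(self, x):
        # x: (B, C, T, V) -- channels, frames, joints
        x = self.spatial(x)
        x = torch.einsum("bctv,vw->bctw", x, self.A)    # aggregate graph neighbours
        return torch.relu(self.temporal(x))             # mix information across time

V = 27                                                   # e.g. body + hand joints (assumed)
A = torch.eye(V)                                         # placeholder adjacency
block = SpatialTemporalGraphConv(2, 64, A)
poses = torch.randn(4, 2, 50, V)                         # (x, y) over 50 frames
print(block(poses).shape)                                # torch.Size([4, 64, 50, 27])
```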
We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with the background of machine learning, deep
learning, and reinforcement learning. Next, we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant
update. | [
"cs.LG"
] |