text | label |
---|---|
Feature distillation is an effective way to improve the performance of a
smaller student model, which has fewer parameters and lower computation cost
than the larger teacher model. Unfortunately, there is a common obstacle: the
gap in semantic feature structure between the intermediate features of teacher
and student. The classic scheme transforms intermediate features by adding an
adaptation module, such as a naive convolutional, attention-based, or more
complicated one. However, this introduces two problems: a) the adaptation
module brings more parameters into training; b) an adaptation module with
random initialization or a special transformation is not friendly for
distilling a pre-trained student. In this paper, we present Matching Guided
Distillation (MGD) as an efficient and parameter-free way to solve these
problems. The key idea of MGD is to pose the matching of teacher channels with
student channels as an assignment problem. We compare three solutions of the
assignment problem for reducing channels from teacher features with a partial
distillation loss. The overall training takes a coordinate-descent approach
between two optimization objectives: assignment updates and parameter updates.
Since MGD only contains normalization or pooling operations with negligible
computation cost, it can be flexibly plugged into networks alongside other
distillation methods. | [
"cs.CV",
"cs.AI"
] |
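A minimal sketch of the channel-matching idea from the abstract above, posed as an assignment problem and solved with SciPy's Hungarian solver. The L2 channel cost, the tensor shapes, and the partial teacher-to-student matching are illustrative assumptions, not the paper's exact MGD formulation or loss.

```python
# Sketch: matching teacher channels to student channels as an assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_channels(teacher_feat, student_feat):
    """teacher_feat: (Ct, H, W), student_feat: (Cs, H, W) with Ct >= Cs.
    Returns, for every student channel, the index of its matched teacher channel."""
    Ct, Cs = teacher_feat.shape[0], student_feat.shape[0]
    t = teacher_feat.reshape(Ct, -1)
    s = student_feat.reshape(Cs, -1)
    # Cost of assigning teacher channel i to student channel j: L2 distance.
    cost = np.linalg.norm(t[:, None, :] - s[None, :, :], axis=-1)  # (Ct, Cs)
    rows, cols = linear_sum_assignment(cost)   # partial assignment, Cs matched pairs
    matched_teacher = np.empty(Cs, dtype=int)
    matched_teacher[cols] = rows
    return matched_teacher

# Toy usage: 8 teacher channels matched to 4 student channels.
rng = np.random.default_rng(0)
idx = match_channels(rng.normal(size=(8, 7, 7)), rng.normal(size=(4, 7, 7)))
print(idx)  # teacher channel index matched to each student channel
```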
We present a framework for training GANs with explicit control over generated
images. We are able to control the generated image by setting exact attributes
such as age, pose, expression, etc. Most approaches for editing GAN-generated
images achieve partial control by leveraging the latent space disentanglement
properties, obtained implicitly after standard GAN training. Such methods are
able to change the relative intensity of certain attributes, but not explicitly
set their values. Recently proposed methods, designed for explicit control over
human faces, harness morphable 3D face models to allow fine-grained control
capabilities in GANs. Unlike these methods, our control is not constrained to
morphable 3D face model parameters and is extendable beyond the domain of human
faces. Using contrastive learning, we obtain GANs with an explicitly
disentangled latent space. This disentanglement is utilized to train
control-encoders mapping human-interpretable inputs to suitable latent vectors,
thus allowing explicit control. In the domain of human faces we demonstrate
control over identity, age, pose, expression, hair color and illumination. We
also demonstrate control capabilities of our framework in the domains of
painted portraits and dog image generation. We demonstrate that our approach
achieves state-of-the-art performance both qualitatively and quantitatively. | [
"cs.CV"
] |
The value function is the central notion of Reinforcement Learning (RL). Value
estimation, especially with function approximation, can be challenging since it
involves the stochasticity of environmental dynamics and reward signals that
can be sparse and delayed in some cases. A typical model-free RL algorithm
usually estimates the values of a policy by Temporal Difference (TD) or Monte
Carlo (MC) algorithms directly from rewards, without explicitly taking dynamics
into consideration. In this paper, we propose Value Decomposition with Future
Prediction (VDFP), providing an explicit two-step understanding of the value
estimation process: 1) first foresee the latent future, and 2) then evaluate
it. We analytically decompose the value function into a latent future dynamics
part and a policy-independent trajectory return part, inducing a way to model
latent dynamics and returns separately in value estimation. Further, we derive
a practical deep RL algorithm, consisting of a convolutional model to learn
compact trajectory representation from past experiences, a conditional
variational auto-encoder to predict the latent future dynamics and a convex
return model that evaluates trajectory representation. In experiments, we
empirically demonstrate the effectiveness of our approach for both off-policy
and on-policy RL in several OpenAI Gym continuous control tasks as well as a
few challenging variants with delayed reward. | [
"cs.LG"
] |
Image ordinal estimation aims to predict the ordinal label of a given image
and can be cast as an ordinal regression problem. Recent methods
formulate an ordinal regression problem as a series of binary classification
problems. Such methods cannot ensure that the global ordinal relationship is
preserved since the relationships among different binary classifiers are
neglected. We propose a novel ordinal regression approach, termed Convolutional
Ordinal Regression Forest or CORF, for image ordinal estimation, which can
integrate ordinal regression and differentiable decision trees with a
convolutional neural network for obtaining precise and stable global ordinal
relationships. The advantages of the proposed CORF are twofold. First, instead
of learning a series of binary classifiers \emph{independently}, the proposed
method aims at learning an ordinal distribution for ordinal regression by
optimizing those binary classifiers \emph{simultaneously}. Second, the
differentiable decision trees in the proposed CORF can be trained together with
the ordinal distribution in an end-to-end manner. The effectiveness of the
proposed CORF is verified on two image ordinal estimation tasks, i.e., facial
age estimation and image aesthetic assessment, showing significant improvements
and better stability over the state-of-the-art ordinal regression methods. | [
"cs.CV"
] |
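The abstract above contrasts independently learned binary classifiers with a jointly learned ordinal distribution. Below is a small NumPy sketch of the underlying ordinal-to-binary decomposition (cumulative "rank greater than k" targets and their decoding); it illustrates only the representation, not the CORF forest itself.

```python
# Ordinal label <-> cumulative binary targets, the decomposition ordinal
# regression methods build on (plain NumPy, purely illustrative).
import numpy as np

def encode_ordinal(y, num_classes):
    """y=3, num_classes=5 -> [1, 1, 1, 0]: 'rank > k' indicators."""
    return (np.arange(num_classes - 1) < y).astype(float)

def decode_ordinal(probs):
    """probs[k] = P(rank > k). Predicted rank = rounded sum of threshold probabilities."""
    return int(np.round(np.sum(probs)))

print(encode_ordinal(3, 5))                              # [1. 1. 1. 0.]
print(decode_ordinal(np.array([0.9, 0.8, 0.4, 0.1])))    # 2
```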
Being able to predict what may happen in the future requires an in-depth
understanding of the physical and causal rules that govern the world. A model
that is able to do so has a number of appealing applications, from robotic
planning to representation learning. However, learning to predict raw future
observations, such as frames in a video, is exceedingly challenging -- the
ambiguous nature of the problem can cause a naively designed model to average
together possible futures into a single, blurry prediction. Recently, this has
been addressed by two distinct approaches: (a) variational latent variable
models that explicitly model underlying stochasticity and (b)
adversarially-trained models that aim to produce naturalistic images. However,
a standard latent variable model can struggle to produce realistic results, and
a standard adversarially-trained model underutilizes latent variables and fails
to produce diverse predictions. We show that these distinct methods are in fact
complementary. Combining the two produces predictions that look more realistic
to human raters and better cover the range of possible futures. Our method
outperforms prior and concurrent work in these aspects. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
The principle of optimism in the face of uncertainty underpins many
theoretically successful reinforcement learning algorithms. In this paper we
provide a general framework for designing, analyzing and implementing such
algorithms in the episodic reinforcement learning problem. This framework is
built upon Lagrangian duality, and demonstrates that every model-optimistic
algorithm that constructs an optimistic MDP has an equivalent representation as
a value-optimistic dynamic programming algorithm. These two classes of
algorithms were typically thought to be distinct, with model-optimistic
algorithms benefiting from a cleaner probabilistic analysis while
value-optimistic algorithms are easier to implement and thus more practical.
With the framework developed in this paper, we show that it is possible to get
the best of both worlds by providing a class of algorithms which have a
computationally efficient dynamic-programming implementation and also a simple
probabilistic analysis. Besides being able to capture many existing algorithms
in the tabular setting, our framework can also address large-scale problems
under realizable function approximation, where it enables a simple model-based
analysis of some recently proposed methods. | [
"cs.LG",
"stat.ML"
] |
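As a rough illustration of what a value-optimistic dynamic programming algorithm looks like, here is a generic tabular sketch in the spirit of UCBVI-style methods: finite-horizon value iteration over an empirical model with an exploration bonus. The bonus form, clipping, and constant are assumptions for illustration, not the framework developed in the paper.

```python
# Generic value-optimistic dynamic programming sketch (not the paper's method).
import numpy as np

def optimistic_value_iteration(P_hat, R_hat, counts, horizon, c=1.0):
    """P_hat: (S, A, S) empirical transitions, R_hat: (S, A) empirical rewards,
    counts: (S, A) visit counts. Returns optimistic Q-values for each step."""
    S, A = R_hat.shape
    Q = np.zeros((horizon + 1, S, A))
    V = np.zeros((horizon + 1, S))
    bonus = c * np.sqrt(1.0 / np.maximum(counts, 1))     # optimism term
    for h in range(horizon - 1, -1, -1):
        Q[h] = np.clip(R_hat + bonus + P_hat @ V[h + 1], 0, horizon)
        V[h] = Q[h].max(axis=1)
    return Q
```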
In this paper, we propose HOME, a framework tackling the motion forecasting
problem with an image output representing the probability distribution of the
agent's future location. This method allows for a simple architecture with
classic convolutional networks coupled with an attention mechanism for agent
interactions, and outputs an unconstrained 2D top-view representation of the
agent's possible future. Based on this output, we design two methods to sample
a finite set of the agent's future locations. These methods allow us to control the
optimization trade-off between miss rate and final displacement error for
multiple modalities without having to retrain any part of the model. We apply
our method to the Argoverse Motion Forecasting Benchmark and achieve 1st place
on the online leaderboard. | [
"cs.CV",
"cs.RO"
] |
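A toy sketch of the sampling stage described above: picking a finite set of candidate future locations from a 2D probability heatmap by greedy peak selection with local suppression. The suppression radius and number of samples are made-up parameters, not the paper's sampling algorithms.

```python
# Greedy peak sampling from a probability heatmap (illustrative only).
import numpy as np

def sample_locations(heatmap, k=6, radius=2):
    """heatmap: (H, W) non-negative scores. Returns k (row, col) peaks,
    suppressing a square neighbourhood around each selected peak."""
    h = heatmap.astype(float).copy()
    peaks = []
    for _ in range(k):
        r, c = np.unravel_index(np.argmax(h), h.shape)
        peaks.append((r, c))
        h[max(0, r - radius):r + radius + 1, max(0, c - radius):c + radius + 1] = -np.inf
    return peaks

rng = np.random.default_rng(0)
print(sample_locations(rng.random((16, 16)), k=3))
```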
Recent empirical works have successfully used unlabeled data to learn feature
representations that are broadly useful in downstream classification tasks.
Several of these methods are reminiscent of the well-known word2vec embedding
algorithm: leveraging availability of pairs of semantically "similar" data
points and "negative samples," the learner forces the inner product of
representations of similar pairs with each other to be higher on average than
with negative samples. The current paper uses the term contrastive learning for
such algorithms and presents a theoretical framework for analyzing them by
introducing latent classes and hypothesizing that semantically similar points
are sampled from the same latent class. This framework allows us to show
provable guarantees on the performance of the learned representations on the
average classification task that consists of a subset of the same set of
latent classes. Our generalization bound also shows that learned
representations can reduce (labeled) sample complexity on downstream tasks. We
conduct controlled experiments in both the text and image domains to support
the theory. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
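A minimal NumPy sketch of the contrastive objective the framework analyzes: the anchor's inner product with its similar point is pushed above its inner product with a negative sample via a logistic loss. One negative per anchor and the batch shapes are illustrative assumptions.

```python
# Logistic contrastive loss over (anchor, positive, negative) triples.
import numpy as np

def contrastive_loss(f_x, f_pos, f_neg):
    """f_x, f_pos, f_neg: (N, d) representations of anchors, positives, negatives."""
    pos = np.sum(f_x * f_pos, axis=1)   # inner product with similar point
    neg = np.sum(f_x * f_neg, axis=1)   # inner product with negative sample
    return np.mean(np.log1p(np.exp(neg - pos)))

rng = np.random.default_rng(0)
x, p, n = (rng.normal(size=(32, 16)) for _ in range(3))
print(contrastive_loss(x, p, n))
```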
Compression techniques for deep neural network models are becoming very
important for the efficient execution of high-performance deep learning systems
on edge-computing devices. The concept of model compression is also important
for analyzing the generalization error of deep learning, known as the
compression-based error bound. However, there is still a huge gap between a
practically effective compression method and its rigorous background of
statistical learning theory. To resolve this issue, we develop a new
theoretical framework for model compression and propose a new pruning method
called {\it spectral pruning} based on this framework. We define the ``degrees
of freedom'' to quantify the intrinsic dimensionality of a model by using the
eigenvalue distribution of the covariance matrix across the internal nodes and
show that the compression ability is essentially controlled by this quantity.
Moreover, we present a sharp generalization error bound of the compressed model
and characterize the bias--variance tradeoff induced by the compression
procedure. We apply our method to several datasets to justify our theoretical
analyses and show the superiority of the proposed method. | [
"stat.ML",
"cs.LG"
] |
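A small sketch of the "degrees of freedom" quantity mentioned above, computed from the eigenvalue distribution of the covariance matrix of a layer's activations. The shrinkage parameter and estimator details are illustrative assumptions, not the paper's exact definition or pruning rule.

```python
# Effective dimensionality of a layer from its activation covariance spectrum.
import numpy as np

def degrees_of_freedom(activations, lam=1e-2):
    """activations: (num_samples, num_nodes) hidden outputs of one layer."""
    cov = np.cov(activations, rowvar=False)        # (nodes, nodes) covariance
    eig = np.linalg.eigvalsh(cov)                  # eigenvalue distribution
    return float(np.sum(eig / (eig + lam)))        # shrunk eigenvalue sum

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64)) @ rng.normal(size=(64, 64)) * 0.1
print(degrees_of_freedom(acts))
```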
Image compositing is the task of combining regions from different images to
compose a new image. A common use case is background replacement of portrait
images. To obtain high quality composites, professionals typically manually
perform multiple editing steps such as segmentation, matting and foreground
color decontamination, which is very time consuming even with sophisticated
photo editing tools. In this paper, we propose a new method which can
automatically generate high-quality image composites without any user input.
Our method can be trained end-to-end to optimize exploitation of contextual and
color information of both foreground and background images, where the
compositing quality is considered in the optimization. Specifically, inspired
by Laplacian pyramid blending, a dense-connected multi-stream fusion network is
proposed to effectively fuse the information from the foreground and background
images at different scales. In addition, we introduce a self-taught strategy to
progressively train from easy to complex cases to mitigate the lack of training
data. Experiments show that the proposed method can automatically generate
high-quality composites and outperforms existing methods both qualitatively and
quantitatively. | [
"cs.CV"
] |
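For context, here is the classic Laplacian pyramid blending that the fusion network above is inspired by, written with OpenCV. This is the standard hand-crafted technique, not the learned multi-stream method, and it assumes image sides divisible by 2**levels so pyramid sizes match.

```python
# Classic Laplacian pyramid blending of a foreground onto a background.
import cv2
import numpy as np

def laplacian_blend(fg, bg, mask, levels=4):
    """fg, bg, mask: float32 arrays of shape (H, W, 3); mask holds the alpha
    matte repeated over channels. H and W must be divisible by 2**levels."""
    gf, gb, gm = [fg], [bg], [mask]
    for _ in range(levels):                          # Gaussian pyramids
        gf.append(cv2.pyrDown(gf[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    out = None
    for i in range(levels, -1, -1):
        if i == levels:                              # coarsest: Gaussian level
            lf, lb = gf[i], gb[i]
        else:                                        # band-pass (Laplacian) levels
            lf = gf[i] - cv2.pyrUp(gf[i + 1])
            lb = gb[i] - cv2.pyrUp(gb[i + 1])
        layer = gm[i] * lf + (1.0 - gm[i]) * lb      # blend this frequency band
        out = layer if out is None else cv2.pyrUp(out) + layer
    return np.clip(out, 0.0, 1.0)
```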
Most graph convolutional neural networks (GCNs) perform poorly in graphs
where neighbors typically have different features/classes (heterophily) and
when stacking multiple layers (oversmoothing). These two seemingly unrelated
problems have been studied independently, but there is recent empirical
evidence that solving one problem may benefit the other. In this work, going
beyond empirical observations, we aim to: (1) propose a new perspective to
analyze the heterophily and oversmoothing problems under a unified theoretical
framework, (2) identify the common causes of the two problems based on the
proposed framework, and (3) propose simple yet effective strategies that
address the common causes. Focusing on the node classification task, we use
linear separability of node representations as an indicator to reflect the
performance of GCNs and we propose to study the linear separability by
analyzing the statistical change of the node representations in the graph
convolution. We find that the relative degree of a node (compared to its
neighbors) and the heterophily level of a node's neighborhood are the root
causes that influence the separability of node representations. Our analysis
suggests that: (1) Nodes with high heterophily always produce less separable
representations after graph convolution; (2) Even with low heterophily, degree
disparity between nodes can influence the network dynamics and result in a
pseudo-heterophily situation, which helps to explain oversmoothing. Based on
our insights, we propose simple modifications to the GCN architecture -- i.e.,
degree corrections and signed messages -- which alleviate the root causes of
these issues, and also show this empirically on 9 real networks. Compared to
other approaches, which tend to work well in one regime but fail in others, our
modified GCN model consistently performs well across all settings. | [
"cs.LG"
] |
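A NumPy sketch of the graph-convolution propagation step whose statistical effect is analyzed above, together with a naive "signed message" variant in which some edges carry weight -1. How signs would be chosen is left out, so this only illustrates the mechanism, not the proposed degree-corrected model.

```python
# Symmetric-normalized GCN propagation and a signed-message variant (sketch).
import numpy as np

def gcn_propagate(A, X):
    """Standard propagation D^{-1/2} (A + I) D^{-1/2} X."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

def signed_propagate(A_signed, X):
    """Same normalization, but A_signed may contain -1 entries so that messages
    from (presumed) heterophilous neighbours are subtracted rather than added."""
    A_hat = A_signed + np.eye(A_signed.shape[0])
    d = np.abs(A_hat).sum(axis=1)                    # degree from |edges|
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X
```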
Semi-supervised learning has recently been attracting attention as an
alternative to fully supervised models that require large pools of labeled
data. Moreover, optimizing a model for multiple tasks can provide better
generalizability than single-task learning. Leveraging self-supervision and
adversarial training, we propose a novel general-purpose semi-supervised,
multiple-task model---namely, self-supervised, semi-supervised, multitask
learning (S$^4$MTL)---for accomplishing two important tasks in medical imaging,
segmentation and diagnostic classification. Experimental results on chest and
spine X-ray datasets suggest that our S$^4$MTL model significantly outperforms
semi-supervised single task, semi/fully-supervised multitask, and
fully-supervised single task models, even with a 50\% reduction of class and
segmentation labels. We hypothesize that our proposed model can be effective in
tackling limited annotation problems for joint training, not only in medical
imaging domains, but also for general-purpose vision tasks. | [
"cs.CV"
] |
In reinforcement learning, wrappers are universally used to transform the
information that passes between a model and an environment. Despite their
ubiquity, no library exists with reasonable implementations of all popular
preprocessing methods. This leads to unnecessary bugs, code inefficiencies, and
wasted developer time. Accordingly, we introduce SuperSuit, a Python library
that includes all popular wrappers, as well as wrappers that can easily apply
lambda functions to the observations/actions/rewards. It is compatible with the standard
Gym environment specification, as well as the PettingZoo specification for
multi-agent environments. The library is available at
https://github.com/PettingZoo-Team/SuperSuit, and can be installed via pip. | [
"cs.LG",
"cs.AI"
] |
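A hypothetical usage sketch of wrapping a single-agent Gym environment with a few SuperSuit wrappers. The wrapper names follow SuperSuit's versioned naming convention, but exact names and signatures vary across releases, so treat this as illustrative and check the repository for your installed version.

```python
# Illustrative SuperSuit usage on an Atari Gym environment (names may vary by version).
import gym
import supersuit as ss

env = gym.make("PongNoFrameskip-v4")
env = ss.color_reduction_v0(env, mode="full")   # grayscale observations
env = ss.resize_v1(env, x_size=84, y_size=84)   # downscale frames
env = ss.frame_stack_v1(env, 4)                 # stack the last 4 frames
# reset/step as usual; the return signature depends on your Gym version
obs = env.reset()
```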
The task of detecting morphed face images has become highly relevant in
recent years to ensure the security of automatic verification systems based on
facial images, e.g. automated border control gates. Detection methods based on
Deep Neural Networks (DNN) have been shown to be very suitable to this end.
However, they do not provide transparency in their decision making, and it is not
clear how they distinguish between genuine and morphed face images. This is
particularly relevant for systems intended to assist a human operator, who
should be able to understand the reasoning. In this paper, we tackle this
problem and present Focused Layer-wise Relevance Propagation (FLRP). This
framework explains to a human inspector, at a precise pixel level, which image
regions are used by a Deep Neural Network to distinguish between a genuine and
a morphed face image. Additionally, we propose another framework to objectively
analyze the quality of our method and compare FLRP to other DNN
interpretability methods. This evaluation framework is based on removing
detected artifacts and analyzing the influence of these changes on the decision
of the DNN. In particular, if the DNN is uncertain in its decision or even
incorrect, FLRP performs much better in highlighting visible artifacts compared
to other methods. | [
"cs.CV",
"cs.CR",
"cs.LG"
] |
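A minimal sketch of the base Layer-wise Relevance Propagation rule (LRP-epsilon) for a single fully-connected layer, which FLRP builds on. This is the generic backward relevance step, not the authors' focused variant or its evaluation framework.

```python
# LRP-epsilon backward step for one dense layer (illustrative).
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """a: (d_in,) activations entering the layer, W: (d_in, d_out), b: (d_out,),
    R_out: (d_out,) relevance of the layer's outputs. Returns input relevance."""
    z = a @ W + b                                        # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)            # stabilizer
    s = R_out / z                                        # relevance per unit of z
    return a * (W @ s)                                   # relevance of each input unit

rng = np.random.default_rng(0)
a, W, b = rng.random(8), rng.normal(size=(8, 4)), rng.normal(size=4)
print(lrp_epsilon(a, W, b, rng.random(4)).shape)         # (8,)
```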
The options framework in reinforcement learning models the notion of a skill
or a temporally extended sequence of actions. The discovery of a reusable set
of skills has typically entailed building options that navigate to bottleneck
states. This work adopts a complementary approach, where we attempt to discover
options that navigate to landmark states. These states are prototypical
representatives of well-connected regions and can hence access the associated
region with relative ease. In this work, we propose Successor Options, which
leverages Successor Representations to build a model of the state space. The
intra-option policies are learnt using a novel pseudo-reward and the model
scales to high-dimensional spaces easily. Additionally, we propose an
Incremental Successor Options model that iterates between constructing
Successor Representations and building options, which is useful when robust
Successor Representations cannot be built solely from primitive actions. We
demonstrate the efficacy of our approach on a collection of grid-worlds, and on
the high-dimensional robotic control environment of Fetch. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
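A tabular sketch of learning a Successor Representation with TD updates, the building block the abstract leverages to model the state space. The toy transitions and step size are illustrative, and the paper's pseudo-reward and option construction are not shown.

```python
# Tabular Successor Representation learned from transitions via TD updates.
import numpy as np

def update_sr(psi, s, s_next, alpha=0.1, gamma=0.95):
    """psi: (S, S) successor matrix; psi[s] approximates the expected discounted
    future occupancy of every state, starting from s, under the behaviour policy."""
    one_hot = np.zeros(psi.shape[0])
    one_hot[s] = 1.0
    td_target = one_hot + gamma * psi[s_next]
    psi[s] += alpha * (td_target - psi[s])
    return psi

S = 5
psi = np.zeros((S, S))
transitions = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # toy cyclic chain
for _ in range(2000):
    for s, s_next in transitions:
        psi = update_sr(psi, s, s_next)
print(np.round(psi[0], 2))
```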
This paper focuses on two main issues. The first is the impact of similarity
search on learning the training sample in metric space and on searching based
on supervised learning classification. In particular, four metric-space
searches based on spatial information are introduced: the Chebyshev Distance
(CD), Bray-Curtis Distance (BCD), Manhattan Distance (MD) and Euclidean
Distance (ED) classifiers. The second issue investigates the effect of
combining multi-sensor images on supervised learning classification accuracy.
QuickBird multispectral (MS) and panchromatic (PAN) data have been used in this
study to demonstrate the enhancement and to assess the accuracy of the fused
image over the original images. The supervised classification results of the
fused image were better than those of the original QuickBird MS data, and the
ED classifier gave the best results among the four. | [
"cs.CV"
] |
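A sketch of the four distance metrics above used as nearest-centroid classifiers with SciPy's distance functions. The class centroids and pixel feature values are made up, standing in for mean spectral signatures estimated from training samples.

```python
# Nearest-centroid classification under CD, BCD, MD and ED distances.
import numpy as np
from scipy.spatial import distance

METRICS = {
    "CD": distance.chebyshev,     # Chebyshev
    "BCD": distance.braycurtis,   # Bray-Curtis
    "MD": distance.cityblock,     # Manhattan
    "ED": distance.euclidean,     # Euclidean
}

def nearest_centroid(x, centroids, metric="ED"):
    """x: (d,) pixel feature vector; centroids: dict class -> (d,) mean signature."""
    dist_fn = METRICS[metric]
    return min(centroids, key=lambda c: dist_fn(x, centroids[c]))

centroids = {"water": np.array([0.1, 0.2, 0.6, 0.8]),
             "vegetation": np.array([0.3, 0.5, 0.4, 0.2])}
pixel = np.array([0.12, 0.25, 0.55, 0.75])
print({m: nearest_centroid(pixel, centroids, m) for m in METRICS})
```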
Despite the remarkable accuracy of deep neural networks in object detection,
they are costly to train and scale due to supervision requirements.
Particularly, learning more object categories typically requires proportionally
more bounding box annotations. Weakly supervised and zero-shot learning
techniques have been explored to scale object detectors to more categories with
less supervision, but they have not been as successful and widely adopted as
supervised models. In this paper, we put forth a novel formulation of the
object detection problem, namely open-vocabulary object detection, which is
more general, more practical, and more effective than weakly supervised and
zero-shot approaches. We propose a new method to train object detectors using
bounding box annotations for a limited set of object categories, as well as
image-caption pairs that cover a larger variety of objects at a significantly
lower cost. We show that the proposed method can detect and localize objects
for which no bounding box annotation is provided during training, at a
significantly higher accuracy than zero-shot approaches. Meanwhile, objects
with bounding box annotation can be detected almost as accurately as supervised
methods, which is significantly better than weakly supervised baselines.
Accordingly, we establish a new state of the art for scalable object detection. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Synthetic data generation becomes prevalent as a solution to privacy leakage
and data shortage. Generative models are designed to generate a realistic
synthetic dataset, which can precisely express the data distribution for the
real dataset. Generative adversarial networks (GANs), which have achieved
great success in computer vision, are naturally used for synthetic data
generation. Though prior works have demonstrated great progress, most of them
learn the correlations in the data distributions rather than the true
processes by which the datasets are naturally generated. Correlation is not
reliable, as it is a statistical notion that only captures linear dependencies
and is easily affected by dataset bias. Causality, which encodes the
underlying factors of how the real data are naturally generated, is more
reliable than correlation. In this work, we propose a causal model named
Causal Tabular Generative Neural Network (Causal-TGAN) to generate synthetic
tabular data using the tabular data's causal information. Extensive experiments
on both simulated and real datasets demonstrate the superior performance of
our method when the true causal graph is given, and comparable performance
when an estimated causal graph is used. | [
"cs.LG",
"cs.AI"
] |
The focus in machine learning has branched beyond training classifiers on a
single task to investigating how previously acquired knowledge in a source
domain can be leveraged to facilitate learning in a related target domain,
known as inductive transfer learning. Three active lines of research have
independently explored transfer learning using neural networks. In weight
transfer, a model trained on the source domain is used as an initialization
point for a network to be trained on the target domain. In deep metric
learning, the source domain is used to construct an embedding that captures
class structure in both the source and target domains. In few-shot learning,
the focus is on generalizing well in the target domain based on a limited
number of labeled examples. We compare state-of-the-art methods from these
three paradigms and also explore hybrid adapted-embedding methods that use
limited target-domain data to fine-tune embeddings constructed from
source-domain data. We conduct a systematic comparison of methods in a variety
of domains, varying the number of labeled instances available in the target
domain ($k$), as well as the number of target-domain classes. We reach three
principal conclusions: (1) Deep embeddings are far superior, compared to weight
transfer, as a starting point for inter-domain transfer or model re-use. (2) Our
hybrid methods robustly outperform every few-shot learning and every deep
metric learning method previously proposed, with a mean error reduction of 34%
over state-of-the-art. (3) Among loss functions for discovering embeddings, the
histogram loss (Ustinova & Lempitsky, 2016) is most robust. We hope our results
will motivate a unification of research in weight transfer, deep metric
learning, and few-shot learning. | [
"cs.LG",
"stat.ML"
] |
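As a rough illustration of the embedding-based transfer recipe the comparison above favours, here is a nearest-class-mean classifier in a fixed embedding space. The random linear "embedding" is a stand-in for a network trained on the source domain, not any specific method evaluated in the paper.

```python
# Few-shot classification by nearest class mean in a learned embedding space.
import numpy as np

def nearest_class_mean(embed, support_x, support_y, query_x):
    """support_x: (n, d_in) labelled target examples, support_y: (n,) labels,
    query_x: (m, d_in). embed maps raw inputs to the embedding space."""
    z_support, z_query = embed(support_x), embed(query_x)
    classes = np.unique(support_y)
    means = np.stack([z_support[support_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(z_query[:, None, :] - means[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# Toy usage with a random linear "embedding" standing in for a trained network.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))
embed = lambda x: x @ W
X, y = rng.normal(size=(6, 10)), np.array([0, 0, 0, 1, 1, 1])
print(nearest_class_mean(embed, X, y, rng.normal(size=(3, 10))))
```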