abstract (string, 13–4.33k chars) | field (sequence) | task (sequence) | method (sequence) | dataset (sequence) | metric (sequence) | title (string, 10–194 chars) |
---|---|---|---|---|---|---|
We combine two of the most popular approaches to automated Grammatical Error
Correction (GEC): GEC based on Statistical Machine Translation (SMT) and GEC
based on Neural Machine Translation (NMT). The hybrid system achieves new
state-of-the-art results on the CoNLL-2014 and JFLEG benchmarks. This GEC
system preserves the accuracy of SMT output and, at the same time, generates
more fluent sentences, as is typical for NMT. Our analysis shows that the
created systems are closer to reaching human-level performance than any other
GEC system reported so far. | [] | [
"Grammatical Error Correction",
"Machine Translation"
] | [] | [
"CoNLL-2014 Shared Task (10 annotations)",
"CoNLL-2014 Shared Task",
"JFLEG"
] | [
"GLEU",
"F0.5"
] | Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation |
Actionness was introduced to quantify the likelihood of containing a generic
action instance at a specific location. Accurate and efficient estimation of
actionness is important in video analysis and may benefit other relevant tasks
such as action recognition and action detection. This paper presents a new deep
architecture for actionness estimation, called hybrid fully convolutional
network (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN
(M-FCN). These two FCNs leverage the strong capacity of deep models to estimate
actionness maps from the perspectives of static appearance and dynamic motion,
respectively. In addition, the fully convolutional nature of H-FCN allows it to
efficiently process videos of arbitrary sizes. Experiments are conducted on
the challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the
effectiveness of H-FCN on actionness estimation; the results demonstrate that our
method achieves superior performance to previous ones. Moreover, we apply the
estimated actionness maps on action proposal generation and action detection.
Our actionness maps advance the current state-of-the-art performance of these
tasks substantially. | [] | [
"Action Detection",
"Action Recognition",
"Temporal Action Localization"
] | [] | [
"J-HMDB-21"
] | [
"Frame-mAP"
] | Actionness Estimation Using Hybrid Fully Convolutional Networks |
We propose a novel multi-grained attention network (MGAN) model for aspect-level sentiment classification. Existing approaches mostly adopt a coarse-grained attention mechanism, which may cause information loss if the aspect has multiple words or a larger context. We propose a fine-grained attention mechanism, which can capture the word-level interaction between aspect and context. We then leverage the fine-grained and coarse-grained attention mechanisms to compose the MGAN framework. Moreover, unlike previous works which train each aspect with its context separately, we design an aspect alignment loss to depict the aspect-level interactions among the aspects that have the same context. We evaluate the proposed approach on three datasets: the laptop and restaurant datasets are from SemEval 2014, and the third is a Twitter dataset. Experimental results show that the multi-grained attention network consistently outperforms the state-of-the-art methods on all three datasets. We also conduct experiments to evaluate the effectiveness of the aspect alignment loss, which indicate that the aspect-level interactions can bring extra useful information and further improve the performance. | [] | [
"Aspect-Based Sentiment Analysis",
"Sentiment Analysis"
] | [] | [
"SemEval 2014 Task 4 Sub Task 2"
] | [
"Laptop (Acc)",
"Restaurant (Acc)",
"Mean Acc (Restaurant + Laptop)"
] | Multi-grained Attention Network for Aspect-Level Sentiment Classification |
In this paper we present state-of-the-art (SOTA) performance on the LibriSpeech corpus with two novel neural network architectures, a multistream CNN for acoustic modeling and a self-attentive simple recurrent unit (SRU) for language modeling. In the hybrid ASR framework, the multistream CNN acoustic model processes an input of speech frames in multiple parallel pipelines where each stream has a unique dilation rate for diversity. Trained with the SpecAugment data augmentation method, it achieves relative word error rate (WER) improvements of 4% on test-clean and 14% on test-other. We further improve the performance via N-best rescoring using a 24-layer self-attentive SRU language model, achieving WERs of 1.75% on test-clean and 4.46% on test-other. | [] | [
"Data Augmentation",
"Language Modelling",
"Speech Recognition"
] | [] | [
"LibriSpeech test-other",
"LibriSpeech test-clean"
] | [
"Word Error Rate (WER)"
] | ASAPP-ASR: Multistream CNN and Self-Attentive SRU for SOTA Speech Recognition |
We introduce a novel parameterized convolutional neural network for aspect level sentiment classification. Using parameterized filters and parameterized gates, we incorporate aspect information into convolutional neural networks (CNN). Experiments demonstrate that our parameterized filters and parameterized gates effectively capture the aspect-specific features, and our CNN-based models achieve excellent results on SemEval 2014 datasets. | [] | [
"Sentiment Analysis"
] | [] | [
"SemEval 2014 Task 4 Sub Task 2"
] | [
"Laptop (Acc)",
"Restaurant (Acc)",
"Mean Acc (Restaurant + Laptop)"
] | Parameterized Convolutional Neural Networks for Aspect Level Sentiment Classification |
Aspect sentiment classification (ASC) is a fundamental task in sentiment analysis. Given an aspect/target and a sentence, the task classifies the sentiment polarity expressed on the target in the sentence. Memory networks (MNs) have been used for this task recently and have achieved state-of-the-art results. In MNs, the attention mechanism plays a crucial role in detecting the sentiment context for the given target. However, we found an important problem with the current MNs in performing the ASC task. Simply improving the attention mechanism will not solve it. The problem is referred to as target-sensitive sentiment, which means that the sentiment polarity of the (detected) context is dependent on the given target and cannot be inferred from the context alone. To tackle this problem, we propose target-sensitive memory networks (TMNs). Several alternative techniques are designed for the implementation of TMNs, and their effectiveness is experimentally evaluated. | [] | [
"Aspect-Based Sentiment Analysis",
"Sentiment Analysis"
] | [] | [
"SemEval 2014 Task 4 Sub Task 2"
] | [
"Laptop (Acc)",
"Restaurant (Acc)",
"Mean Acc (Restaurant + Laptop)"
] | Target-Sensitive Memory Networks for Aspect Sentiment Classification |
Annotation errors and bias are inevitable among different facial expression datasets due to the subjectivity of annotating facial expressions. Owing to the inconsistent annotations, the performance of existing facial expression recognition (FER) methods cannot keep improving when the training set is enlarged by merging multiple datasets. To address the inconsistency, we propose an Inconsistent Pseudo Annotations to Latent Truth (IPA2LT) framework to train an FER model from multiple inconsistently labeled datasets and large-scale unlabeled data. In IPA2LT, we assign each sample more than one label, using human annotations or model predictions. Then, we propose an end-to-end LTNet with a scheme for discovering the latent truth from the inconsistent pseudo labels and the input face images. To our knowledge, IPA2LT is the first work to solve the training problem with inconsistently labeled FER datasets. Experiments on synthetic data validate the effectiveness of the proposed method in learning from inconsistent labels. We also conduct extensive experiments in FER and show that our method outperforms other state-of-the-art and alternative methods under a rigorous evaluation protocol involving 7 FER datasets. | [] | [
"Facial Expression Recognition"
] | [] | [
"AffectNet"
] | [
"Accuracy (7 emotion)",
"Accuracy (8 emotion)"
] | Facial Expression Recognition with Inconsistently Annotated Datasets |
ConvNets achieve good results when trained on clean data, but learning from noisy labels significantly degrades performance and remains challenging. Unlike previous works, which are constrained by many conditions that make them infeasible for real noisy cases, this work presents a novel deep self-learning framework to train a robust network on real noisy datasets without extra supervision. The proposed approach has several appealing benefits. (1) Different from most existing work, it does not rely on any assumption on the distribution of the noisy labels, making it robust to real noise. (2) It does not need extra clean supervision or an auxiliary network to help training. (3) A self-learning framework is proposed to train the network in an iterative end-to-end manner, which is effective and efficient. Extensive experiments on challenging benchmarks such as Clothing1M and Food101-N show that our approach outperforms its counterparts in all empirical settings. | [] | [
"Image Classification",
"Learning with noisy labels"
] | [] | [
"Food-101N",
"Clothing1M"
] | [
"Accuracy"
] | Deep Self-Learning From Noisy Labels |
Label noise is increasingly prevalent in datasets acquired from noisy channels. Existing approaches that detect and remove label noise generally rely on some form of supervision, which is not scalable and is error-prone. In this paper, we propose NoiseRank for unsupervised label noise reduction using Markov Random Fields (MRF). We construct a dependence model to estimate the posterior probability of an instance being incorrectly labeled given the dataset, and rank instances based on their estimated probabilities. Our method 1) does not require supervision from ground-truth labels or priors on the label or noise distribution; 2) is interpretable by design, enabling transparency in label noise removal; and 3) is agnostic to the classifier architecture/optimization framework and content modality. These advantages enable wide applicability in real noise settings, unlike prior works constrained by one or more conditions. NoiseRank improves state-of-the-art classification on Food101-N (~20% noise), and is effective on the high-noise Clothing-1M (~40% noise). | [] | [
"Image Classification"
] | [] | [
"Clothing1M"
] | [
"Accuracy"
] | NoiseRank: Unsupervised Label Noise Reduction with Dependence Models |
Learning powerful data embeddings has become a centerpiece in machine learning, especially in the natural language processing and computer vision domains. The crux of these embeddings is that they are pretrained on huge corpora of data in an unsupervised fashion, sometimes aided with transfer learning. However, in the graph learning domain, embeddings learned through existing graph neural networks (GNNs) are currently task-dependent and thus cannot be shared across different datasets. In this paper, we present the first powerful and theoretically guaranteed graph neural network that is designed to learn task-independent graph embeddings, hereafter referred to as deep universal graph embedding (DUGNN). Our DUGNN model incorporates a novel graph neural network (as a universal graph encoder) and leverages rich Graph Kernels (as a multi-task graph decoder) for both unsupervised learning and (task-specific) adaptive supervised learning. By learning task-independent graph embeddings across diverse datasets, DUGNN also reaps the benefits of transfer learning. Through extensive experiments and ablation studies, we show that the proposed DUGNN model consistently outperforms both existing state-of-the-art GNN models and Graph Kernels, improving accuracy by 3%-8% on graph classification benchmark datasets. | [] | [
"Graph Classification",
"Graph Embedding",
"Graph Learning",
"Transfer Learning"
] | [] | [
"COLLAB",
"ENZYMES",
"IMDb-B",
"PROTEINS",
"D&D",
"IMDb-M",
"PTC"
] | [
"Accuracy"
] | Learning Universal Graph Neural Network Embeddings With Aid Of Transfer Learning |
Graph learning is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative, but processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been introduced. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Experiments reveal that our method is more accurate than state-of-the-art graph kernels and graph CNNs on 4 out of 6 real-world datasets (with and without continuous node attributes), and close elsewhere. Our approach is also preferable to graph kernels in terms of time complexity. Code and data are publicly available. | [] | [
"Graph Classification",
"Graph Learning"
] | [] | [
"COLLAB",
"RE-M12K",
"IMDb-B",
"RE-M5K"
] | [
"Accuracy"
] | Graph Classification with 2D Convolutional Neural Networks |
Multi-choice Machine Reading Comprehension (MMRC) aims to select the correct answer from a set of options based on a given passage and question. Due to the task-specific nature of MMRC, it is non-trivial to transfer knowledge from other MRC tasks such as SQuAD and DREAM. In this paper, we reconstruct multi-choice as single-choice by training a binary classifier to distinguish whether a certain answer is correct, and then select the option with the highest confidence score. We build our model upon the ALBERT-xxlarge model and evaluate it on the RACE dataset. During training, we adopt an AutoML strategy to tune the parameters. Experimental results show that the single-choice formulation is better than multi-choice. In addition, by transferring knowledge from other kinds of MRC tasks, our model achieves new state-of-the-art results in both single and ensemble settings. | [] | [
"AutoML",
"Machine Reading Comprehension",
"Reading Comprehension",
"Transfer Learning"
] | [] | [
"RACE"
] | [
"Accuracy"
] | Improving Machine Reading Comprehension with Single-choice Decision and Transfer Learning |
Visual dialog is a challenging vision-language task, which requires the agent
to answer multi-round questions about an image. It typically needs to address
two major problems: (1) How to answer visually-grounded questions, which is the
core challenge in visual question answering (VQA); (2) How to infer the
co-reference between questions and the dialog history. An example of visual
co-reference is: pronouns (\eg, ``they'') in the question (\eg, ``Are they on
or off?'') are linked with nouns (\eg, ``lamps'') appearing in the dialog
history (\eg, ``How many lamps are there?'') and the object grounded in the
image. In this work, to resolve the visual co-reference for visual dialog, we
propose a novel attention mechanism called Recursive Visual Attention (RvA).
Specifically, our dialog agent browses the dialog history until the agent has
sufficient confidence in the visual co-reference resolution, and refines the
visual attention recursively. The quantitative and qualitative experimental
results on the large-scale VisDial v0.9 and v1.0 datasets demonstrate that the
proposed RvA not only outperforms the state-of-the-art methods, but also
achieves reasonable recursion and interpretable attention maps without
additional annotations. The code is available at
\url{https://github.com/yuleiniu/rva}. | [] | [
"Question Answering",
"Visual Dialog",
"Visual Question Answering"
] | [] | [
"Visual Dialog v1.0 test-std",
"VisDial v0.9 val"
] | [
"MRR (x 100)",
"R@10",
"NDCG (x 100)",
"R@5",
"Mean Rank",
"MRR",
"Mean",
"R@1"
] | Recursive Visual Attention in Visual Dialog |
Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019). However, current phrase retrieval models heavily depend on their sparse representations while still underperforming retriever-reader approaches. In this work, we show for the first time that we can learn dense phrase representations alone that achieve much stronger performance in open-domain QA. Our approach includes (1) learning query-agnostic phrase representations via question generation and distillation; (2) novel negative-sampling methods for global normalization; (3) query-side fine-tuning for transfer learning. On five popular QA datasets, our model DensePhrases improves previous phrase retrieval models by 15%-25% absolute accuracy and matches the performance of state-of-the-art retriever-reader models. Our model is easy to parallelize due to pure dense representations and processes more than 10 questions per second on CPUs. Finally, we directly use our pre-indexed dense phrase representations for two slot filling tasks, showing the promise of utilizing DensePhrases as a dense knowledge base for downstream tasks. | [] | [
"Open-Domain Question Answering",
"Question Answering",
"Question Generation",
"Slot Filling",
"Transfer Learning"
] | [] | [
"Natural Questions (long)",
"SQuAD1.1 dev",
"KILT: Zero Shot RE",
"KILT: T-REx"
] | [
"R-Prec",
"Recall@5",
"F1",
"KILT-F1",
"Accuracy",
"EM",
"KILT-AC"
] | Learning Dense Representations of Phrases at Scale |
Anomaly detection is a challenging task and is usually formulated as an unsupervised learning problem due to the unexpectedness of anomalies. This paper proposes a simple yet powerful approach to this issue, which is implemented in the student-teacher framework for its advantages but substantially extends it in terms of both accuracy and efficiency. Given a strong model pre-trained on image classification as the teacher, we distill the knowledge into a single student network with an identical architecture to learn the distribution of anomaly-free images, and this one-step transfer preserves the crucial clues as much as possible. Moreover, we integrate a multi-scale feature matching strategy into the framework, and this hierarchical feature alignment enables the student network to receive a mixture of multi-level knowledge from the feature pyramid under better supervision, thus allowing it to detect anomalies of various sizes. The difference between the feature pyramids generated by the two networks serves as a scoring function indicating the probability of an anomaly occurring. Due to such operations, our approach achieves accurate and fast pixel-level anomaly detection. Very competitive results are delivered on three major benchmarks, significantly superior to state-of-the-art methods. In addition, it makes inferences at a very high speed (100 FPS for images of size 256x256), at least dozens of times faster than the latest counterparts. | [] | [
"Anomaly Detection",
"Image Classification",
"Unsupervised Anomaly Detection"
] | [] | [
"MVTec AD"
] | [
"Detection AUROC",
"Segmentation AUROC"
] | Student-Teacher Feature Pyramid Matching for Unsupervised Anomaly Detection |
We present deep communicating agents in an encoder-decoder architecture to
address the challenges of representing a long document for abstractive
summarization. With deep communicating agents, the task of encoding a long text
is divided across multiple collaborating agents, each in charge of a subsection
of the input text. These encoders are connected to a single decoder, trained
end-to-end using reinforcement learning to generate a focused and coherent
summary. Empirical results demonstrate that multiple communicating encoders
lead to a higher quality summary compared to several strong baselines,
including those based on a single encoder or multiple non-communicating
encoders. | [] | [
"Abstractive Text Summarization"
] | [] | [
"CNN / Daily Mail"
] | [
"ROUGE-L",
"ROUGE-1",
"ROUGE-2"
] | Deep Communicating Agents for Abstractive Summarization |
Existing pose estimation approaches fall into two categories: single-stage and multi-stage methods. While multi-stage methods are seemingly more suited for the task, their performance in current practice is not as good as that of single-stage methods. This work studies this issue. We argue that the unsatisfactory performance of current multi-stage methods comes from insufficiencies in various design choices. We propose several improvements, including the single-stage module design, cross-stage feature aggregation, and coarse-to-fine supervision. The resulting method establishes a new state of the art on both the MS COCO and MPII Human Pose datasets, justifying the effectiveness of a multi-stage architecture. The source code is publicly available for further research. | [] | [
"Keypoint Detection",
"Pose Estimation"
] | [] | [
"COCO",
"COCO test-challenge",
"COCO minival",
"MPII Human Pose",
"COCO test-dev"
] | [
"ARM",
"Test AP",
"APM",
"AR75",
"PCKh-0.5",
"AR50",
"ARL",
"AP75",
"AP",
"APL",
"AP50",
"AR"
] | Rethinking on Multi-Stage Networks for Human Pose Estimation |
Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. Also, due to the lack of good quantitative metrics for image completion, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images compared to real images via linear separability in a feature space. Experiments demonstrate superior performance in terms of both quality and diversity over state-of-the-art methods in free-form image completion and easy generalization to image-to-image translation. Code is available at https://github.com/zsyzzsoft/co-mod-gan. | [] | [] | [] | [
"Places2",
"FFHQ 512 x 512"
] | [
"FID",
"P-IDS",
"U-IDS"
] | Large Scale Image Completion via Co-Modulated Generative Adversarial Networks |
Part-based approaches for fine-grained recognition do not show the expected performance gain over global methods, despite being able to explicitly focus on small details that are relevant for distinguishing highly similar classes. We assume that part-based methods suffer from the lack of a representation of local features that is invariant to the order of parts and can handle a varying number of visible parts appropriately. The order of parts is artificial and often only given by ground-truth annotations, whereas viewpoint variations and occlusions result in parts that are not observable. Therefore, we propose integrating a Fisher vector encoding of part features into convolutional neural networks. The parameters for this encoding are estimated jointly with those of the neural network in an end-to-end manner. Our approach improves state-of-the-art accuracies for bird species classification on CUB-200-2011 from 90.40\% to 90.95\%, on NA-Birds from 89.20\% to 90.30\%, and on Birdsnap from 84.30\% to 86.97\%. | [] | [
"Fine-Grained Image Classification"
] | [] | [
" CUB-200-2011"
] | [
"Accuracy"
] | End-to-end Learning of a Fisher Vector Encoding for Part Features in Fine-grained Recognition |
We propose OmniPose, a single-pass, end-to-end trainable framework, that achieves state-of-the-art results for multi-person pose estimation. Using a novel waterfall module, the OmniPose architecture leverages multi-scale feature representations that increase the effectiveness of backbone feature extractors, without the need for post-processing. OmniPose incorporates contextual information across scales and joint localization with Gaussian heatmap modulation at the multi-scale feature extractor to estimate human pose with state-of-the-art accuracy. The multi-scale representations, obtained by the improved waterfall module in OmniPose, leverage the efficiency of progressive filtering in the cascade architecture, while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Our results on multiple datasets demonstrate that OmniPose, with an improved HRNet backbone and waterfall module, is a robust and efficient architecture for multi-person pose estimation that achieves state-of-the-art results. | [] | [] | [] | [
"COCO",
"UPenn Action",
"Leeds Sports Poses",
"MPII Human Pose"
] | [
"Validation AP",
"PCKh-0.5",
"Mean PCK@0.2",
"AP",
"PCK"
] | OmniPose: A Multi-Scale Framework for Multi-Person Pose Estimation |
The recent research in semi-supervised learning (SSL) is mostly dominated by consistency regularization based methods which achieve strong performance. However, they heavily rely on domain-specific data augmentations, which are not easy to generate for all data modalities. Pseudo-labeling (PL) is a general SSL approach that does not have this constraint but performs relatively poorly in its original formulation. We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models; these predictions generate many incorrect pseudo-labels, leading to noisy training. We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process. Furthermore, UPS generalizes the pseudo-labeling process, allowing for the creation of negative pseudo-labels; these negative pseudo-labels can be used for multi-label classification as well as negative learning to improve the single-label classification. We achieve strong performance when compared to recent SSL methods on the CIFAR-10 and CIFAR-100 datasets. Also, we demonstrate the versatility of our method on the video dataset UCF-101 and the multi-label dataset Pascal VOC. | [] | [
"Multi-Label Classification",
"Semi-Supervised Image Classification",
"Semi-Supervised Video Classification"
] | [] | [
"CIFAR-100, 4000 Labels",
"cifar-100, 10000 Labels",
"CIFAR-10, 4000 Labels",
"CIFAR-10, 1000 Labels"
] | [
"Accuracy"
] | In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning |
The paper discusses a pooling mechanism to induce subsampling in graph-structured data and introduces it as a component of a graph convolutional neural network. The pooling mechanism builds on the Non-Negative Matrix Factorization (NMF) of a matrix representing node adjacency and node similarity, as adaptively obtained through the vertex embeddings learned by the model. Such a mechanism is applied to obtain an incrementally coarser graph where nodes are adaptively pooled into communities based on the outcomes of the non-negative factorization. The empirical analysis on graph classification benchmarks shows how such a coarsening process yields significant improvements in the predictive performance of the model with respect to its non-pooled counterpart. | [] | [
"Graph Classification"
] | [] | [
"COLLAB",
"ENZYMES",
"D&D",
"PROTEINS",
"NCI1"
] | [
"Accuracy"
] | A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks |
This paper presents results of our experiments for the next utterance ranking
on the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog
corpus. First, we use an in-house implementation of previously reported models
to do an independent evaluation using the same data. Second, we evaluate the
performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we
create an ensemble by averaging predictions of multiple models. The ensemble
further improves the performance and it achieves a state-of-the-art result for
the next utterance ranking on this dataset. Finally, we discuss our future
plans using this corpus. | [] | [
"Conversational Response Selection"
] | [] | [
"Ubuntu Dialogue (v1, Ranking)"
] | [
"R10@1",
"R10@5",
"R2@1",
"R10@2"
] | Improved Deep Learning Baselines for Ubuntu Corpus Dialogs |
Graph kernels have attracted a lot of attention during the last decade, and
have evolved into a rapidly developing branch of learning on structured data.
During the past 20 years, the considerable research activity that occurred in
the field resulted in the development of dozens of graph kernels, each focusing
on specific structural properties of graphs. Graph kernels have proven
successful in a wide range of domains, ranging from social networks to
bioinformatics. The goal of this survey is to provide a unifying view of the
literature on graph kernels. In particular, we present a comprehensive overview
of a wide range of graph kernels. Furthermore, we perform an experimental
evaluation of several of those kernels on publicly available datasets, and
provide a comparative study. Finally, we discuss key applications of graph
kernels, and outline some challenges that remain to be addressed. | [] | [
"Graph Classification"
] | [] | [
"PROTEINS",
"NCI1"
] | [
"Accuracy"
] | Graph Kernels: A Survey |
With the advantage of high mobility, Unmanned Aerial Vehicles (UAVs) are used
to fuel numerous important applications in computer vision, delivering more
efficiency and convenience than surveillance cameras with fixed camera angle,
scale, and view. However, very few UAV datasets have been proposed, and they
focus only on a specific task such as visual tracking or object detection in
relatively constrained scenarios. Consequently, it is of great importance to
develop an unconstrained UAV benchmark to boost related research. In this
paper, we construct a new UAV benchmark focusing on complex scenarios with new
levels of challenge. Selected from 10 hours of raw video, about 80,000
representative frames are fully annotated with bounding boxes as well as up to
14 kinds of attributes (e.g., weather condition, flying altitude, camera view,
vehicle category, and occlusion) for three fundamental computer vision tasks:
object detection, single object tracking, and multiple object tracking. Then, a
detailed quantitative study is performed using the most recent state-of-the-art
algorithms for each task. Experimental results show that the current
state-of-the-art methods perform relatively worse on our dataset, due to the new
challenges that appear in UAV-based real scenes, e.g., high density, small
objects, and camera motion. To our knowledge, our work is the first to explore
such issues in unconstrained scenes comprehensively. | [] | [
"Multiple Object Tracking",
"Object Detection",
"Object Tracking",
"Visual Tracking"
] | [] | [
"UAVDT"
] | [
"mAP"
] | The Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking |
Retrieving information from an online search engine is the first and most important step in many data mining tasks. Most of the search engines currently available on the web, including all social media platforms, are black-boxes (a.k.a. opaque) supporting short keyword queries. In these settings, retrieving all posts and comments discussing a particular news item automatically and at large scale is a challenging task. In this paper, we propose a method for generating short keyword queries given a prototype document. The proposed iterative query selection (IQS) algorithm interacts with the opaque search engine to iteratively improve the query. It is evaluated on the Twitter TREC Microblog 2012 and TREC-COVID 2019 datasets, showing superior performance compared to the state of the art. IQS is applied to automatically collect a large-scale fake news dataset of about 70K true and fake news items. The dataset, publicly available for research, includes more than 22M accounts and 61M tweets in Twitter-approved format. We demonstrate the usefulness of the dataset for the fake news detection task, achieving state-of-the-art performance. | [] | [
"Fake News Detection"
] | [] | [
"LIAR"
] | [
"10%"
] | Fake News Data Collection and Classification: Iterative Query Selection for Opaque Search Engines with Pseudo Relevance Feedback |
Facial aging and facial rejuvenation analyze a given face photograph to
predict a future look or estimate a past look of the person. To achieve this,
it is critical to preserve human identity and the corresponding aging
progression and regression with high accuracy. However, existing methods cannot
simultaneously handle these two objectives well. We propose a novel generative
adversarial network based approach, named the Conditional Multi-Adversarial
AutoEncoder with Ordinal Regression (CMAAE-OR). It utilizes an age estimation
technique to control the aging accuracy and takes a high-level feature
representation to preserve personalized identity. Specifically, the face is
first mapped to a latent vector through a convolutional encoder. The latent
vector is then projected onto the face manifold conditional on the age through
a deconvolutional generator. The latent vector preserves personalized face
features and the age controls facial aging and rejuvenation. A discriminator
and an ordinal regression are imposed on the encoder and the generator in
tandem, making the generated face images more photorealistic while
simultaneously exhibiting desirable aging effects. Besides, a high-level
feature representation is utilized to preserve personalized identity of the
generated face. Experiments on two benchmark datasets demonstrate appealing
performance of the proposed method over the state-of-the-art. | [] | [
"Age Estimation",
"Regression"
] | [] | [
"FGNET",
"MORPH"
] | [
"MAE"
] | Facial Aging and Rejuvenation by Conditional Multi-Adversarial Autoencoder with Ordinal Regression |
The Neural Autoregressive Distribution Estimator (NADE) and its real-valued
version RNADE are competitive density models of multidimensional data across a
variety of domains. These models use a fixed, arbitrary ordering of the data
dimensions. One can easily condition on variables at the beginning of the
ordering, and marginalize out variables at the end of the ordering, however
other inference tasks require approximate inference. In this work we introduce
an efficient procedure to simultaneously train a NADE model for each possible
ordering of the variables, by sharing parameters across all these models. We
can thus use the most convenient model for each inference task at hand, and
ensembles of such models with different orderings are immediately available.
Moreover, unlike the original NADE, our training procedure scales to deep
models. Empirically, ensembles of Deep NADE models obtain state of the art
density estimation performance. | [] | [
"Density Estimation",
"Image Generation"
] | [] | [
"Binarized MNIST"
] | [
"nats"
] | A Deep and Tractable Density Estimator |
In this paper we describe a method to perform sequence-discriminative training of neural network acoustic models without the need for frame-level cross-entropy pre-training. We use the lattice-free version of the maximum mutual information
(MMI) criterion: LF-MMI. To make its computation feasible, we use a phone n-gram language model in place of the word language model. To further reduce its space and time complexity, we compute the objective function using neural network outputs at one third the standard frame rate. These changes enable us to perform the computation for the forward-backward algorithm on GPUs. Further, the reduced output frame rate also provides a significant speed-up during decoding.
We present results on 5 different LVCSR tasks with training data ranging from 100 to 2100 hours. Models trained with LF-MMI provide a relative word error rate reduction of ∼11.5% over those trained with the cross-entropy objective function, and ∼8% over those trained with cross-entropy and sMBR objective functions. A further relative reduction of ∼2.5% can be obtained by fine-tuning these models with the word-lattice-based sMBR objective function. | [] | [
"Language Modelling",
"Large Vocabulary Continuous Speech Recognition",
"Speech Recognition"
] | [] | [
"WSJ eval92"
] | [
"Word Error Rate (WER)"
] | Purely sequence-trained neural networks for ASR based on lattice-free MMI |
Recent findings indicate that over-parametrization, while crucial for
successfully training deep neural networks, also introduces large amounts of
redundancy. Tensor methods have the potential to efficiently parametrize
over-complete representations by leveraging this redundancy. In this paper, we
propose to fully parametrize Convolutional Neural Networks (CNNs) with a single
high-order, low-rank tensor. Previous works on network tensorization have
focused on parametrizing individual layers (convolutional or fully connected)
only, and perform the tensorization layer-by-layer separately. In contrast, we
propose to jointly capture the full structure of a neural network by
parametrizing it with a single high-order tensor, the modes of which represent
each of the architectural design parameters of the network (e.g. number of
convolutional blocks, depth, number of stacks, input features, etc). This
parametrization allows us to regularize the whole network and drastically reduce
the number of parameters. Our model is end-to-end trainable and the low-rank
structure imposed on the weight tensor acts as an implicit regularization. We
study the case of networks with rich structure, namely Fully Convolutional
Networks (FCNs), which we propose to parametrize with a single 8th-order
tensor. We show that our approach can achieve superior performance with small
compression rates, and attain high compression rates with negligible drop in
accuracy for the challenging task of human pose estimation. | [] | [
"Pose Estimation"
] | [] | [
"MPII Human Pose"
] | [
"PCKh-0.5"
] | T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor |
Aggregating extra features has been considered as an effective approach to
boost traditional pedestrian detection methods. However, there is still a lack
of studies on whether and how CNN-based pedestrian detectors can benefit from
these extra features. The first contribution of this paper is exploring this
issue by aggregating extra features into CNN-based pedestrian detection
framework. Through extensive experiments, we evaluate the effects of different
kinds of extra features quantitatively. Moreover, we propose a novel network
architecture, namely HyperLearner, to jointly learn pedestrian detection as
well as the given extra feature. By multi-task training, HyperLearner is able
to utilize the information of given features and improve detection performance
without extra inputs in inference. The experimental results on multiple
pedestrian benchmarks validate the effectiveness of the proposed HyperLearner. | [] | [
"Pedestrian Detection"
] | [] | [
"Caltech"
] | [
"Reasonable Miss Rate"
] | What Can Help Pedestrian Detection? |
Articulated human pose estimation is a fundamental yet challenging task in
computer vision. The difficulty is particularly pronounced in scale variations
of human body parts when camera view changes or severe foreshortening happens.
Although pyramid methods are widely used to handle scale changes at inference
time, learning feature pyramids in deep convolutional neural networks (DCNNs)
is still not well explored. In this work, we design a Pyramid Residual Module
(PRMs) to enhance the invariance in scales of DCNNs. Given input features, the
PRMs learn convolutional filters on various scales of input features, which are
obtained with different subsampling ratios in a multi-branch network. Moreover,
we observe that it is inappropriate to adopt existing methods to initialize the
weights of multi-branch networks, which achieve superior performance to plain
networks in many tasks recently. Therefore, we provide theoretic derivation to
extend the current weight initialization scheme to multi-branch network
structures. We investigate our method on two standard benchmarks for human pose
estimation. Our approach obtains state-of-the-art results on both benchmarks.
Code is available at https://github.com/bearpaw/PyraNet. | [] | [
"Pose Estimation"
] | [] | [
"Leeds Sports Poses",
"MPII Human Pose"
] | [
"PCK",
"PCKh-0.5"
] | Learning Feature Pyramids for Human Pose Estimation |
In this paper, we propose to incorporate convolutional neural networks with a
multi-context attention mechanism into an end-to-end framework for human pose
estimation. We adopt stacked hourglass networks to generate attention maps from
features at multiple resolutions with various semantics. The Conditional Random
Field (CRF) is utilized to model the correlations among neighboring regions in
the attention map. We further combine the holistic attention model, which
focuses on the global consistency of the full human body, and the body part
attention model, which focuses on the detailed description for different body
parts. Hence our model has the ability to focus on different granularity from
local salient regions to global semantic-consistent spaces. Additionally, we
design novel Hourglass Residual Units (HRUs) to increase the receptive field of
the network. These units are extensions of residual units with a side branch
incorporating filters with larger receptive fields, hence features with various
scales are learned and combined within the HRUs. The effectiveness of the
proposed multi-context attention mechanism and the hourglass residual units is
evaluated on two widely used human pose estimation benchmarks. Our approach
outperforms all existing methods on both benchmarks over all the body parts. | [] | [
"Pose Estimation"
] | [] | [
"Leeds Sports Poses",
"MPII Human Pose"
] | [
"PCK",
"PCKh-0.5"
] | Multi-Context Attention for Human Pose Estimation |
Random data augmentation is a critical technique to avoid overfitting in
training deep neural network models. However, data augmentation and network
training are usually treated as two isolated processes, limiting the
effectiveness of network training. Why not jointly optimize the two? We propose
adversarial data augmentation to address this limitation. The main idea is to
design an augmentation network (generator) that competes against a target
network (discriminator) by generating `hard' augmentation operations online.
The augmentation network explores the weaknesses of the target network, while
the latter learns from `hard' augmentations to achieve better performance. We
also design a reward/penalty strategy for effective joint training. We
demonstrate our approach on the problem of human pose estimation and carry out
a comprehensive experimental analysis, showing that our method can
significantly improve state-of-the-art models without additional data efforts. | [] | [
"Data Augmentation",
"Pose Estimation"
] | [] | [
"Leeds Sports Poses",
"MPII Human Pose"
] | [
"PCK",
"PCKh-0.5"
] | Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation |
Human pose estimation using deep neural networks aims to map input images
with large variations into multiple body keypoints which must satisfy a set of
geometric constraints and inter-dependency imposed by the human body model.
This is a very challenging nonlinear manifold learning process in a very high
dimensional feature space. We believe that the deep neural network, which is
inherently an algebraic computation system, is not the most efficient way to
capture highly sophisticated human knowledge, for example those highly coupled
geometric characteristics and interdependence between keypoints in human poses.
In this work, we propose to explore how external knowledge can be effectively
represented and injected into the deep neural network to guide its training
process using learned projections that impose a proper prior. Specifically, we
use the stacked hourglass design and inception-resnet module to construct a
fractal network to regress human pose images into heatmaps with no explicit
graphical modeling. We encode external knowledge with visual features which are
able to characterize the constraints of human body models and evaluate the
fitness of intermediate network output. We then inject these external features
into the neural network using a projection matrix learned using an auxiliary
cost function. The effectiveness of the proposed inception-resnet module and
the benefit in guided learning with knowledge projection is evaluated on two
widely used benchmarks. Our approach achieves state-of-the-art performance on
both datasets. | [] | [
"Pose Estimation"
] | [] | [
"Leeds Sports Poses",
"MPII Human Pose"
] | [
"PCK",
"PCKh-0.5"
] | Knowledge-Guided Deep Fractal Neural Networks for Human Pose Estimation |
Existing human pose estimation approaches often only consider how to improve
the model generalisation performance, while putting aside the significant
efficiency problem. This leads to the development of heavy models with poor
scalability and cost-effectiveness in practical use. In this work, we
investigate the under-studied but practically critical pose model efficiency
problem. To this end, we present a new Fast Pose Distillation (FPD) model
learning strategy. Specifically, the FPD trains a lightweight pose neural
network architecture capable of executing rapidly with low computational cost.
It is achieved by effectively transferring the pose structure knowledge of a
strong teacher network. Extensive evaluations demonstrate the advantages of our
FPD method over a broad range of state-of-the-art pose estimation approaches in
terms of model cost-effectiveness on two standard benchmark datasets, MPII
Human Pose and Leeds Sports Pose. | [] | [
"Pose Estimation"
] | [] | [
"Leeds Sports Poses",
"MPII Human Pose"
] | [
"PCK",
"PCKh-0.5"
] | Fast Human Pose Estimation |
In this paper we consider the problem of human pose estimation from a single
still image. We propose a novel approach where each location in the image votes
for the position of each keypoint using a convolutional neural net. The voting
scheme allows us to utilize information from the whole image, rather than rely
on a sparse set of keypoint locations. Using dense, multi-target votes not
only produces good keypoint predictions, but also enables us to compute
image-dependent joint keypoint probabilities by looking at consensus voting.
This differs from most previous methods where joint probabilities are learned
from relative keypoint locations and are independent of the image. We finally
combine the keypoints votes and joint probabilities in order to identify the
optimal pose configuration. We show our competitive performance on the MPII
Human Pose and Leeds Sports Pose datasets. | [] | [
"Pose Estimation"
] | [] | [
"MPII Human Pose"
] | [
"PCKh-0.5"
] | Human Pose Estimation using Deep Consensus Voting |
End-to-end automatic speech recognition (ASR) models with a single neural network have recently demonstrated state-of-the-art results compared to conventional hybrid speech recognizers. Specifically, the recurrent neural network transducer (RNN-T) has shown competitive ASR performance on various benchmarks. In this work, we examine ways in which RNN-T can achieve better ASR accuracy by performing auxiliary tasks. We propose (i) using the same auxiliary task as the primary RNN-T ASR task, and (ii) performing context-dependent graphemic state prediction as in conventional hybrid modeling. In transcribing social media videos with varying training data sizes, we first evaluate the streaming ASR performance on three languages: Romanian, Turkish and German. We find that both proposed methods provide consistent improvements. Next, we observe that both auxiliary tasks demonstrate efficacy in learning deep transformer encoders for the RNN-T criterion, thus achieving competitive results - 2.0%/4.2% WER on LibriSpeech test-clean/other - as compared to prior top-performing models. | [] | [
"Speech Recognition"
] | [] | [
"LibriSpeech test-other",
"LibriSpeech test-clean"
] | [
"Word Error Rate (WER)"
] | Improving RNN Transducer Based ASR with Auxiliary Tasks |
Joint segmentation and classification of fine-grained actions is important
for applications of human-robot interaction, video surveillance, and human
skill evaluation. However, despite substantial recent progress in large-scale
action classification, the performance of state-of-the-art fine-grained action
recognition approaches remains low. We propose a model for action segmentation
which combines low-level spatiotemporal features with a high-level segmental
classifier. Our spatiotemporal CNN is comprised of a spatial component that
uses convolutional filters to capture information about objects and their
relationships, and a temporal component that uses large 1D convolutional
filters to capture information about how object relationships change across
time. These features are used in tandem with a semi-Markov model that models
transitions from one action to another. We introduce an efficient constrained
segmental inference algorithm for this model that is orders of magnitude faster
than the current approach. We highlight the effectiveness of our Segmental
Spatiotemporal CNN on cooking and surgical action datasets for which we observe
substantially improved performance relative to recent baseline methods. | [] | [
"Action Classification",
"Action Classification ",
"Action Recognition",
"Action Segmentation",
"Fine-grained Action Recognition",
"Human robot interaction",
"Temporal Action Localization"
] | [] | [
"GTEA"
] | [
"Acc",
"Edit",
"F1@10%",
"F1@25%",
"F1@50%"
] | Segmental Spatiotemporal CNNs for Fine-grained Action Segmentation |
The design of complexity-aware cascaded detectors, combining features of very
different complexities, is considered. A new cascade design procedure is
introduced, by formulating cascade learning as the Lagrangian optimization of a
risk that accounts for both accuracy and complexity. A boosting algorithm,
denoted as complexity aware cascade training (CompACT), is then derived to
solve this optimization. CompACT cascades are shown to seek an optimal
trade-off between accuracy and complexity by pushing features of higher
complexity to the later cascade stages, where only a few difficult candidate
patches remain to be classified. This enables the use of features of vastly
different complexities in a single detector. As a result, the feature pool can be
expanded to features previously impractical for cascade design, such as the
responses of a deep convolutional neural network (CNN). This is demonstrated
through the design of a pedestrian detector with a pool of features whose
complexities span orders of magnitude. The resulting cascade generalizes the
combination of a CNN with an object proposal mechanism: rather than a
pre-processing stage, CompACT cascades seamlessly integrate CNNs in their
stages. This enables state of the art performance on the Caltech and KITTI
datasets, at fairly fast speeds. | [] | [
"Pedestrian Detection"
] | [] | [
"Caltech"
] | [
"Reasonable Miss Rate"
] | Learning Complexity-Aware Cascades for Deep Pedestrian Detection |
Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art. | [] | [
"Object Detection",
"Pedestrian Detection"
] | [] | [
"Caltech"
] | [
"Reasonable Miss Rate"
] | Local Decorrelation For Improved Pedestrian Detection |
The perception system in autonomous vehicles is responsible for detecting and tracking the surrounding objects. This is usually done by taking advantage of several sensing modalities to increase robustness and accuracy, which makes sensor fusion a crucial part of the perception system. In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection. Our approach, called CenterFusion, first uses a center point detection network to detect objects by identifying their center points on the image. It then solves the key data association problem using a novel frustum-based method to associate the radar detections to their corresponding object's center point. The associated radar detections are used to generate radar-based feature maps to complement the image features, and regress to object properties such as depth, rotation and velocity. We evaluate CenterFusion on the challenging nuScenes dataset, where it improves the overall nuScenes Detection Score (NDS) of the state-of-the-art camera-based algorithm by more than 12%. We further show that CenterFusion significantly improves the velocity estimation accuracy without using any additional temporal information. The code is available at https://github.com/mrnabati/CenterFusion . | [] | [
"3D Object Detection",
"Autonomous Vehicles",
"Object Detection",
"Sensor Fusion"
] | [] | [
"nuScenes"
] | [
"mAP",
"NDS"
] | CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection |
Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | [] | [
"Image Generation"
] | [] | [
"CIFAR-10"
] | [
"Inception score"
] | Generative Multi-Adversarial Networks |
Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today's VQA models can not read! Our paper takes a first step towards addressing this problem. First, we introduce a new "TextVQA" dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer. Second, we introduce a novel model architecture that reads text in the image, reasons about it in the context of the image and the question, and predicts an answer which might be a deduction based on the text and the image or composed of the strings found in the image. Consequently, we call our approach Look, Read, Reason & Answer (LoRRA). We show that LoRRA outperforms existing state-of-the-art VQA models on our TextVQA dataset. We find that the gap between human performance and machine performance is significantly larger on TextVQA than on VQA 2.0, suggesting that TextVQA is well-suited to benchmark progress along directions complementary to VQA 2.0. | [] | [
"Visual Question Answering"
] | [] | [
"VizWiz 2018",
"VQA v2 test-dev"
] | [
"overall",
"Accuracy"
] | Towards VQA Models That Can Read |
Human conversation is a complex mechanism with subtle nuances. It is hence an
ambitious goal to develop artificial intelligence agents that can participate
fluently in a conversation. While we are still far from achieving this goal,
recent progress in visual question answering, image captioning, and visual
question generation shows that dialog systems may be realizable in the not too
distant future. To this end, a novel dataset was introduced recently and
encouraging results were demonstrated, particularly for question answering. In
this paper, we demonstrate a simple symmetric discriminative baseline that can
be applied both to predicting an answer and to predicting a question. We
show that this method performs on par with the state of the art, even with
memory-net-based methods. In addition, for the first time on the visual dialog
dataset, we assess the performance of a system asking questions, and
demonstrate how visual dialog can be generated from discriminative question
generation and question answering. | [] | [
"Image Captioning",
"Question Answering",
"Question Generation",
"Visual Dialog",
"Visual Question Answering"
] | [] | [
"VisDial v0.9 val"
] | [
"R@10",
"R@5",
"Mean Rank",
"MRR",
"R@1"
] | Two can play this Game: Visual Dialog with Discriminative Question Generation and Answering |
We propose the Temporal Point Cloud Networks (TPCN), a novel and flexible framework with joint spatial and temporal learning for trajectory prediction. Unlike existing approaches that rasterize agents and map information as 2D images or operate in a graph representation, our approach extends ideas from point cloud learning with dynamic temporal learning to capture both spatial and temporal information by splitting trajectory prediction into both spatial and temporal dimensions. In the spatial dimension, agents can be viewed as an unordered point set, and thus it is straightforward to apply point cloud learning techniques to model agents' locations. While the spatial dimension does not take kinematic and motion information into account, we further propose dynamic temporal learning to model agents' motion over time. Experiments on the Argoverse motion forecasting benchmark show that our approach achieves the state-of-the-art results. | [] | [
"Motion Forecasting",
"Trajectory Prediction"
] | [] | [
"Argoverse CVPR 2020"
] | [
"p-minADE (K=6)",
"MR (K=1)",
"DAC (K=6)",
"DAC (K=1)",
"minFDE (K=6)",
"minADE (K=1)",
"MR (K=6)",
"minADE (K=6)",
"minFDE (K=1)",
"p-minFDE (K=6)"
] | TPCN: Temporal Point Cloud Networks for Motion Forecasting |
Deep learning approaches to 3D shape segmentation are typically formulated as
a multi-class labeling problem. Existing models are trained for a fixed set of
labels, which greatly limits their flexibility and adaptivity. We opt for
top-down recursive decomposition and develop the first deep learning model for
hierarchical segmentation of 3D shapes, based on recursive neural networks.
Starting from a full shape represented as a point cloud, our model performs
recursive binary decomposition, where the decomposition networks at all nodes in
the hierarchy share weights. At each node, a node classifier is trained to
determine the type (adjacency or symmetry) and stopping criteria of its
decomposition. The features extracted in higher level nodes are recursively
propagated to lower level ones. Thus, the meaningful decompositions in higher
levels provide strong contextual cues constraining the segmentations in lower
levels. Meanwhile, to increase the segmentation accuracy at each node, we
enhance the recursive contextual feature with the shape feature extracted for
the corresponding part. Our method segments a 3D shape in point cloud form into an
unfixed number of parts, depending on the shape complexity, showing strong
generality and flexibility. It achieves state-of-the-art performance, both
for fine-grained and semantic segmentation, on the public benchmark and a new
benchmark of fine-grained segmentation proposed in this work. We also
demonstrate its application for fine-grained part refinements in image-to-shape
reconstruction. | [] | [
"3D Instance Segmentation",
"3D Part Segmentation",
"Semantic Segmentation"
] | [] | [
"ShapeNet-Part",
"S3DIS"
] | [
"Class Average IoU",
"mRec"
] | PartNet: A Recursive Part Decomposition Network for Fine-grained and Hierarchical Shape Segmentation |
Visual dialog (VisDial) is a task which requires an AI agent to answer a series of questions grounded in an image. Unlike in visual question answering (VQA), the series of questions should be able to capture a temporal context from a dialog history and exploit visually-grounded information. A problem called visual reference resolution involves these challenges, requiring the agent to resolve ambiguous references in a given question and find the references in a given image. In this paper, we propose Dual Attention Networks (DAN) for visual reference resolution. DAN consists of two kinds of attention networks, REFER and FIND. Specifically, REFER module learns latent relationships between a given question and a dialog history by employing a self-attention mechanism. FIND module takes image features and reference-aware representations (i.e., the output of REFER module) as input, and performs visual grounding via bottom-up attention mechanism. We qualitatively and quantitatively evaluate our model on VisDial v1.0 and v0.9 datasets, showing that DAN outperforms the previous state-of-the-art model by a significant margin. | [] | [
"Question Answering",
"Visual Dialog",
"Visual Grounding",
"Visual Question Answering"
] | [] | [
"Visual Dialog v1.0 test-std",
"VisDial v0.9 val"
] | [
"MRR (x 100)",
"R@10",
"NDCG (x 100)",
"R@5",
"Mean Rank",
"MRR",
"Mean",
"R@1"
] | Dual Attention Networks for Visual Reference Resolution in Visual Dialog |
Solving grounded language tasks often requires reasoning about relationships between objects in the context of a given task. For example, to answer the question "What color is the mug on the plate?" we must check the color of the specific mug that satisfies the "on" relationship with respect to the plate. Recent work has proposed various methods capable of complex relational reasoning. However, most of their power is in the inference structure, while the scene is represented with simple local appearance features. In this paper, we take an alternate approach and build contextualized representations for objects in a visual scene to support relational reasoning. We propose a general framework of Language-Conditioned Graph Networks (LCGN), where each node represents an object, and is described by a context-aware representation from related objects through iterative message passing conditioned on the textual input. E.g., conditioning on the "on" relationship to the plate, the object "mug" gathers messages from the object "plate" to update its representation to "mug on the plate", which can be easily consumed by a simple classifier for answer prediction. We experimentally show that our LCGN approach effectively supports relational reasoning and improves performance across several tasks and datasets. Our code is available at http://ronghanghu.com/lcgn. | [] | [
"Referring Expression Comprehension",
"Relational Reasoning",
"Visual Question Answering"
] | [] | [
"GQA test-dev",
"GQA test-std",
"CLEVR"
] | [
"Accuracy"
] | Language-Conditioned Graph Networks for Relational Reasoning |
Knowledge graphs (KGs) have become popular structures for unifying real-world entities by modelling the relationships between them and their attributes. Entity alignment -- the task of identifying corresponding entities across different KGs -- has attracted a great deal of attention in both academia and industry. However, existing alignment techniques often require large amounts of labelled data, are unable to encode multi-modal data simultaneously, and enforce only a few consistency constraints. In this paper, we propose an end-to-end, unsupervised entity alignment framework for cross-lingual KGs that fuses different types of information in order to fully exploit the richness of KG data. The model captures the relation-based correlation between entities by using a multi-order graph convolutional network (GCN) model that is designed to satisfy the consistency constraints, while incorporating the attribute-based correlation via a translation machine. We adopt a late-fusion mechanism to combine all the information, which allows these approaches to complement each other, enhances the final alignment result, and makes the model more robust to consistency violations. Empirical results show that our model is more accurate and orders of magnitude faster than existing baselines. We also demonstrate its sensitivity to hyper-parameters, its effort savings in terms of labelling, and its robustness against adversarial conditions. | [] | [
"Entity Alignment",
"Knowledge Graphs"
] | [] | [
"DBP15k zh-en",
"dbp15k fr-en",
"dbp15k ja-en"
] | [
"Hits@1"
] | Entity Alignment for Knowledge Graphs with Multi-order Convolutional Networks |
Reasoning about human motion is an important prerequisite to safe and socially-aware robotic navigation. As a result, multi-agent behavior prediction has become a core component of modern human-robot interactive systems, such as self-driving cars. While there exist many methods for trajectory forecasting, most do not enforce dynamic constraints and do not account for environmental information (e.g., maps). Towards this end, we present Trajectron++, a modular, graph-structured recurrent model that forecasts the trajectories of a general number of diverse agents while incorporating agent dynamics and heterogeneous data (e.g., semantic maps). Trajectron++ is designed to be tightly integrated with robotic planning and control frameworks; for example, it can produce predictions that are optionally conditioned on ego-agent motion plans. We demonstrate its performance on several challenging real-world trajectory forecasting datasets, outperforming a wide array of state-of-the-art deterministic and generative methods. | [] | [
"Motion Forecasting",
"Self-Driving Cars",
"Trajectory Forecasting",
"Trajectory Prediction"
] | [] | [
"nuScenes"
] | [
"MinADE_10",
"MissRateTopK_2_10",
"MinADE_5",
"MissRateTopK_2_5",
"MinFDE_1",
"OffRoadRate"
] | Trajectron++: Dynamically-Feasible Trajectory Forecasting With Heterogeneous Data |
Question Answering (QA) systems are used to provide proper responses to users' questions automatically. Sentence matching is an essential task in the QA systems and is usually reformulated as a Paraphrase Identification (PI) problem. Given a question, the aim of the task is to find the most similar question from a QA knowledge base. In this paper, we propose a Multi-task Sentence Encoding Model (MSEM) for the PI problem, wherein a connected graph is employed to depict the relation between sentences, and a multi-task learning model is applied to address both the sentence matching and sentence intent classification problem. In addition, we implement a general semantic retrieval framework that combines our proposed model and the Approximate Nearest Neighbor (ANN) technology, which enables us to find the most similar question from all available candidates very quickly during online serving. The experiments show the superiority of our proposed method as compared with the existing sentence matching models. | [] | [
"Intent Classification",
"Multi-Task Learning",
"Paraphrase Identification",
"Question Answering",
"Semantic Retrieval"
] | [] | [
"Quora Question Pairs"
] | [
"Accuracy"
] | Multi-task Sentence Encoding Model for Semantic Retrieval in Question Answering Systems |
Most multiple people tracking systems compute trajectories based on the tracking-by-detection paradigm. Consequently, the performance depends to a large extent on the quality of the employed input detections. However, despite enormous progress in recent years, partially occluded people are still often not recognized. Also, many correct detections are mistakenly discarded when non-maximum suppression is performed. Improving the tracking performance thus requires augmenting the coarse input. Well-suited for this task are fine-grained body joint detections, as they make it possible to locate even strongly occluded persons.
Thus, in this work, we analyze the suitability of including joint detections for multiple people tracking. We introduce different affinities between the two detection types and evaluate their performance. Tracking is then performed within a near-online framework based on a min-cost graph labeling formulation. As a result, our framework can recover heavily occluded persons and solve the data association efficiently. We evaluate our framework on the MOT16/17 benchmark. Experimental results demonstrate that our framework achieves state-of-the-art results. | [] | [
"Multi-Object Tracking",
"Multiple People Tracking"
] | [] | [
"MOT17"
] | [
"MOTA"
] | Multiple People Tracking using Body and Joint Detections |
We present a solution to the problem of paraphrase identification of
questions. We focus on a recent dataset of question pairs annotated with binary
paraphrase labels and show that a variant of the decomposable attention model
(Parikh et al., 2016) results in accurate performance on this task, while being
far simpler than many competing neural architectures. Furthermore, when the
model is pretrained on a noisy dataset of automatically collected question
paraphrases, it obtains the best reported performance on the dataset. | [] | [
"Paraphrase Identification"
] | [] | [
"Quora Question Pairs"
] | [
"Accuracy"
] | Neural Paraphrase Identification of Questions with Noisy Pretraining |
Supervised learning techniques are at the center of many tasks in remote sensing. Unfortunately, these methods, especially recent deep learning methods, often require large amounts of labeled data for training. Even though satellites acquire large amounts of data, labeling the data is often tedious, expensive and requires expert knowledge. Hence, improved methods that require fewer labeled samples are needed. We present MSMatch, the first semi-supervised learning approach competitive with supervised methods on scene classification on the EuroSAT benchmark dataset. We test both RGB and multispectral images and perform various ablation studies to identify the critical parts of the model. The trained neural network achieves state-of-the-art results on EuroSAT with an accuracy that is between 1.98% and 19.76% better than previous methods depending on the number of labeled training examples. With just five labeled examples per class we reach 94.53% and 95.86% accuracy on the EuroSAT RGB and multispectral datasets, respectively. With 50 labels per class we reach 97.62% and 98.23% accuracy. Our results show that MSMatch is capable of greatly reducing the requirements for labeled data. It translates well to multispectral data and should enable various applications that are currently infeasible due to a lack of labeled data. We provide the source code of MSMatch online to enable easy reproduction and quick adoption. | [] | [] | [] | [
"EuroSAT"
] | [
"Accuracy (%)"
] | MSMatch: Semi-Supervised Multispectral Scene Classification with Few Labels |
Recently, deep neural networks (DNNs) have been successfully used for speech enhancement, and DNN-based speech enhancement is becoming an attractive research area. While time-frequency masking based on the short-time Fourier transform (STFT) has been widely used for DNN-based speech enhancement over the last years, time-domain methods such as the time-domain audio separation network (TasNet) have also been proposed. The most suitable method depends on the scale of the dataset and the type of task. In this paper, we explore the best speech enhancement algorithm on two different datasets. We propose an STFT-based method and a loss function using problem-agnostic speech encoder (PASE) features to improve subjective quality for the smaller dataset. Our proposed methods are effective on the Voice Bank + DEMAND dataset and compare favorably to other state-of-the-art methods. We also implement a low-latency version of TasNet, which we submitted to the DNS Challenge and made public by open-sourcing it. Our model achieves excellent performance on the DNS Challenge dataset. | [] | [
"Speech Dereverberation",
"Speech Enhancement"
] | [] | [
"Deep Noise Suppression (DNS) Challenge"
] | [
"ΔPESQ",
"PESQ-WB",
"PESQ"
] | Exploring the Best Loss Function for DNN-Based Low-latency Speech Enhancement with Temporal Convolutional Networks |
We have created a large diverse set of cars from overhead images, which are
useful for training a deep learner to binary classify, detect and count them.
The dataset and all related material will be made publicly available. The set
contains contextual matter to aid in identification of difficult targets. We
demonstrate classification and detection on this dataset using a neural network
we call ResCeption. This network combines residual learning with
Inception-style layers and is used to count cars in one look. This is a new way
to count objects rather than by localization or density estimation. It is
fairly accurate, fast and easy to implement. Additionally, the counting method
is not car or scene specific. It would be easy to train this method to count
other kinds of objects and counting over new scenes requires no extra set up or
assumptions about object locations. | [] | [
"Density Estimation"
] | [] | [
"CARPK"
] | [
"MAE",
"RMSE"
] | A Large Contextual Dataset for Classification, Detection and Counting of Cars with Deep Learning |
Semantic part localization can facilitate fine-grained categorization by
explicitly isolating subtle appearance differences associated with specific
object parts. Methods for pose-normalized representations have been proposed,
but generally presume bounding box annotations at test time due to the
difficulty of object detection. We propose a model for fine-grained
categorization that overcomes these limitations by leveraging deep
convolutional features computed on bottom-up region proposals. Our method
learns whole-object and part detectors, enforces learned geometric constraints
between them, and predicts a fine-grained category from a pose-normalized
representation. Experiments on the Caltech-UCSD bird dataset confirm that our
method outperforms state-of-the-art fine-grained categorization methods in an
end-to-end evaluation without requiring a bounding box at test time. | [] | [
"Fine-Grained Image Classification",
"Object Detection"
] | [] | [
" CUB-200-2011"
] | [
"Accuracy"
] | Part-based R-CNNs for Fine-grained Category Detection |
We propose a novel model to address the task of Visual Dialog which exhibits complex dialog structures. To obtain a reasonable answer based on the current question and the dialog history, the underlying semantic dependencies between dialog entities are essential. In this paper, we explicitly formalize this task as inference in a graphical model with partially observed nodes and unknown graph structures (relations in dialog). The given dialog entities are viewed as the observed nodes. The answer to a given question is represented by a node with missing value. We first introduce an Expectation Maximization algorithm to infer both the underlying dialog structures and the missing node values (desired answers). Based on this, we proceed to propose a differentiable graph neural network (GNN) solution that approximates this process. Experiment results on the VisDial and VisDial-Q datasets show that our model outperforms comparative methods. It is also observed that our method can infer the underlying dialog structure for better dialog reasoning. | [] | [
"Visual Dialog"
] | [] | [
"Visual Dialog v1.0 test-std",
"VisDial v0.9 val"
] | [
"MRR (x 100)",
"R@10",
"NDCG (x 100)",
"R@5",
"Mean Rank",
"MRR",
"Mean",
"R@1"
] | Reasoning Visual Dialogs with Structural and Partial Observations |
Deep neural networks have been exhibiting splendid accuracies in many of
visual pattern classification problems. Many of the state-of-the-art methods
employ a technique known as data augmentation at the training stage. This paper
addresses the issue of the decision rule for classifiers trained with augmented
data. Our method is named APAC: the Augmented PAttern Classification, which
is a way of classification using the optimal decision rule for augmented data
learning. Discussion of methods of data augmentation is not our primary focus.
We show clear evidence that APAC gives far better generalization performance
than the traditional way of class prediction in several experiments. Our
convolutional neural network model with APAC achieved a state-of-the-art
accuracy on the MNIST dataset among non-ensemble classifiers. Even our
multilayer perceptron model beats some of the convolutional models with
recently invented stochastic regularization techniques on the CIFAR-10 dataset. | [] | [
"Data Augmentation",
"Image Classification"
] | [] | [
"MNIST",
"CIFAR-10"
] | [
"Percentage error",
"Percentage correct"
] | APAC: Augmented PAttern Classification with Neural Networks |
In recent years, various shadow detection methods from a single image have
been proposed and used in vision systems; however, most of them are not
appropriate for the robotic applications due to the expensive time complexity.
This paper introduces a fast shadow detection method using a deep learning
framework, with a time cost that is appropriate for robotic applications. In
our solution, we first obtain a shadow prior map with the help of multi-class
support vector machine using statistical features. Then, we use a semantic-
aware patch-level Convolutional Neural Network that efficiently trains on
shadow examples by combining the original image and the shadow prior map.
Experiments on benchmark datasets demonstrate that the proposed method significantly
decreases the time complexity of shadow detection, by one or two orders of
magnitude compared with state-of-the-art methods, without losing accuracy. | [] | [
"Shadow Detection"
] | [] | [
"SBU"
] | [
"BER"
] | Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network |
An important goal in visual recognition is to devise image representations
that are invariant to particular transformations. In this paper, we address
this goal with a new type of convolutional neural network (CNN) whose
invariance is encoded by a reproducing kernel. Unlike traditional approaches
where neural networks are learned either to represent data or for solving a
classification task, our network learns to approximate the kernel feature map
on training data. Such an approach enjoys several benefits over classical ones.
First, by teaching CNNs to be invariant, we obtain simple network architectures
that achieve a similar accuracy to more complex ones, while being easy to train
and robust to overfitting. Second, we bridge a gap between the neural network
literature and kernels, which are natural tools to model invariance. We
evaluate our methodology on visual recognition tasks where CNNs have proven to
perform well, e.g., digit recognition with the MNIST dataset, and the more
challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive
with the state of the art. | [] | [
"Image Classification"
] | [] | [
"STL-10",
"MNIST",
"CIFAR-10"
] | [
"Percentage error",
"Percentage correct"
] | Convolutional Kernel Networks |
In this paper, we investigate the problem of learning feature representation
from unlabeled data using a single-layer K-means network. A K-means network
maps the input data into a feature representation by finding the nearest
centroid for each input point, which has attracted researchers' great attention
recently due to its simplicity, effectiveness, and scalability. However, one
drawback of this feature mapping is that it tends to be unreliable when the
training data contains noise. To address this issue, we propose a SVDD based
feature learning algorithm that describes the density and distribution of each
cluster from K-means with an SVDD ball for more robust feature representation.
For this purpose, we present a new SVDD algorithm called C-SVDD that centers
the SVDD ball towards the mode of local density of each cluster, and we show
that the objective of C-SVDD can be solved very efficiently as a linear
programming problem. Additionally, traditional unsupervised feature learning
methods usually take an average or sum of local representations to obtain a
global representation, which ignores the spatial relationship among them. To use
spatial information, we propose a global representation with a variant of the SIFT
descriptor. The architecture is also extended with multiple receptive field
scales and multiple pooling sizes. Extensive experiments on several popular
object recognition benchmarks, such as STL-10, MNIST, Holiday and Copydays,
show that the proposed C-SVDDNet method yields comparable or better
performance than that of the previous state-of-the-art methods. | [] | [
"Image Classification",
"Object Recognition"
] | [] | [
"MNIST",
"STL-10"
] | [
"Percentage error",
"Percentage correct"
] | Unsupervised Feature Learning with C-SVDDNet |
Methods for unconstrained face alignment must satisfy two requirements: they must not rely on accurate initialisation/face detection and they should perform equally well for the whole spectrum of facial poses. To the best of our knowledge, there are no methods meeting these requirements to satisfactory extent, and in this paper, we propose Convolutional Aggregation of Local Evidence (CALE), a Convolutional Neural Network (CNN) architecture particularly designed for addressing both of them. In particular, to remove the requirement for accurate face detection, our system firstly performs facial part detection, providing confidence scores for the location of each of the facial landmarks (local evidence). Next, these score maps along with early CNN features are aggregated by our system through joint regression in order to refine the landmarks’ location. Besides playing the role of a graphical model, CNN regression is a key feature of our system, guiding the network to rely on context for predicting the location of occluded landmarks, typically encountered in very large poses. The whole system is trained end-to-end with intermediate supervision. When applied to AFLW-PIFA, the most challenging human face alignment test set to date, our method provides more than 50% gain in localisation accuracy when compared to other recently published methods for large pose face alignment. Going beyond human faces, we also demonstrate that CALE is effective in dealing with very large changes in shape and appearance, typically encountered in animal faces. | [] | [
"Face Alignment",
"Face Detection",
"Regression"
] | [] | [
"AFLW-PIFA (34 points)",
"AFLW-PIFA (21 points)"
] | [
"NME"
] | Convolutional aggregation of local evidence for large pose face alignment |
Sparseness is a useful regularizer for learning in a wide range of
applications, in particular in neural networks. This paper proposes a model
targeted at classification tasks, where sparse activity and sparse connectivity
are used to enhance classification capabilities. The tool for achieving this is
a sparseness-enforcing projection operator which finds the closest vector with
a pre-defined sparseness for any given vector. In the theoretical part of this
paper, a comprehensive theory for such a projection is developed. In
conclusion, it is shown that the projection is differentiable almost everywhere
and can thus be implemented as a smooth neuronal transfer function. The entire
model can hence be tuned end-to-end using gradient-based methods. Experiments
on the MNIST database of handwritten digits show that classification
performance can be boosted by sparse activity or sparse connectivity. With a
combination of both, performance can be significantly better compared to
classical non-sparse approaches. | [] | [
"Image Classification"
] | [] | [
"MNIST"
] | [
"Percentage error"
] | Sparse Activity and Sparse Connectivity in Supervised Learning |
This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art. Our source code is available under https://github.com/rusty1s/deep-graph-matching-consensus. | [] | [
"Entity Alignment",
"Graph Matching",
"Knowledge Graphs"
] | [] | [
"DBP15k zh-en"
] | [
"Hits@1"
] | Deep Graph Matching Consensus |
SegBlocks reduces the computational cost of existing neural networks by dynamically adjusting the processing resolution of image regions based on their complexity. Our method splits an image into blocks and downsamples blocks of low complexity, reducing the number of operations and memory consumption. A lightweight policy network, selecting the complex regions, is trained using reinforcement learning. In addition, we introduce several modules implemented in CUDA to process images in blocks. Most importantly, our novel BlockPad module prevents the feature discontinuities at block borders from which existing methods suffer, while keeping memory consumption under control. Our experiments on Cityscapes and Mapillary Vistas semantic segmentation show that dynamically processing images offers a better accuracy versus complexity trade-off compared to static baselines of similar complexity. For instance, our method reduces the number of floating-point operations of SwiftNet-RN18 by 60% and increases the inference speed by 50%, with only a 0.3% decrease in mIoU accuracy on Cityscapes. | [] | [
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"Mapillary val",
"Cityscapes test"
] | [
"Frame (fps)",
"mIoU"
] | SegBlocks: Block-Based Dynamic Resolution Networks for Real-Time Segmentation |
In this paper, we study the problem of semantic annotation on 3D models that
are represented as shape graphs. A functional view is taken to represent
localized information on graphs, so that annotations such as part segment or
keypoint are nothing but 0-1 indicator vertex functions. Compared with images
that are 2D grids, shape graphs are irregular and non-isomorphic data
structures. To enable the prediction of vertex functions on them by
convolutional neural networks, we resort to spectral CNN method that enables
weight sharing by parameterizing kernels in the spectral domain spanned by
graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN,
strives to overcome two key challenges: how to share coefficients and conduct
multi-scale analysis in different parts of the graph for a single shape, and
how to share information across related but different shapes that may be
represented by very different graphs. Towards these goals, we introduce a
spectral parameterization of dilated convolutional kernels and a spectral
transformer network. Experimentally we tested our SyncSpecCNN on various tasks,
including 3D shape part segmentation and 3D keypoint prediction.
State-of-the-art performance has been achieved on all benchmark datasets. | [] | [
"3D Part Segmentation"
] | [] | [
"ShapeNet-Part"
] | [
"Class Average IoU",
"Instance Average IoU"
] | SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation |
Modern neural networks have the capacity to overfit noisy labels frequently found in real-world datasets. Although great progress has been made, existing techniques are limited in providing theoretical guarantees for the performance of neural networks trained with noisy labels. Here we propose a novel approach with strong theoretical guarantees for robust training of deep networks trained with noisy labels. The key idea behind our method is to select weighted subsets (coresets) of clean data points that provide an approximately low-rank Jacobian matrix. We then prove that gradient descent applied to the subsets does not overfit the noisy labels. Our extensive experiments corroborate our theory and demonstrate that deep networks trained on our subsets achieve significantly superior performance compared to the state of the art, e.g., a 6% increase in accuracy on CIFAR-10 with 80% noisy labels, and a 7% increase in accuracy on mini Webvision. | [] | [] | [] | [
"mini WebVision 1.0"
] | [
"Top-5 Accuracy",
"ImageNet Top-1 Accuracy",
"ImageNet Top-5 Accuracy",
"Top-1 Accuracy"
] | Coresets for Robust Training of Neural Networks against Noisy Labels |
Disambiguating named entities in natural-language text maps mentions of ambiguous names onto canonical entities like people or places, registered in a knowledge base such as DBpedia or YAGO. This paper presents a robust method for collective disambiguation, by harnessing context from knowledge bases and using a new form of coherence graph. It unifies prior approaches into a comprehensive framework that combines three measures: the prior probability of an entity being mentioned, the similarity between the contexts of a mention and a candidate entity, as well as the coherence among candidate entities for all mentions together. The method builds a weighted graph of mentions and candidate entities, and computes a dense subgraph that approximates the best joint mention-entity mapping. Experiments show that the new method significantly outperforms prior methods in terms of accuracy, with robust behavior across a variety of inputs. | [] | [
"Entity Disambiguation",
"Entity Linking"
] | [] | [
"AIDA-CoNLL"
] | [
"Micro-F1 strong",
"Macro-F1 strong",
"In-KB Accuracy"
] | Robust Disambiguation of Named Entities in Text |
Character-based neural models have recently proven very useful for many NLP
tasks. However, there is a gap of sophistication between methods for learning
representations of sentences and words. While most character models for
learning representations of sentences are deep and complex, models for learning
representations of words are shallow and simple. Also, in spite of considerable
research on learning character embeddings, it is still not clear which kind of
architecture is the best for capturing character-to-word representations. To
address these questions, we first investigate the gaps between methods for
learning word and sentence representations. We conduct detailed experiments and
comparisons of different state-of-the-art convolutional models, and also
investigate the advantages and disadvantages of their constituents.
Furthermore, we propose IntNet, a funnel-shaped wide convolutional neural
architecture with no down-sampling for learning representations of the internal
structure of words by composing their characters from limited, supervised
training corpora. We evaluate our proposed model on six sequence labeling
datasets, including named entity recognition, part-of-speech tagging, and
syntactic chunking. Our in-depth analysis shows that IntNet significantly
outperforms other character embedding models and obtains new state-of-the-art
performance without relying on any external knowledge or resources. | [] | [
"Chunking",
"Named Entity Recognition",
"Part-Of-Speech Tagging"
] | [] | [
"CoNLL 2003 (English)",
"Penn Treebank"
] | [
"F1",
"F1 score",
"Accuracy"
] | Learning Better Internal Structure of Words for Sequence Labeling |
One of the most important factors that directly and significantly affects the quality of neural sequence labeling is the selection and encoding of input features to generate rich semantic and grammatical representation vectors. In this paper, we propose a deep neural network model to address a particular sequence labeling task, Named Entity Recognition (NER). The model consists of three sub-networks to fully exploit character-level and capitalization features as well as word-level contextual representation. To show the ability of our model to generalize to different languages, we evaluated the model in Russian, Vietnamese, English and Chinese and obtained state-of-the-art performance: 91.10%, 94.43%, 91.22% and 92.95% F-measure on Gareev's dataset, VLSP-2016, CoNLL-2003 and MSRA, respectively. In addition, our model obtained good performance (about 70% F1) using only 100 samples for the training and development sets. | [] | [
"Named Entity Recognition",
"Named Entity Recognition In Vietnamese"
] | [] | [
"CoNLL 2003 (English)",
"VLSP-2016"
] | [
"F1"
] | A Deep Neural Network Model for the Task of Named Entity Recognition |
Recent research on time-domain audio separation networks (TasNets) has brought great success to speech separation. Nevertheless, conventional TasNets struggle to satisfy the memory and latency constraints in industrial applications. In this regard, we design a low-cost, high-performance architecture, namely the globally attentive locally recurrent (GALR) network. Like the dual-path RNN (DPRNN), we first split a feature sequence into 2D segments and then process the sequence along both the intra- and inter-segment dimensions. Our main innovation lies in that, on top of features recurrently processed along the inter-segment dimensions, GALR applies a self-attention mechanism to the sequence along the inter-segment dimension, which aggregates context-aware information and also enables parallelization. Our experiments suggest that GALR is a notably more effective network than the prior work. On one hand, with only 1.5M parameters, it has achieved comparable separation performance at a much lower cost, with 36.1% less runtime memory and 49.4% fewer computational operations, relative to the DPRNN. On the other hand, with a model size comparable to DPRNN, GALR has consistently outperformed DPRNN on three datasets, in particular with a substantial margin of 2.4 dB absolute improvement in SI-SNRi on the benchmark WSJ0-2mix task. | [] | [
"Speech Separation"
] | [] | [
"wsj0-2mix"
] | [
"SI-SDRi"
] | Effective Low-Cost Time-Domain Audio Separation Using Globally Attentive Locally Recurrent Networks |
We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities). Our model is formally a structured conditional random field. Unary factors encode local features from strong baselines for each task. We then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. On the ACE 2005 and OntoNotes datasets, we achieve state-of-the-art results for all three tasks. Moreover, joint modeling improves performance on each task over strong independent baselines. | [] | [
"Coreference Resolution",
"Entity Linking",
"Named Entity Recognition"
] | [] | [
"Ontonotes v5 (English)"
] | [
"F1"
] | A Joint Model for Entity Analysis: Coreference, Typing, and Linking |
Most video-based action recognition approaches choose to extract features from the whole video to recognize actions. The cluttered background and non-action motions limit the performances of these methods, since they lack the explicit modeling of human body movements. With recent advances of human pose estimation, this work presents a novel method to recognize human action as the evolution of pose estimation maps. Instead of relying on the inaccurate human poses estimated from videos, we observe that pose estimation maps, the byproduct of pose estimation, preserve richer cues of human body to benefit action recognition. Specifically, the evolution of pose estimation maps can be decomposed as an evolution of heatmaps, e.g., probabilistic maps, and an evolution of estimated 2D human poses, which denote the changes of body shape and body pose, respectively. Considering the sparse property of heatmap, we develop spatial rank pooling to aggregate the evolution of heatmaps as a body shape evolution image. As body shape evolution image does not differentiate body parts, we design body guided sampling to aggregate the evolution of poses as a body pose evolution image. The complementary properties between both types of images are explored by deep convolutional neural networks to predict action label. Experiments on NTU RGB+D, UTD-MHAD and PennAction datasets verify the effectiveness of our method, which outperforms most state-of-the-art methods. | [] | [
"Action Recognition",
"Multimodal Activity Recognition",
"Pose Estimation",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D",
"UTD-MHAD",
"NTU RGB+D 120"
] | [
"Accuracy (Cross-Subject)",
"Accuracy (Cross-Setup)",
"Accuracy (CV)",
"Accuracy (CS)"
] | Recognizing Human Actions as the Evolution of Pose Estimation Maps |
Highly regularized LSTMs achieve impressive results on several benchmark
datasets in language modeling. We propose a new regularization method based on
decoding the last token in the context using the predicted distribution of the
next token. This biases the model towards retaining more contextual
information, in turn improving its ability to predict the next token. With
negligible overhead in the number of parameters and training time, our Past
Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on
the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax.
We also show gains by using PDR in combination with a mixture-of-softmaxes,
achieving a word level perplexity of 53.8 and 60.5 on these datasets. In
addition, our method achieves 1.169 bits-per-character on the Penn Treebank
Character dataset for character level language modeling. These results
constitute a new state-of-the-art in their respective settings. | [] | [
"Language Modelling"
] | [] | [
"Penn Treebank (Word Level)",
"WikiText-2",
"Penn Treebank (Character Level)"
] | [
"Number of params",
"Bit per Character (BPC)",
"Validation perplexity",
"Test perplexity",
"Params"
] | Improved Language Modeling by Decoding the Past |
Single modality action recognition on RGB or depth sequences has been
extensively explored recently. It is generally accepted that each of these two
modalities has different strengths and limitations for the task of action
recognition. Therefore, analysis of the RGB+D videos can help us to better
study the complementary properties of these two types of modalities and achieve
higher levels of performance. In this paper, we propose a new deep autoencoder
based shared-specific feature factorization network to separate input
multimodal signals into a hierarchy of components. Further, based on the
structure of the features, a structured sparsity learning machine is proposed
which utilizes mixed norms to apply regularization within components and group
selection between them for better classification performance. Our experimental
results show the effectiveness of our cross-modality feature analysis framework
by achieving state-of-the-art accuracy for action classification on five
challenging benchmark datasets. | [] | [
"Action Classification",
"Action Classification ",
"Action Recognition",
"Multimodal Activity Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D",
"MSR Daily Activity3D dataset"
] | [
"Accuracy (CS)",
"Accuracy"
] | Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos |
This paper presents a deep learning based approach to the problem of human
pose estimation. We employ generative adversarial networks as our learning
paradigm in which we set up two stacked hourglass networks with the same
architecture, one as the generator and the other as the discriminator. The
generator is used as a human pose estimator after the training is done. The
discriminator distinguishes ground-truth heatmaps from generated ones, and
back-propagates the adversarial loss to the generator. This process enables the
generator to learn plausible human body configurations and is shown to be
useful for improving the prediction accuracy. | [] | [
"Pose Estimation"
] | [] | [
"MPII Human Pose"
] | [
"PCKh-0.5"
] | Self Adversarial Training for Human Pose Estimation |
Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112-times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8-states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. The official GitHub repository: https://github.com/agemagician/ProtTrans | [] | [
"Dimensionality Reduction",
"Protein Secondary Structure Prediction"
] | [] | [
"CASP12",
"CB513",
"TS115"
] | [
"Q8",
"Q3"
] | ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing |
In this paper, we systematically analyze the connecting architectures of
recurrent neural networks (RNNs). Our main contribution is twofold: first, we
present a rigorous graph-theoretic framework describing the connecting
architectures of RNNs in general. Second, we propose three architecture
complexity measures of RNNs: (a) the recurrent depth, which captures the RNN's
over-time nonlinear complexity, (b) the feedforward depth, which captures the
local input-output nonlinearity (similar to the "depth" in feedforward neural
networks (FNNs)), and (c) the recurrent skip coefficient which captures how
rapidly the information propagates over time. We rigorously prove each
measure's existence and computability. Our experimental results show that RNNs
might benefit from larger recurrent depth and feedforward depth. We further
demonstrate that increasing recurrent skip coefficient offers performance
boosts on long term dependency problems. | [] | [
"Language Modelling"
] | [] | [
"Text8"
] | [
"Bit per Character (BPC)"
] | Architectural Complexity Measures of Recurrent Neural Networks |
We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring high-intensity pixels during the blur process. Our analysis shows that this change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring in various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably with methods that are well-engineered for specific scenarios. | [] | [
"Blind Image Deblurring",
"Deblurring"
] | [] | [
"RealBlur-J (trained on GoPro)",
"RealBlur-R (trained on GoPro)"
] | [
"SSIM (sRGB)",
"PSNR (sRGB)"
] | Blind Image Deblurring Using Dark Channel Prior |
Images taken in low-light conditions with handheld cameras are often blurry due to the required long exposure time. Although significant progress has been made recently on image deblurring, state-of-the-art approaches often fail on low-light images, as these images do not contain a sufficient number of salient features that deblurring methods rely on. On the other hand, light streaks are common phenomena in low-light images that contain rich blur information, but have not been extensively explored in previous approaches. In this work, we propose a new method that utilizes light streaks to help deblur low-light images. We introduce a non-linear blur model that explicitly models light streaks and their underlying light sources, and poses them as constraints for estimating the blur kernel in an optimization framework. Our method also automatically detects useful light streaks in the input image. Experimental results show that our approach obtains good results on challenging real-world examples that no other methods could achieve before. | [] | [
"Deblurring"
] | [] | [
"RealBlur-J (trained on GoPro)",
"RealBlur-R (trained on GoPro)"
] | [
"SSIM (sRGB)",
"PSNR (sRGB)"
] | Deblurring Low-light Images with Light Streaks |
Electroencephalograph (EEG) emotion recognition is a significant task in the brain-computer interface field. Although many deep learning methods are proposed recently, it is still challenging to make full use of the information contained in different domains of EEG signals. In this paper, we present a novel method, called four-dimensional attention-based neural network (4D-aNN) for EEG emotion recognition. First, raw EEG signals are transformed into 4D spatial-spectral-temporal representations. Then, the proposed 4D-aNN adopts spectral and spatial attention mechanisms to adaptively assign the weights of different brain regions and frequency bands, and a convolutional neural network (CNN) is utilized to deal with the spectral and spatial information of the 4D representations. Moreover, a temporal attention mechanism is integrated into a bidirectional Long Short-Term Memory (LSTM) to explore temporal dependencies of the 4D representations. Our model achieves state-of-the-art performance on the SEED dataset under intra-subject splitting. The experimental results have shown the effectiveness of the attention mechanisms in different domains for EEG emotion recognition. | [] | [
"EEG",
"Emotion Recognition"
] | [] | [
"SEED"
] | [
"Accuracy"
] | 4D Attention-based Neural Network for EEG Emotion Recognition |
Pre-trained language models have proven their unique powers in capturing implicit language features. However, most pre-training approaches focus on the word-level training objective, while sentence-level objectives are rarely studied. In this paper, we propose Contrastive LEArning for sentence Representation (CLEAR), which employs multiple sentence-level augmentation strategies in order to learn a noise-invariant sentence representation. These augmentations include word and span deletion, reordering, and substitution. Furthermore, we investigate the key reasons that make contrastive learning effective through numerous experiments. We observe that different sentence augmentations during pre-training lead to different performance improvements on various downstream tasks. Our approach is shown to outperform multiple existing methods on both SentEval and GLUE benchmarks. | [] | [
"Linguistic Acceptability",
"Natural Language Inference",
"Question Answering",
"Semantic Textual Similarity",
"Sentiment Analysis"
] | [] | [
"MultiNLI",
"SST-2 Binary classification",
"RTE",
"MRPC",
"STS Benchmark",
"CoLA",
"QNLI",
"Quora Question Pairs"
] | [
"Pearson Correlation",
"Accuracy"
] | CLEAR: Contrastive Learning for Sentence Representation |
Wearable devices that acquire photoplethysmographic (PPG) signals are becoming increasingly popular to monitor the heart rate during physical exercise. However, high accuracy and low computational complexity are conflicting requirements. We propose a method that provides highly accurate heart rate estimates at a very low computational cost in order to be implementable on wearables. To achieve the lowest possible complexity, only basic signal processing operations, i.e., correlation-based fundamental frequency estimation and spectral combination, harmonic noise damping and frequency domain tracking, are used. The proposed approach outperforms state-of-the-art methods on current benchmark data considerably in terms of computation time, while achieving a similar accuracy. | [] | [
"Heart rate estimation",
"Photoplethysmography (PPG)"
] | [] | [
"PPG-DaLiA",
"WESAD"
] | [
"MAE [bpm, session-wise]"
] | Computationally efficient heart rate estimation during physical exercise using photoplethysmographic signals |
In this work we propose a novel deep-learning approach for age estimation based on face images. We first introduce a dual image augmentation-aggregation approach based on attention. This allows the network to jointly utilize multiple face image augmentations whose embeddings are aggregated by a Transformer-Encoder. The resulting aggregated embedding is shown to better encode the face image attributes. We then propose a probabilistic hierarchical regression framework that combines a discrete probabilistic estimate of age labels, with a corresponding ensemble of regressors. Each regressor is particularly adapted and trained to refine the probabilistic estimate over a range of ages. Our scheme is shown to outperform contemporary schemes and provide a new state-of-the-art age estimation accuracy, when applied to the MORPH II dataset for age estimation. Last, we introduce a bias analysis of state-of-the-art age estimation results. | [] | [] | [] | [
"MORPH Album2 (RS)",
"MORPH Album2 (SE)"
] | [
"MAE"
] | Hierarchical Attention-based Age Estimation and Bias Estimation |
Facial Expression Recognition (FER) is a classification task that points to face variants. Hence, there are certain intimate relationships between facial expressions. We call them affinity features, which are barely taken into account by current FER algorithms. Besides, to capture the edge information of the image, Convolutional Neural Networks (CNNs) generally utilize a host of edge paddings. Although they are desirable, the feature map is deeply eroded after multi-layer convolution. We name what has formed in this process the albino features, which definitely weaken the representation of the expression. To tackle these challenges, we propose a novel architecture named Amend Representation Module (ARM). ARM is a substitute for the pooling layer. Theoretically, it could be embedded in any CNN with a pooling layer. ARM efficiently enhances facial expression representation from two different directions: 1) reducing the weight of eroded features to offset the side effect of padding, and 2) sharing affinity features over mini-batch to strengthen the representation learning. In terms of data imbalance, we designed a minimal random resampling (MRR) scheme to suppress network overfitting. Experiments on public benchmarks prove that our ARM boosts the performance of FER remarkably. The validation accuracies are respectively 90.55% on RAF-DB, 64.49% on Affect-Net, and 71.38% on FER2013, exceeding current state-of-the-art methods. | [] | [] | [] | [
"RAF-DB",
"AffectNet",
"FER2013"
] | [
"Overall Accuracy",
"Accuracy (8 emotion)",
"Accuracy (7 emotion)",
"Avg. Accuracy",
"Accuracy"
] | Learning to Amend Facial Expression Representation via De-albino and Affinity |
Neural architecture search (NAS) has witnessed prevailing success in image classification and (very recently) segmentation tasks. In this paper, we present the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GANs), dubbed AutoGAN. The marriage of NAS and GANs faces its unique challenges. We define the search space for the generator architectural variations and use an RNN controller to guide the search, with parameter sharing and dynamic-resetting to accelerate the process. Inception score is adopted as the reward, and a multi-level search strategy is introduced to perform NAS in a progressive way. Experiments validate the effectiveness of AutoGAN on the task of unconditional image generation. Specifically, our discovered architectures achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs, e.g., setting new state-of-the-art FID scores of 12.42 on CIFAR-10, and 31.01 on STL-10, respectively. We also conclude with a discussion of the current limitations and future potential of AutoGAN. The code is available at https://github.com/TAMU-VITA/AutoGAN | [] | [
"Image Classification",
"Image Generation",
"Neural Architecture Search"
] | [] | [
"STL-10",
"CIFAR-10"
] | [
"Inception score",
"FID"
] | AutoGAN: Neural Architecture Search for Generative Adversarial Networks |
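The search loop can be pictured as a REINFORCE-style RNN controller that samples generator operations and is rewarded with the candidate's score. In this hedged toy version the reward function is only a stand-in for the Inception Score of the (parameter-shared) candidate, and the operation list and hyper-parameters are assumptions rather than AutoGAN's actual search space.

# Minimal REINFORCE-style controller sketch in the spirit described above
# (not the AutoGAN implementation).
import torch
import torch.nn as nn
from torch.distributions import Categorical

OPS = ["conv3x3", "conv5x5", "deconv", "nearest_up", "bilinear_up"]

class Controller(nn.Module):
    def __init__(self, hidden=64, num_cells=3):
        super().__init__()
        self.num_cells = num_cells
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.embed = nn.Embedding(len(OPS) + 1, hidden)  # +1 for a start token
        self.head = nn.Linear(hidden, len(OPS))

    def sample(self):
        h = c = torch.zeros(1, self.rnn.hidden_size)
        inp = self.embed(torch.tensor([len(OPS)]))       # start token
        ops, log_probs = [], []
        for _ in range(self.num_cells):
            h, c = self.rnn(inp, (h, c))
            dist = Categorical(logits=self.head(h))
            op = dist.sample()
            ops.append(OPS[op.item()])
            log_probs.append(dist.log_prob(op))
            inp = self.embed(op)
        return ops, torch.stack(log_probs).sum()

def fake_inception_score(arch):
    # Placeholder: in the real setting this would train/evaluate the sampled
    # generator (with parameter sharing) and return its Inception Score.
    return 1.0 + 0.1 * arch.count("deconv")

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=3e-4)
baseline = 0.0
for step in range(5):
    arch, log_prob = controller.sample()
    reward = fake_inception_score(arch)
    baseline = 0.9 * baseline + 0.1 * reward
    loss = -(reward - baseline) * log_prob           # REINFORCE update
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, arch, round(reward, 3))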
In interactive object segmentation, a user collaborates with a computer vision model to segment an object. Recent works employ convolutional neural networks for this task: Given an image and a set of corrections made by the user as input, they output a segmentation mask. These approaches achieve strong performance by training on large datasets but they keep the model parameters unchanged at test time. Instead, we recognize that user corrections can serve as sparse training examples and we propose a method that capitalizes on that idea to update the model parameters on-the-fly to the data at hand. Our approach enables the adaptation to a particular object and its background, to distribution shifts in a test set, to specific object classes, and even to large domain changes, where the imaging modality changes between training and testing. We perform extensive experiments on 8 diverse datasets and show: Compared to a model with frozen parameters, our method reduces the required corrections (i) by 9%-30% when distribution shifts are small between training and testing; (ii) by 12%-44% when specializing to a specific class; and (iii) by 60% and 77% when we completely change the domain between training and testing. | [] | [
"Interactive Segmentation",
"Semantic Segmentation"
] | [] | [
"DRIONS-DB",
"Rooftop",
"Berkeley",
"DAVIS",
"GrabCut"
] | [
"NoC@90",
"NoC@85",
"NoC@80"
] | Continuous Adaptation for Interactive Object Segmentation by Learning from Corrections |
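The core idea, user corrections as sparse training examples, can be sketched as a few test-time gradient steps computed only at the clicked pixels; the function name, loss, optimizer, and the toy stand-in network below are assumptions and not the authors' exact adaptation procedure.

# Hedged sketch of the idea above: treat sparse user corrections as labelled
# pixels and take a few gradient steps on the model at test time, so it adapts
# to the object at hand.
import torch
import torch.nn.functional as F

def adapt_to_corrections(model, image, corrections, steps=5, lr=1e-4):
    """corrections: list of (y, x, label) clicks with label in {0, 1}."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    ys = torch.tensor([c[0] for c in corrections])
    xs = torch.tensor([c[1] for c in corrections])
    labels = torch.tensor([c[2] for c in corrections])
    model.train()
    for _ in range(steps):
        logits = model(image.unsqueeze(0))          # (1, 2, H, W) fg/bg logits
        clicked = logits[0, :, ys, xs].t()          # (num_clicks, 2)
        loss = F.cross_entropy(clicked, labels)     # supervise only clicked pixels
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    model.eval()
    return model

if __name__ == "__main__":
    # Tiny stand-in segmentation network, just to make the sketch runnable.
    net = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)
    img = torch.randn(3, 64, 64)
    adapt_to_corrections(net, img, [(10, 12, 1), (40, 50, 0)])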
Temporal action proposal generation is an important task, aiming to localize
the video segments containing human actions in an untrimmed video. In this
paper, we propose a multi-granularity generator (MGG) to perform temporal
action proposal generation from different granularity perspectives, relying on the video
visual features equipped with the position embedding information. First, we
propose to use a bilinear matching model to exploit the rich local information
within the video sequence. Afterwards, two components, namely segment proposal
producer (SPP) and frame actionness producer (FAP), are combined to perform the
task of temporal action proposal at two distinct granularities. SPP considers
the whole video in the form of feature pyramid and generates segment proposals
from one coarse perspective, while FAP carries out a finer actionness
evaluation for each video frame. Our proposed MGG can be trained in an
end-to-end fashion. By temporally adjusting the segment proposals with
fine-grained frame actionness information, MGG achieves superior
performance over state-of-the-art methods on the public THUMOS-14 and
ActivityNet-1.3 datasets. Moreover, we employ existing action classifiers to
perform classification of the proposals generated by MGG, leading to
significant improvements over competing methods on the video
detection task. | [] | [
"Action Recognition",
"Temporal Action Proposal Generation"
] | [] | [
"ActivityNet-1.3",
"THUMOS’14"
] | [
"mAP@0.3",
"AUC (val)",
"mAP@0.4",
"mAP@0.5",
"AR@100"
] | Multi-granularity Generator for Temporal Action Proposal |
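One way to picture the coarse-to-fine fusion described above is a sketch that snaps the boundaries of coarse segment proposals to nearby frames with high frame-level actionness; the search window, threshold, and snapping rule are illustrative simplifications, not the temporal adjustment used in MGG.

# Illustrative sketch (not the MGG model): refine coarse segment proposals by
# snapping their boundaries to nearby frames whose actionness score is high.
import numpy as np

def refine_proposals(proposals, actionness, search=8, thresh=0.5):
    """proposals: list of (start, end) frame indices; actionness: (T,) scores in [0, 1]."""
    refined = []
    T = len(actionness)
    for start, end in proposals:
        lo, hi = max(0, start - search), min(T, start + search + 1)
        window = actionness[lo:hi]
        new_start = lo + int(np.argmax(window)) if window.max() >= thresh else start
        lo, hi = max(0, end - search), min(T, end + search + 1)
        window = actionness[lo:hi]
        new_end = lo + int(np.argmax(window)) if window.max() >= thresh else end
        if new_end > new_start:
            refined.append((new_start, new_end))
    return refined

if __name__ == "__main__":
    act = np.clip(np.sin(np.linspace(0, 3 * np.pi, 100)), 0, 1)  # toy actionness curve
    print(refine_proposals([(10, 45), (60, 90)], act))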
We present a Temporal Context Network (TCN) for precise temporal localization
of human activities. Similar to the Faster-RCNN architecture, proposals
spanning multiple temporal scales are placed at equal intervals in the video. We
propose a novel representation for ranking these proposals. Since pooling
features only inside a segment is not sufficient to predict activity
boundaries, we construct a representation which explicitly captures context
around a proposal for ranking it. For each temporal segment inside a proposal,
features are uniformly sampled at a pair of scales and are input to a temporal
convolutional neural network for classification. After ranking proposals,
non-maximum suppression is applied and classification is performed to obtain
final detections. TCN outperforms state-of-the-art methods on the ActivityNet
dataset and the THUMOS14 dataset. | [] | [
"Temporal Localization"
] | [] | [
"THUMOS’14"
] | [
"mAP@0.4",
"mAP@0.5"
] | Temporal Context Network for Activity Localization in Videos |
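A simplified sketch of the context representation above: a proposal is scored from features pooled both inside the segment and over a wider window around it (a pair of temporal scales); the two-layer scorer and the context width are assumptions, not the TCN architecture.

# Sketch of the context idea above (a simplification, not TCN itself): score a
# temporal proposal from features pooled inside the segment and over a wider
# context window around it.
import torch
import torch.nn as nn

class ProposalRanker(nn.Module):
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                                    nn.Linear(256, 1))

    def pool(self, feats, start, end):
        return feats[max(start, 0):min(end, feats.shape[0])].mean(dim=0)

    def forward(self, feats, start, end):
        # feats: (T, feat_dim) per-frame features; (start, end): proposal in frames
        length = end - start
        inside = self.pool(feats, start, end)                                 # proposal scale
        context = self.pool(feats, start - length // 2, end + length // 2)    # wider scale
        return self.scorer(torch.cat([inside, context], dim=-1)).squeeze(-1)

if __name__ == "__main__":
    ranker = ProposalRanker()
    feats = torch.randn(200, 1024)
    print(ranker(feats, 50, 80).item())  # higher score = better-ranked proposal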
In this paper, we address the problem of detecting unseen objects from RGB images and estimating their poses in 3D. We propose two mobile-friendly networks: MobilePose-Base and MobilePose-Shape. The former is used when there is only pose supervision, and the latter is for the case when shape supervision is available, even a weak one. We revisit shape features used in previous methods, including segmentation and coordinate map. We explain when and why pixel-level shape supervision can improve pose estimation. Consequently, we add shape prediction as an intermediate layer in MobilePose-Shape, and let the network learn pose from shape. Our models are trained on mixed real and synthetic data, with weak and noisy shape supervision. They are ultra-lightweight and can run in real time on modern mobile devices (e.g. 36 FPS on a Galaxy S20). Compared with previous single-shot solutions, our method achieves higher accuracy while using a significantly smaller model (2~3% of the model size or number of parameters). | [] | [
"Monocular 3D Object Detection",
"Pose Estimation"
] | [] | [
"Google Objectron"
] | [
"Average Precision at 0.5 3D IoU",
"MPE",
"AP at 10' Elevation error",
"AP at 15' Azimuth error"
] | MobilePose: Real-Time Pose Estimation for Unseen Objects with Weak Shape Supervision |
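A hedged sketch of letting the network "learn pose from shape": an intermediate shape map is predicted and concatenated back into the features consumed by the pose head; the tiny backbone, channel sizes, and nine-keypoint output are illustrative assumptions, not the MobilePose networks.

# Sketch of the shape-as-intermediate-layer idea above (not the actual model).
import torch
import torch.nn as nn

class PoseWithShapeSketch(nn.Module):
    def __init__(self, feat_ch=32, num_keypoints=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU())
        self.shape_head = nn.Conv2d(feat_ch, 1, 1)        # intermediate shape prediction
        self.pose_head = nn.Sequential(
            nn.Conv2d(feat_ch + 1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, 2 * num_keypoints))        # 2D keypoints of the 3D box

    def forward(self, image):
        feats = self.backbone(image)
        shape = torch.sigmoid(self.shape_head(feats))             # supervised when labels exist
        keypoints = self.pose_head(torch.cat([feats, shape], 1))  # pose learned from shape
        return shape, keypoints

if __name__ == "__main__":
    net = PoseWithShapeSketch()
    shape, kps = net(torch.randn(1, 3, 128, 128))
    print(shape.shape, kps.shape)  # (1, 1, 32, 32) and (1, 18)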
In this paper we describe the TurkuNLP entry at the CoNLL 2018 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. Compared to last year, this year's shared task includes two new main metrics measuring morphological tagging and lemmatization accuracy in addition to syntactic trees. Motivated by these new metrics, we developed an end-to-end parsing pipeline with a particular focus on a novel, state-of-the-art lemmatization component. Our system reached the highest aggregate ranking on the three main metrics out of 26 teams, achieving 1st place on the metric involving lemmatization and 2nd on both morphological tagging and parsing. | [] | [
"Dependency Parsing",
"Lemmatization",
"Machine Translation",
"Morphological Tagging",
"Word Embeddings"
] | [] | [
"Universal Dependencies"
] | [
"UAS",
"BLEX",
"LAS"
] | Turku Neural Parser Pipeline: An End-to-End System for the CoNLL 2018 Shared Task |
Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into an EASY set, with the rest forming a HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset, achieving high accuracy on the EASY set. However, they struggle on the HARD set, with performance close to random guessing, indicating that more research is needed to truly enhance the logical reasoning ability of current models. | [] | [
"Logical Reasoning Question Answering",
"Logical Reasoning Reading Comprehension",
"Machine Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | [] | [
"ReClor"
] | [
"Accuracy",
"Accuracy (hard)",
"Accuracy (easy)",
"Test"
] | ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning |
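The EASY/HARD split can be illustrated with a sketch that marks a question as biased when a context-blind baseline, seeing only the answer options, already picks the correct one; the exact criterion used to build ReClor's split may differ, and the toy "longest option" model below is purely illustrative.

# Hedged sketch of the EASY/HARD splitting idea described above.
def split_easy_hard(dataset, option_only_model):
    """dataset: iterable of dicts with 'options' (list of str) and 'label' (int)."""
    easy, hard = [], []
    for example in dataset:
        predicted = option_only_model(example["options"])   # sees no context or question
        (easy if predicted == example["label"] else hard).append(example)
    return easy, hard

if __name__ == "__main__":
    data = [{"options": ["a", "bb", "c", "d"], "label": 1},
            {"options": ["x", "y", "zzz", "w"], "label": 0}]
    # Toy "bias" model: always pick the longest option.
    longest = lambda opts: max(range(len(opts)), key=lambda i: len(opts[i]))
    easy, hard = split_easy_hard(data, longest)
    print(len(easy), len(hard))  # 1 1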
Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online. | [] | [
"3D Object Detection",
"Autonomous Driving",
"Autonomous Vehicles",
"Object Detection"
] | [] | [
"nuScenes"
] | [
"NDS"
] | nuScenes: A multimodal dataset for autonomous driving |
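For reference, the composite nuScenes Detection Score (NDS) listed in the metrics can be sketched as a weighted blend of mean AP and five true-positive error terms (translation, scale, orientation, velocity, attribute), each clipped to [0, 1]; the example numbers below are made up.

# Sketch of the composite nuScenes Detection Score (NDS).
def nds(mean_ap, tp_errors):
    """tp_errors: dict of the five mTP errors, e.g. {'ATE': 0.3, 'ASE': 0.2, ...}."""
    assert len(tp_errors) == 5
    tp_scores = [1.0 - min(1.0, err) for err in tp_errors.values()]  # clip each error to [0, 1]
    return (5.0 * mean_ap + sum(tp_scores)) / 10.0

if __name__ == "__main__":
    print(nds(0.45, {"ATE": 0.35, "ASE": 0.26, "AOE": 0.40, "AVE": 0.30, "AAE": 0.12}))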
We present the Frontier Aware Search with backTracking (FAST) Navigator, a
general framework for action decoding that achieves state-of-the-art results
on the Room-to-Room (R2R) Vision-and-Language Navigation challenge of Anderson
et al. (2018). Given a natural language instruction and photo-realistic image
views of a previously unseen environment, the agent was tasked with navigating
from source to target location as quickly as possible. While all current
approaches make local action decisions or score entire trajectories using beam
search, ours balances local and global signals when exploring an unobserved
environment. Importantly, this lets us act greedily but use global signals to
backtrack when necessary. Applying the FAST framework to existing state-of-the-art
models achieved a 17% relative gain, an absolute 6% gain in Success rate
weighted by Path Length (SPL). | [] | [
"Vision and Language Navigation",
"Vision-Language Navigation"
] | [] | [
"Room2Room",
"VLN Challenge"
] | [
"length",
"spl",
"oracle success",
"success",
"error"
] | Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation |
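The frontier-with-backtracking idea can be sketched as a best-first search that expands neighbours greedily by a local score but keeps every unexpanded partial path in a global frontier to return to; the graph, scoring functions, and stopping rule below are toy assumptions, not the FAST navigator.

# Hedged sketch of frontier-aware search with backtracking (not the FAST code).
import heapq

def frontier_search(start, neighbors, local_score, global_score, is_goal, max_steps=100):
    frontier = [(-global_score([start]), [start])]   # max-heap via negated scores
    visited = {start}
    for _ in range(max_steps):
        if not frontier:
            break
        _, path = heapq.heappop(frontier)            # backtrack to the best partial path
        node = path[-1]
        if is_goal(node):
            return path
        for nxt in sorted(neighbors(node), key=local_score, reverse=True):
            if nxt not in visited:
                visited.add(nxt)
                new_path = path + [nxt]
                heapq.heappush(frontier, (-global_score(new_path), new_path))
    return None

if __name__ == "__main__":
    grid = {0: [1, 2], 1: [3], 2: [4], 3: [5], 4: [], 5: []}
    path = frontier_search(0, lambda n: grid[n], local_score=lambda n: n,
                           global_score=lambda p: len(p), is_goal=lambda n: n == 5)
    print(path)  # [0, 1, 3, 5]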
Commonsense reasoning is fundamental to natural language understanding. While
traditional methods rely heavily on human-crafted features and knowledge bases,
we explore learning commonsense knowledge from a large amount of raw text via
unsupervised learning. We propose two neural network models based on the Deep
Structured Semantic Models (DSSM) framework to tackle two classic commonsense
reasoning tasks, the Winograd Schema Challenge (WSC) and Pronoun Disambiguation
Problems (PDP). Evaluation shows that the proposed models effectively capture contextual
information in the sentence and co-reference information between pronouns and
nouns, and achieve significant improvement over previous state-of-the-art
approaches. | [] | [
"Common Sense Reasoning",
"Natural Language Understanding"
] | [] | [
"PDP60",
"Winograd Schema Challenge"
] | [
"Score",
"Accuracy"
] | Unsupervised Deep Structured Semantic Models for Commonsense Reasoning |
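In the spirit of the DSSM-style models above, a minimal two-tower sketch embeds the pronoun's context and each candidate antecedent separately and ranks candidates by cosine similarity; the bag-of-words encoders and dimensions are assumptions, not the paper's architecture.

# Illustrative two-tower scorer (not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerScorer(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.context_tower = nn.EmbeddingBag(vocab_size, dim)    # mean of word embeddings
        self.candidate_tower = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, context_ids, candidate_ids):
        ctx = self.context_tower(context_ids.unsqueeze(0))       # (1, dim)
        cands = self.candidate_tower(candidate_ids)              # (num_candidates, dim)
        return F.cosine_similarity(ctx, cands, dim=-1)           # one score per candidate

if __name__ == "__main__":
    scorer = TwoTowerScorer()
    context = torch.randint(0, 1000, (12,))                      # tokens around the pronoun
    candidates = torch.randint(0, 1000, (2, 3))                  # two 3-token candidate nouns
    scores = scorer(context, candidates)
    print(scores, scores.argmax().item())                        # pick the best antecedent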
Convolutional neural networks have witnessed remarkable improvements in computational efficiency in recent years. A key driving force has been the idea of trading off model expressivity and efficiency through a combination of $1\times 1$ and depth-wise separable convolutions in lieu of a standard convolutional layer. The price of this efficiency, however, is sub-optimal flow of information across space and channels in the network. To overcome this limitation, we present MUXConv, a layer that is designed to increase the flow of information by progressively multiplexing channel and spatial information in the network, while mitigating computational complexity. Furthermore, to demonstrate the effectiveness of MUXConv, we integrate it within an efficient multi-objective evolutionary algorithm to search for the optimal model hyper-parameters while simultaneously optimizing accuracy, compactness, and computational efficiency. On ImageNet, the resulting models, dubbed MUXNets, match the performance (75.3% top-1 accuracy) and multiply-add operations (218M) of MobileNetV3 while being 1.6$\times$ more compact, and outperform other mobile models in all three criteria. MUXNet also performs well under transfer learning and when adapted to object detection. On the ChestX-Ray 14 benchmark, its accuracy is comparable to the state-of-the-art while being $3.3\times$ more compact and $14\times$ more efficient. Similarly, on PASCAL VOC 2007 detection it is 1.2% more accurate, 28% faster, and 6% more compact than MobileNetV2. Code is available from https://github.com/human-analysis/MUXConv | [] | [
"Image Classification",
"Neural Architecture Search",
"Object Detection",
"Pneumonia Detection",
"Semantic Segmentation",
"Transfer Learning"
] | [] | [
"ChestX-ray14",
"CIFAR-10 Image Classification",
"ADE20K",
"CIFAR-100",
"CIFAR-10",
"ImageNet"
] | [
"Number of params",
"AUROC",
"Validation mIoU",
"Top 1 Accuracy",
"Percentage error",
"MACs",
"Percentage correct",
"Top-1 Error Rate",
"Params",
"FLOPS",
"Parameters",
"PARAMS",
"Top 5 Accuracy",
"Accuracy",
"Percentage Error"
] | MUXConv: Information Multiplexing in Convolutional Neural Networks |
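A loose sketch of "multiplexing" spatial and channel information: fold spatial positions into channels, mix them with a cheap 1x1 convolution, shuffle channels across groups, and unfold back; this illustrates the general idea only and is not the actual MUXConv layer or its searched hyper-parameters.

# Loose sketch of spatial/channel multiplexing (not the real MUXConv layer).
import torch
import torch.nn as nn

class MuxSketch(nn.Module):
    def __init__(self, channels, groups=4):
        super().__init__()
        self.groups = groups
        self.down = nn.PixelUnshuffle(2)                   # space -> channels
        self.mix = nn.Conv2d(channels * 4, channels * 4, kernel_size=1)
        self.up = nn.PixelShuffle(2)                       # channels -> space

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.mix(self.down(x))                         # mix spatially-multiplexed channels
        y = y.view(b, self.groups, -1, h // 2, w // 2)     # channel shuffle across groups
        y = y.transpose(1, 2).reshape(b, -1, h // 2, w // 2)
        return self.up(y)

if __name__ == "__main__":
    block = MuxSketch(channels=16)
    print(block(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])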
Learning to follow instructions is of fundamental importance to autonomous agents for vision-and-language navigation (VLN). In this paper, we study how an agent can navigate long paths when learning from a corpus that consists of shorter ones. We show that existing state-of-the-art agents do not generalize well. To this end, we propose BabyWalk, a new VLN agent that learns to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially. A specially designed memory buffer is used by the agent to turn its past experiences into context for future steps. The learning process is composed of two phases. In the first phase, the agent uses imitation learning from demonstration to accomplish BabySteps. In the second phase, the agent uses curriculum-based reinforcement learning to maximize rewards on navigation tasks with increasingly longer instructions. We create two new benchmark datasets (of long navigation tasks) and use them in conjunction with existing ones to examine BabyWalk's generalization ability. Empirical results show that BabyWalk achieves state-of-the-art results on several metrics and, in particular, is able to follow long instructions better. The code and datasets are released on our project page https://github.com/Sha-Lab/babywalk. | [] | [
"Imitation Learning",
"Vision and Language Navigation"
] | [] | [
"Cooperative Vision-and-Dialogue Navigation"
] | [
"spl",
"dist_to_end_reduction"
] | BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps |
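The decomposition loop can be sketched as splitting a long instruction into shorter "baby steps", executing them one at a time, and summarising completed steps into a memory that conditions the next one; the comma-based splitter, toy environment, and policy signature are assumptions, not the BabyWalk agent.

# Hedged sketch of the BabySteps-with-memory loop (not the actual agent).
def babywalk_sketch(instruction, policy, env, summarize):
    """policy(step_text, memory, observation) -> list of actions; summarize -> memory entry."""
    baby_steps = [s.strip() for s in instruction.split(",") if s.strip()]  # naive splitter
    memory, trajectory = [], []
    observation = env.reset()
    for step_text in baby_steps:
        actions = policy(step_text, memory, observation)
        for action in actions:
            observation = env.step(action)
            trajectory.append(action)
        memory.append(summarize(step_text, actions))    # past experience as context
    return trajectory

class ToyEnv:
    def reset(self): return "start"
    def step(self, action): return f"after-{action}"

if __name__ == "__main__":
    instr = "walk past the sofa, turn left at the kitchen, stop by the table"
    policy = lambda text, mem, obs: ["forward"] * 2 if "walk" in text else ["turn"]
    print(babywalk_sketch(instr, policy, ToyEnv(), lambda t, a: (t, len(a))))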
Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/ | [] | [
"Visual Navigation"
] | [] | [
"Cooperative Vision-and-Dialogue Navigation"
] | [
"spl",
"dist_to_end_reduction"
] | Vision-and-Dialog Navigation |
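The observation that looking farther back in the dialog history helps can be illustrated with a small sketch that builds the navigator's text input from the target object plus the last k question/answer exchanges; the separator token and formatting are assumptions, not the paper's sequence-to-sequence model.

# Small sketch of assembling a dialog-history input for the navigator.
def build_history_input(target_object, dialog, k=3):
    """dialog: list of (question, answer) pairs, oldest first."""
    turns = dialog[-k:] if k > 0 else []
    text = [f"find the {target_object}"]
    for question, answer in turns:
        text.append(f"navigator: {question}")
        text.append(f"oracle: {answer}")
    return " <sep> ".join(text)

if __name__ == "__main__":
    dialog = [("where do I go?", "head upstairs"),
              ("left or right?", "turn right at the hall")]
    print(build_history_input("plant", dialog, k=2))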