Dataset schema (field name, type, value-length range):

  paper_id         string (10 chars)
  paper_url        string (37-80 chars)
  title            string (4-518 chars)
  abstract         string (3-7.27k chars)
  arxiv_id         string (9-16 chars, or null)
  url_abs          string (18-601 chars)
  url_pdf          string (21-601 chars)
  aspect_tasks     sequence of strings
  aspect_methods   sequence of strings
  aspect_datasets  sequence of strings
_KV22_kfg3
https://paperswithcode.com/paper/hashgan-deep-learning-to-hash-with-pair
HashGAN: Deep Learning to Hash With Pair Conditional Wasserstein GAN
Deep learning to hash improves image retrieval performance by end-to-end representation learning and hash coding from training data with pairwise similarity information. Subject to the scarcity of similarity information that is often expensive to collect for many application domains, existing deep learning to hash methods may overfit the training data and result in substantial loss of retrieval quality. This paper presents HashGAN, a novel architecture for deep learning to hash, which learns compact binary hash codes from both real images and diverse images synthesized by generative models. The main idea is to augment the training data with nearly real images synthesized from a new Pair Conditional Wasserstein GAN (PC-WGAN) conditioned on the pairwise similarity information. Extensive experiments demonstrate that HashGAN can generate high-quality binary hash codes and yield state-of-the-art image retrieval performance on three benchmarks, NUS-WIDE, CIFAR-10, and MS-COCO.
null
http://openaccess.thecvf.com/content_cvpr_2018/html/Cao_HashGAN_Deep_Learning_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Cao_HashGAN_Deep_Learning_CVPR_2018_paper.pdf
[ "Image Retrieval", "Representation Learning" ]
[ "Convolution", "GAN" ]
[]
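The pairwise-similarity objective that deep hashing methods like the one above build on can be sketched in a few lines. This is a generic similarity-preserving loss over relaxed (continuous) hash codes, not the paper's exact PC-WGAN objective; all tensor shapes and names are illustrative.

```python
import torch

def pairwise_hash_loss(codes: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
    """Generic similarity-preserving loss for deep hashing.

    codes: (N, K) continuous relaxations of K-bit hash codes (e.g. tanh outputs).
    sim:   (N, N) binary pairwise similarity matrix (1 = similar, 0 = dissimilar).
    """
    # Inner products approximate code agreement; the 0.5 scale mirrors
    # common pairwise-likelihood formulations.
    inner = 0.5 * codes @ codes.t()
    # Negative log-likelihood of the similarity labels under a logistic
    # model: log(1 + e^x) - s * x, averaged over pairs.
    loss = torch.nn.functional.softplus(inner) - sim * inner
    return loss.mean()

# Toy usage: 4 images with 16-bit relaxed codes.
codes = torch.tanh(torch.randn(4, 16, requires_grad=True))
sim = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 0., 0.],
                    [0., 0., 1., 1.],
                    [0., 0., 1., 1.]])
print(pairwise_hash_loss(codes, sim))
```

In HashGAN this kind of pairwise loss is trained on both real and GAN-synthesized images; the sketch shows only the similarity-preserving term.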
qGCiIITDFS
https://paperswithcode.com/paper/navigational-rule-derivation-an-algorithm-to
Navigational Rule Derivation: An algorithm to determine the effect of traffic signs on road networks
In this paper we present an algorithm to build a road network map enriched with traffic rules, such as one-way streets and forbidden turns, based on the interpretation of already detected and classified traffic signs. Such an algorithm helps to automate the creation of maps for commercial navigation systems. Our solution is based on simulating navigation along the road network, determining at each point of interest the visibility of the signs and their effect on the roads. We test our approach in a small urban network and discuss various ways to generalize it to support more complex environments.
1611.06108
http://arxiv.org/abs/1611.06108v1
http://arxiv.org/pdf/1611.06108v1.pdf
[]
[]
[]
gAUvzC3ITa
https://paperswithcode.com/paper/data-augmentation-for-context-sensitive
Training Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in low-resource languages. In addition (as shown here), in a low-resource setting, a lemmatizer can learn more from $n$ labeled examples of distinct words (types) than from $n$ (contiguous) labeled tokens, since the latter contain far fewer distinct types. To combine the efficiency of type-based learning with the benefits of context, we propose a way to train a context-sensitive lemmatizer with little or no labeled corpus data, using inflection tables from the UniMorph project and raw text examples from Wikipedia that provide sentence contexts for the unambiguous UniMorph examples. Despite these being unambiguous examples, the model successfully generalizes from them, leading to improved results (both overall, and especially on unseen words) in comparison to a baseline that does not use context.
1904.01464
https://arxiv.org/abs/1904.01464v3
https://arxiv.org/pdf/1904.01464v3.pdf
[ "Data Augmentation", "Lemmatization" ]
[]
[]
JtXw7eM1Nd
https://paperswithcode.com/paper/quantum-neural-network-and-soft-quantum
Quantum Neural Network and Soft Quantum Computing
A new paradigm of quantum computing, namely, soft quantum computing, is proposed for nonclassical computation using real-world quantum systems with naturally occurring environment-induced decoherence and dissipation. As a specific example of soft quantum computing, we suggest a quantum neural network, where the neurons connect pairwise via the "controlled Kraus operations", hoping to pave an easier and more realistic way to quantum artificial intelligence and even to a better understanding of certain functioning of the human brain. Our quantum neuron model mimics realistic neurons as much as possible while using quantum laws for processing information. The quantum features of the noisy neural network are uncovered by the presence of quantum discord and by the non-commutativity of quantum operations. We believe that our model puts quantum computing into a wider context and inspires the hope of building a soft quantum computer much earlier than the standard one.
1810.05025
http://arxiv.org/abs/1810.05025v1
http://arxiv.org/pdf/1810.05025v1.pdf
[]
[]
[]
MzHVMXsQLK
https://paperswithcode.com/paper/genegan-learning-object-transfiguration-and
GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data
Object transfiguration replaces an object in an image with another object from a second image. For example, it can perform tasks like "putting exactly those eyeglasses from image A on the nose of the person in image B". The use of exemplar images allows more precise specification of desired modifications and improves the diversity of conditional image generation. However, previous methods that rely on feature space operations require paired data and/or appearance models for training or for disentangling objects from the background. In this work, we propose a model that can learn object transfiguration from two unpaired sets of images: one set containing images that "have" that kind of object, and the other set being the opposite, with the mild constraint that the objects be located approximately at the same place. For example, the training data can be one set of reference face images that have eyeglasses, and another set of images that do not, both of which are spatially aligned by face landmarks. Despite the weak 0/1 labels, our model can learn an "eyeglasses" subspace that contains multiple representatives of different types of glasses. Consequently, we can perform fine-grained control of generated images, like swapping the glasses in two images by swapping the projected components in the "eyeglasses" subspace, to create novel images of people wearing eyeglasses. Overall, our deterministic generative model learns disentangled attribute subspaces from weakly labeled data by adversarial training. Experiments on the CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real-world data, generating images with specified eyeglasses, smiles, hair styles, and lighting conditions. The code is available online.
1705.04932
http://arxiv.org/abs/1705.04932v1
http://arxiv.org/pdf/1705.04932v1.pdf
[ "Conditional Image Generation", "Image Generation" ]
[]
[]
gKPn18xY_J
https://paperswithcode.com/paper/a-rotation-equivariant-convolutional-neural
A rotation-equivariant convolutional neural network model of primary visual cortex
Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.
1809.10504
http://arxiv.org/abs/1809.10504v1
http://arxiv.org/pdf/1809.10504v1.pdf
[]
[]
[]
61R9S6kswx
https://paperswithcode.com/paper/robust-subspace-clustering
Robust subspace clustering
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern Recognition, CVPR (2009) 2790-2797] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
1301.2603
http://arxiv.org/abs/1301.2603v3
http://arxiv.org/pdf/1301.2603v3.pdf
[]
[]
[]
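As a companion to the abstract above, here is a minimal sketch of the sparse subspace clustering (SSC) pipeline the paper builds on: each point is expressed as a sparse combination of the other points via a lasso program, and the resulting coefficients define an affinity for spectral clustering. The lambda value and toy data are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, lam=0.05):
    """Minimal SSC sketch: X is (n_samples, n_features)."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Express point i as a sparse combination of the other points
        # (lasso plays the role of the l1 program in SSC).
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])
        C[i, others] = lasso.coef_
    # Symmetrize the coefficients into an affinity and spectrally cluster.
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)

# Toy data: two 1-D subspaces (lines) in R^3 plus small noise.
rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)
X = np.vstack([np.outer(rng.normal(size=20), u),
               np.outer(rng.normal(size=20), v)]) + 0.01 * rng.normal(size=(40, 3))
print(sparse_subspace_clustering(X, n_clusters=2))
```

The paper's contribution is the noise-robust theory for this kind of program, not the pipeline itself, which is shown here only for orientation.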
HFj70zxGtx
https://paperswithcode.com/paper/knowledge-guided-disambiguation-for-large
Knowledge Guided Disambiguation for Large-Scale Scene Classification with Multi-Resolution CNNs
Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place in the Places2 challenge at ILSVRC 2015 and first place in the LSUN challenge at CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks and obtain new state-of-the-art results on MIT Indoor67 (86.7\%) and SUN397 (72.0\%). We release the code and models at~\url{https://github.com/wanglimin/MRCNN-Scene-Recognition}.
1610.01119
http://arxiv.org/abs/1610.01119v2
http://arxiv.org/pdf/1610.01119v2.pdf
[ "Scene Classification", "Scene Recognition" ]
[]
[]
-7VqBYZwcq
https://paperswithcode.com/paper/learning-to-make-analogies-by-contrasting
Learning to Make Analogies by Contrasting Abstract Relational Structure
Analogical reasoning has been a principal focus of various waves of AI research. Analogy is particularly challenging for machines because it requires relational structures to be represented such that they can be flexibly applied across diverse domains of experience. Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data. We find that the critical factor for inducing such a capacity is not an elaborate architecture, but rather, careful attention to the choice of data and the manner in which it is presented to the model. The most robust capacity for analogical reasoning is induced when networks learn analogies by contrasting abstract relational structures in their input domains, a training method that uses only the input data to force models to learn about important abstract features. Using this technique we demonstrate capacities for complex, visual and symbolic analogy making and generalisation in even the simplest neural network architectures.
1902.00120
http://arxiv.org/abs/1902.00120v1
http://arxiv.org/pdf/1902.00120v1.pdf
[]
[]
[]
bno4tRxQfj
https://paperswithcode.com/paper/an-analysis-of-word2vec-for-the-italian
An Analysis of Word2Vec for the Italian Language
Word representation is fundamental in NLP tasks, because it is the encoding of semantic closeness between words that makes it possible to teach a machine to understand text. Despite the spread of word embedding concepts, achievements in linguistic contexts other than English remain few. In this work, we analyse the semantic capacity of the Word2Vec algorithm and produce an embedding for the Italian language. Parameter settings such as the number of epochs, the size of the context window, and the number of negatively backpropagated samples are explored.
2001.09332
https://arxiv.org/abs/2001.09332v1
https://arxiv.org/pdf/2001.09332v1.pdf
[]
[]
[]
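The hyperparameters the abstract says were explored (epochs, context window size, number of negative samples) map directly onto the gensim Word2Vec API. A minimal sketch follows, with a toy corpus standing in for the paper's Italian training data; all values are illustrative, not the paper's settings.

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized Italian sentences; in the paper's setting this
# would be a large Italian corpus.
sentences = [
    ["il", "gatto", "dorme", "sul", "divano"],
    ["il", "cane", "gioca", "in", "giardino"],
    ["la", "macchina", "corre", "sulla", "strada"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimensionality
    window=5,         # size of the context window (explored in the paper)
    negative=10,      # number of negative samples (explored in the paper)
    epochs=20,        # number of training epochs (explored in the paper)
    sg=1,             # skip-gram; an assumption, the paper may use CBOW
    min_count=1,
)
print(model.wv.most_similar("gatto", topn=3))
```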
mcwp8MhsbA
https://paperswithcode.com/paper/transformation-based-feature-computation-for
Transformation-based Feature Computation for Algorithm Portfolios
Instance-specific algorithm configuration and algorithm portfolios have been shown to offer significant improvements over single algorithm approaches in a variety of application domains. In the SAT and CSP domains algorithm portfolios have consistently dominated the main competitions in these fields for the past five years. For a portfolio approach to be effective there are two crucial conditions that must be met. First, there needs to be a collection of complementary solvers with which to make a portfolio. Second, there must be a collection of problem features that can accurately identify structural differences between instances. This paper focuses on the latter issue: feature representation, because, unlike SAT, not every problem has well-studied features. We employ the well-known SATzilla feature set, but compute alternative sets on different SAT encodings of CSPs. We show that regardless of what encoding is used to convert the instances, adequate structural information is maintained to differentiate between problem instances, and that this can be exploited to make an effective portfolio-based CSP solver.
1401.2474
http://arxiv.org/abs/1401.2474v1
http://arxiv.org/pdf/1401.2474v1.pdf
[]
[]
[]
ntRgpgw-sM
https://paperswithcode.com/paper/attention-with-intention-for-a-neural-network
Attention with Intention for a Neural Network Conversation Model
In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. It essentially consists of three recurrent networks. The encoder network is a word-level model representing source side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network that produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source side words when predicting a symbol in the response. The model is trained end-to-end without labeling data. Experiments show that this model generates natural responses to user inputs.
1510.08565
http://arxiv.org/abs/1510.08565v3
http://arxiv.org/pdf/1510.08565v3.pdf
[ "Language Modelling" ]
[]
[]
3ipJCKw4Oa
https://paperswithcode.com/paper/learning-signed-determinantal-point-processes
Learning Signed Determinantal Point Processes through the Principal Minor Assignment Problem
Symmetric determinantal point processes (DPPs) are a class of probabilistic models that encode the random selection of items that exhibit repulsive behavior. They have attracted a lot of attention in machine learning, where returning diverse sets of items is often sought. Sampling and learning these symmetric DPPs is fairly well understood. In this work, we consider a new class of DPPs, which we call signed DPPs, where we break the symmetry and allow attractive behaviors. We set the ground for learning signed DPPs through a method of moments, by solving the so-called principal minor assignment problem for a class of matrices $K$ that satisfy $K_{i,j}=\pm K_{j,i}$, $i\neq j$, in polynomial time.
null
http://papers.nips.cc/paper/7966-learning-signed-determinantal-point-processes-through-the-principal-minor-assignment-problem
http://papers.nips.cc/paper/7966-learning-signed-determinantal-point-processes-through-the-principal-minor-assignment-problem.pdf
[ "Point Processes" ]
[]
[]
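The principal minors in the abstract are exactly the subset-inclusion probabilities of a DPP: for a marginal kernel $K$, $P(S \subseteq Y) = \det(K_S)$. A small numpy illustration with a signed kernel ($K_{0,1} = -K_{1,0}$): when the pairwise minor exceeds the product of the diagonal entries, the pair attracts rather than repels. Whether such a $K$ defines a valid process depends on the paper's conditions; the matrix here is purely illustrative.

```python
import numpy as np

def inclusion_probability(K: np.ndarray, S) -> float:
    """For a DPP with marginal kernel K, P(S subset of Y) = det(K_S),
    the principal minor of K indexed by S."""
    S = list(S)
    return float(np.linalg.det(K[np.ix_(S, S)]))

# A signed kernel in the sense of the abstract: K[i,j] = +/- K[j,i]
# off the diagonal, so some pairs attract instead of repel.
K = np.array([[0.6,  0.3, 0.1],
              [-0.3, 0.5, 0.2],
              [0.1,  0.2, 0.7]])

# For the signed pair (0, 1): det = 0.6*0.5 - 0.3*(-0.3) = 0.39, which is
# larger than K[0,0]*K[1,1] = 0.30, i.e. the two items attract each other.
print(inclusion_probability(K, [0, 1]))
```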
L7VCy6nIM_
https://paperswithcode.com/paper/exploiting-contextual-information-via-dynamic
Exploiting Contextual Information via Dynamic Memory Network for Event Detection
The task of event detection involves identifying and categorizing event triggers. Contextual information has been shown to be effective for this task. However, existing methods which utilize contextual information only process the context once. We argue that the context can be better exploited by processing it multiple times, allowing the model to perform complex reasoning and to generate better context representations, thus improving the overall performance. Meanwhile, the dynamic memory network (DMN) has demonstrated promising capability in capturing contextual information and has been applied successfully to various tasks. In light of the multi-hop mechanism of the DMN for modeling context, we propose the trigger detection dynamic memory network (TD-DMN) to tackle the event detection problem. We performed five-fold cross-validation on the ACE-2005 dataset, and experimental results show that the multi-hop mechanism does improve performance and that the proposed model achieves the best $F_1$ score compared to state-of-the-art methods.
1810.03449
http://arxiv.org/abs/1810.03449v1
http://arxiv.org/pdf/1810.03449v1.pdf
[]
[ "Softmax", "GRU", "Dynamic Memory Network", "Memory Network" ]
[]
WhRTju-KbR
https://paperswithcode.com/paper/efficient-character-level-document
Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers
Document classification tasks have primarily been tackled at the word level. Recent research that works with character-level inputs shows several benefits over word-level approaches, such as the natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolutional and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large-scale document classification tasks and compare it with character-level convolution-only models. It achieves comparable performance with far fewer parameters.
1602.00367
http://arxiv.org/abs/1602.00367v1
http://arxiv.org/pdf/1602.00367v1.pdf
[ "Document Classification" ]
[ "Convolution" ]
[]
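A minimal sketch of the convolution-plus-recurrent design described above: a convolutional front end with pooling shortens the character sequence before a GRU consumes it, which is where the parameter and compute savings come from. Layer sizes here are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvGRUClassifier(nn.Module):
    """Character-level classifier combining a convolutional front-end
    with a recurrent layer, in the spirit of the abstract above."""

    def __init__(self, n_chars=70, emb=16, conv_ch=64, hidden=64, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        # Convolution + pooling shortens the sequence before the RNN,
        # which is the source of the efficiency gain.
        self.conv = nn.Sequential(
            nn.Conv1d(emb, conv_ch, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=4),
        )
        self.gru = nn.GRU(conv_ch, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, seq_len) of char ids
        h = self.embed(x).transpose(1, 2)    # (batch, emb, seq_len)
        h = self.conv(h).transpose(1, 2)     # (batch, seq_len/4, conv_ch)
        _, last = self.gru(h)                # last: (1, batch, hidden)
        return self.out(last.squeeze(0))     # (batch, n_classes)

logits = ConvGRUClassifier()(torch.randint(0, 70, (8, 1024)))
print(logits.shape)  # torch.Size([8, 4])
```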
bit7ktMzAO
https://paperswithcode.com/paper/annotating-a-corpus-of-human-interaction-with
Annotating a corpus of human interaction with prosodic profiles --- focusing on Mandarin repair/disfluency
This study describes the construction of a manually annotated speech corpus that focuses on the sound profiles of repair/disfluency in Mandarin conversational interaction. Specifically, the paper focuses on how the tag set of prosodic profiles of recycling repair, culled from both audio-taped and video-taped face-to-face Mandarin interaction, was decided. Using a methodology of both acoustic records and impressionistic judgements, 260 instances of Mandarin recycling repair are annotated with sound profiles including pitch, duration, loudness, silence, and other observable prosodic cues (i.e., sound stretch and cut-offs). The study further introduces some possible applications of the current corpus, such as using the annotated data to analyze the correlation between the sound profiles of Mandarin repair and the interactional function of the repair. The goal of constructing the corpus is to facilitate an interdisciplinary study that broadens interactional linguistic theory by paying close attention to the sound profiles that emerge from interaction.
null
https://www.aclweb.org/anthology/L12-1205/
http://www.lrec-conf.org/proceedings/lrec2012/pdf/399_Paper.pdf
[]
[]
[]
mSjTL-HeUD
https://paperswithcode.com/paper/variational-temporal-abstraction
Variational Temporal Abstraction
We introduce a variational approach to learning and inference of temporally hierarchical structure and representation for sequential data. We propose the Variational Temporal Abstraction (VTA), a hierarchical recurrent state space model that can infer the latent temporal structure and thus perform the stochastic state transition hierarchically. We also propose to apply this model to implement the jumpy-imagination ability in imagination-augmented agent-learning in order to improve the efficiency of the imagination. In experiments, we demonstrate that our proposed method can model 2D and 3D visual sequence datasets with interpretable temporal structure discovery and that its application to jumpy imagination enables more efficient agent-learning in a 3D navigation task.
1910.00775
https://arxiv.org/abs/1910.00775v1
https://arxiv.org/pdf/1910.00775v1.pdf
[]
[]
[]
PVY2N3m1I-
https://paperswithcode.com/paper/stark-structured-dictionary-learning-through
STARK: Structured Dictionary Learning Through Rank-one Tensor Recovery
In recent years, a class of dictionaries have been proposed for multidimensional (tensor) data representation that exploit the structure of tensor data by imposing a Kronecker structure on the dictionary underlying the data. In this work, a novel algorithm called "STARK" is provided to learn Kronecker structured dictionaries that can represent tensors of any order. By establishing that the Kronecker product of any number of matrices can be rearranged to form a rank-1 tensor, we show that Kronecker structure can be enforced on the dictionary by solving a rank-1 tensor recovery problem. Because rank-1 tensor recovery is a challenging nonconvex problem, we resort to solving a convex relaxation of this problem. Empirical experiments on synthetic and real data show promising results for our proposed algorithm.
1711.04887
http://arxiv.org/abs/1711.04887v1
http://arxiv.org/pdf/1711.04887v1.pdf
[ "Dictionary Learning" ]
[]
[]
0zlyUgOvye
https://paperswithcode.com/paper/parsing-speech-a-neural-approach-to
Parsing Speech: A Neural Approach to Integrating Lexical and Acoustic-Prosodic Information
In conversational speech, the acoustic signal provides cues that help listeners disambiguate difficult parses. For automatically parsing spoken utterances, we introduce a model that integrates transcribed text and acoustic-prosodic features using a convolutional neural network over energy and pitch trajectories coupled with an attention-based recurrent neural network that accepts text and prosodic features. We find that different types of acoustic-prosodic features are individually helpful, and together give statistically significant improvements in parse and disfluency detection F1 scores over a strong text-only baseline. For this study with known sentence boundaries, error analyses show that the main benefit of acoustic-prosodic features is in sentences with disfluencies, attachment decisions are most improved, and transcription errors obscure gains from prosody.
1704.07287
http://arxiv.org/abs/1704.07287v2
http://arxiv.org/pdf/1704.07287v2.pdf
[]
[]
[]
HAhZMzkfvf
https://paperswithcode.com/paper/convex-calibration-dimension-for-multiclass
Convex Calibration Dimension for Multiclass Loss Matrices
We study consistency properties of surrogate loss functions for general multiclass learning problems, defined by a general multiclass loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting. We then introduce the notion of convex calibration dimension of a multiclass loss matrix, which measures the smallest `size' of a prediction space in which it is possible to design a convex surrogate that is calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, we apply our framework to study various subset ranking losses, and use the convex calibration dimension as a tool to show both the existence and non-existence of various types of convex calibrated surrogates for these losses. Our results strengthen recent results of Duchi et al. (2010) and Calauzenes et al. (2012) on the non-existence of certain types of convex calibrated surrogates in subset ranking. We anticipate the convex calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
1408.2764
http://arxiv.org/abs/1408.2764v2
http://arxiv.org/pdf/1408.2764v2.pdf
[]
[]
[]
Hz5s3K-2eL
https://paperswithcode.com/paper/robust-speech-recognition-using-consensus
Robust speech recognition using consensus function based on multi-layer networks
Clustering ensembles combine numerous partitions of a given dataset into a single clustering solution. Clustering ensembles have emerged as a potent approach for improving both the robustness and the stability of unsupervised classification results. One of the major problems in clustering ensembles is finding the best consensus function. Deriving a final partition from different clustering results requires a skillful and robust classification algorithm. In addition, a major problem with the consensus function is its sensitivity to the quality of the data sets used. This limitation is due to the existence of noisy, silent, or redundant data. This paper proposes a novel consensus function for cluster ensembles based on a multilayer network technique and a database maintenance method. The database maintenance approach is used to handle any given noisy speech and, thus, to guarantee the quality of the databases. This can generate good results and efficient data partitions. To show its effectiveness, we support our strategy with an empirical evaluation using distorted speech from the Aurora speech databases.
1507.06023
http://arxiv.org/abs/1507.06023v1
http://arxiv.org/pdf/1507.06023v1.pdf
[ "Robust Speech Recognition", "Speech Recognition" ]
[]
[]
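For context on what a consensus function does, here is the classical co-association consensus for cluster ensembles: each ensemble member votes on whether two points belong together, and the averaged votes are re-clustered. Note that this is a standard baseline shown for illustration only; it is not the multilayer-network consensus the paper proposes, and all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def coassociation_consensus(X, n_clusters, n_members=10, seed=0):
    """Classical co-association consensus for a clustering ensemble."""
    n = X.shape[0]
    co = np.zeros((n, n))
    rng = np.random.default_rng(seed)
    for _ in range(n_members):
        # Each ensemble member is a k-means run with a random k and seed.
        k = int(rng.integers(2, 2 * n_clusters + 1))
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(1_000_000))).fit_predict(X)
        # Vote: 1 whenever two points land in the same cluster.
        co += (labels[:, None] == labels[None, :])
    co /= n_members
    # Consensus partition: cluster the co-association similarities.
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(co)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(30, 2)), rng.normal(size=(30, 2)) + 4])
print(coassociation_consensus(X, n_clusters=2))
```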
MpZ7UOIL53
https://paperswithcode.com/paper/on-the-generalization-gap-in
On the Generalization Gap in Reparameterizable Reinforcement Learning
Understanding generalization in reinforcement learning (RL) is a significant challenge, as many common assumptions of traditional supervised learning theory do not apply. We focus on the special class of reparameterizable RL problems, where the trajectory distribution can be decomposed using the reparametrization trick. For this problem class, estimating the expected return is efficient and the trajectory can be computed deterministically given peripheral random variables, which enables us to study reparametrizable RL using supervised learning and transfer learning theory. Through these relationships, we derive guarantees on the gap between the expected and empirical return for both intrinsic and external errors, based on Rademacher complexity as well as the PAC-Bayes bound. Our bound suggests the generalization capability of reparameterizable RL is related to multiple factors including "smoothness" of the environment transition, reward and agent policy function class. We also empirically verify the relationship between the generalization gap and these factors through simulations.
1905.12654
https://arxiv.org/abs/1905.12654v1
https://arxiv.org/pdf/1905.12654v1.pdf
[ "Transfer Learning" ]
[]
[]
nK92zrGxD-
https://paperswithcode.com/paper/handwritten-and-machine-printed-ocr-for-geez
Handwritten and Machine printed OCR for Geez Numbers Using Artificial Neural Network
Research has been done on Ethiopic scripts; however, previous studies excluded Geez numbers for various reasons. This paper presents offline handwritten and machine-printed Geez number recognition using a feed-forward back-propagation artificial neural network. In this study, Geez character images were collected from Google image search, and three persons were instructed to write the numbers using pencil. In total, we collected 560 characters, of which 460 were used for training and 100 for testing. We achieved an overall classification accuracy of approximately 89.88%.
1911.06845
https://arxiv.org/abs/1911.06845v1
https://arxiv.org/pdf/1911.06845v1.pdf
[ "Image Retrieval", "Optical Character Recognition" ]
[]
[]
5VnBoVLg-p
https://paperswithcode.com/paper/variable-viewpoint-representations-for-3d
Variable-Viewpoint Representations for 3D Object Recognition
For the problem of 3D object recognition, researchers using deep learning methods have developed several very different input representations, including "multi-view" snapshots taken from discrete viewpoints around an object, as well as "spherical" representations consisting of a dense map of essentially ray-traced samples of the object from all directions. These representations offer trade-offs in terms of what object information is captured and to what degree of detail it is captured, but it is not clear how to measure these information trade-offs since the two types of representations are so different. We demonstrate that both types of representations in fact exist at two extremes of a common representational continuum, essentially choosing to prioritize either the number of views of an object or the pixels (i.e., field of view) allotted per view. We identify interesting intermediate representations that lie at points in between these two extremes, and we show, through systematic empirical experiments, how accuracy varies along this continuum as a function of input information as well as the particular deep learning architecture that is used.
2002.03131
https://arxiv.org/abs/2002.03131v1
https://arxiv.org/pdf/2002.03131v1.pdf
[ "3D Object Recognition", "Object Recognition" ]
[]
[]
l8AGRiP3kF
https://paperswithcode.com/paper/pitfalls-of-learning-a-reward-function-online
Pitfalls of learning a reward function online
In some agent designs like inverse reinforcement learning an agent needs to learn its own reward function. Learning the reward function and optimising for it are typically two different processes, usually performed at different stages. We consider a continual (``one life'') learning approach where the agent both learns the reward function and optimises for it at the same time. We show that this comes with a number of pitfalls, such as deliberately manipulating the learning process in one direction, refusing to learn, ``learning'' facts already known to the agent, and making decisions that are strictly dominated (for all relevant reward functions). We formally introduce two desirable properties: the first is `unriggability', which prevents the agent from steering the learning process in the direction of a reward function that is easier to optimise. The second is `uninfluenceability', whereby the reward-function learning process operates by learning facts about the environment. We show that an uninfluenceable process is automatically unriggable, and if the set of possible environments is sufficiently rich, the converse is true too.
2004.13654
https://arxiv.org/abs/2004.13654v1
https://arxiv.org/pdf/2004.13654v1.pdf
[]
[]
[]
d6EGZOvrpj
https://paperswithcode.com/paper/dont-paraphrase-detect-rapid-and-effective
Don't paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing
A major hurdle on the road to conversational interfaces is the difficulty in collecting data that maps language utterances to logical forms. One prominent approach for data collection has been to automatically generate pseudo-language paired with logical forms, and paraphrase the pseudo-language to natural language through crowdsourcing (Wang et al., 2015). However, this data collection procedure often leads to low performance on real data, due to a mismatch between the true distribution of examples and the distribution induced by the data collection procedure. In this paper, we thoroughly analyze two sources of mismatch in this process: the mismatch in logical form distribution and the mismatch in language distribution between the true and induced distributions. We quantify the effects of these mismatches, and propose a new data collection approach that mitigates them. Assuming access to unlabeled utterances from the true distribution, we combine crowdsourcing with a paraphrase model to detect correct logical forms for the unlabeled utterances. On two datasets, our method leads to 70.6 accuracy on average on the true distribution, compared to 51.3 in paraphrasing-based data collection.
1908.09940
https://arxiv.org/abs/1908.09940v2
https://arxiv.org/pdf/1908.09940v2.pdf
[ "Semantic Parsing" ]
[]
[]
pIVkSLUero
https://paperswithcode.com/paper/automatic-estimation-of-sphere-centers-from
Automatic Estimation of Sphere Centers from Images of Calibrated Cameras
Calibration of devices with different modalities is a key problem in robotic vision. Regular spatial objects, such as planes, are frequently used for this task. This paper deals with the automatic detection of ellipses in camera images, as well as with the estimation of the 3D positions of the spheres corresponding to the detected 2D ellipses. We propose two novel methods to (i) detect an ellipse in camera images and (ii) estimate the spatial location of the corresponding sphere if its size is known. The algorithms are tested both quantitatively and qualitatively. They are applied to calibrating the sensor systems of autonomous cars equipped with digital cameras, depth sensors, and LiDAR devices.
2002.10217
https://arxiv.org/abs/2002.10217v1
https://arxiv.org/pdf/2002.10217v1.pdf
[]
[]
[]
oXkugsDr3n
https://paperswithcode.com/paper/fast-and-simple-natural-gradient-variational
Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations
Natural-gradient methods enable fast and simple algorithms for variational inference, but due to computational difficulties, their use is mostly limited to \emph{minimal} exponential-family (EF) approximations. In this paper, we extend their application to estimate \emph{structured} approximations such as mixtures of EF distributions. Such approximations can fit complex, multimodal posterior distributions and are generally more accurate than unimodal EF approximations. By using a \emph{minimal conditional-EF} representation of such approximations, we derive simple natural-gradient updates. Our empirical results demonstrate a faster convergence of our natural-gradient method compared to black-box gradient-based methods with reparameterization gradients. Our work expands the scope of natural gradients for Bayesian inference and makes them more widely applicable than before.
1906.02914
https://arxiv.org/abs/1906.02914v2
https://arxiv.org/pdf/1906.02914v2.pdf
[ "Bayesian Inference", "Variational Inference" ]
[]
[]
6ZzzG4G5QF
https://paperswithcode.com/paper/local-and-holistic-structure-preserving-image
Local- and Holistic- Structure Preserving Image Super Resolution via Deep Joint Component Learning
Recently, machine learning based single image super resolution (SR) approaches have focused on jointly learning representations for high-resolution (HR) and low-resolution (LR) image patch pairs to improve the quality of super-resolved images. However, because they treat all image pixels equally without considering salient structures, these approaches usually fail to produce visually pleasant images with sharp edges and fine details. To address this issue, in this work we present a novel SR approach that replaces the main building blocks of the classical interpolation pipeline with flexible, content-adaptive deep neural networks. In particular, two well-designed structure-aware components, capturing local and holistic image contents respectively, are naturally incorporated into the fully-convolutional representation learning to enhance image sharpness and naturalness. Extensive evaluations on several standard benchmarks (e.g., Set5, Set14 and BSD200) demonstrate that our approach achieves superior results, especially on images with salient structures, over many existing state-of-the-art SR methods under both quantitative and qualitative measures.
1607.07220
http://arxiv.org/abs/1607.07220v1
http://arxiv.org/pdf/1607.07220v1.pdf
[ "Image Super-Resolution", "Representation Learning", "Super Resolution", "Super-Resolution" ]
[]
[]
1NnwGNWeJq
https://paperswithcode.com/paper/dont-relax-early-stopping-for-convex
Don't relax: early stopping for convex regularization
We consider the problem of designing efficient regularization algorithms when regularization is encoded by a (strongly) convex functional. Unlike classical penalization methods based on a relaxation approach, we propose an iterative method where regularization is achieved via early stopping. Our results show that the proposed procedure achieves the same recovery accuracy as penalization methods, while naturally integrating computational considerations. An empirical analysis on a number of problems provides promising results with respect to the state of the art.
1707.05422
http://arxiv.org/abs/1707.05422v1
http://arxiv.org/pdf/1707.05422v1.pdf
[]
[]
[]
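The core idea above can be demonstrated with plain gradient descent (Landweber iteration) on a noisy, ill-posed least-squares problem: the iteration count plays the role of the regularization parameter, and stopping early typically gives the best recovery. A self-contained numpy sketch with illustrative problem sizes:

```python
import numpy as np

# Early stopping as regularization: run plain gradient descent on the
# least-squares objective and stop early instead of adding a penalty.
rng = np.random.default_rng(0)
n, d = 50, 200                       # ill-posed: more unknowns than equations
A = rng.normal(size=(n, d))
x_true = np.zeros(d)
x_true[:5] = 1.0
y = A @ x_true + 0.1 * rng.normal(size=n)

x = np.zeros(d)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for convergence
for t in range(1, 501):
    x -= step * A.T @ (A @ x - y)        # gradient step on ||Ax - y||^2 / 2
    if t in (5, 50, 500):
        print(t, np.linalg.norm(x - x_true))
# Typically the error dips at a moderate t and then grows again as the
# iterates start fitting the noise -- the early-stopping effect.
```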
3ncChetiDU
https://paperswithcode.com/paper/complementary-similarity-learning-using
Complementary-Similarity Learning using Quadruplet Network
We propose a novel learning framework to answer questions such as "if a user is purchasing a shirt, what other items will (s)he need with the shirt?" Our framework learns distributed representations for items from available textual data, with the learned representations placing items in a latent space that expresses functional complementarity as well as similarity. In particular, our framework places functionally similar items close together in the latent space, while also placing complementary items closer than non-complementary items, but farther away than similar items. In this study, we introduce a new dataset of similar, complementary, and negative items derived from the Amazon co-purchase dataset. For evaluation purposes, we focus our approach on the clothing and fashion verticals. To our knowledge, this is the first attempt to learn similar and complementary relationships simultaneously from textual title metadata alone. Our framework is applicable across a broad set of items in the product catalog and can generate quality complementary item recommendations at scale.
1908.09928
https://arxiv.org/abs/1908.09928v2
https://arxiv.org/pdf/1908.09928v2.pdf
[]
[]
[]
1YxzQQ1KB-
https://paperswithcode.com/paper/mastering-sketching-adversarial-augmentation
Mastering Sketching: Adversarial Augmentation for Structured Prediction
We present an integral framework for training sketch simplification networks that convert challenging rough sketches into clean line drawings. Our approach augments a simplification network with a discriminator network, training both networks jointly so that the discriminator network discerns whether a line drawing is real training data or the output of the simplification network, which in turn tries to fool it. This approach has two major advantages. First, because the discriminator network learns the structure in line drawings, it encourages the output sketches of the simplification network to be more similar in appearance to the training sketches. Second, we can also train the simplification network with additional unsupervised data, using the discriminator network as a substitute teacher. Thus, by adding only rough sketches without simplified line drawings, or only line drawings without the original rough sketches, we can improve the quality of the sketch simplification. We show how our framework can be used to train models that significantly outperform the state of the art in the sketch simplification task, despite using the same architecture for inference. We additionally present an approach to optimize for a single image, which improves accuracy at the cost of additional computation time. Finally, we show that, using the same framework, it is possible to train the network to perform the inverse problem, i.e., convert simple line sketches into pencil drawings, which is not possible using the standard mean squared error loss. We validate our framework with two user tests, where our approach is preferred to the state of the art in sketch simplification 92.3% of the time and obtains 1.2 more points on a scale of 1 to 5.
1703.08966
http://arxiv.org/abs/1703.08966v1
http://arxiv.org/pdf/1703.08966v1.pdf
[ "Structured Prediction" ]
[ "LINE" ]
[]
HzC2q1XdPg
https://paperswithcode.com/paper/image-denoising-using-new-adaptive-based
Image Denoising using New Adaptive Based Median Filters
Noise is a major issue when transferring images through any kind of electronic communication. One of the most common types of noise in electronic communication is impulse noise, which is caused by unstable voltage. In this paper, known image denoising techniques are compared and a new technique using a decision-based approach is applied to the removal of impulse noise. All these methods can preserve image details while suppressing impulsive noise. The principle of these techniques is first introduced and then analysed with various simulation results using MATLAB. Most of the previously known techniques are applicable to the denoising of images corrupted with low noise density. Here, a new decision-based technique is presented which shows better performance than those already in use. The comparisons are made based on visual appreciation and, quantitatively, by the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) of the different filtered images.
1410.2175
http://arxiv.org/abs/1410.2175v1
http://arxiv.org/pdf/1410.2175v1.pdf
[ "Denoising", "Image Denoising" ]
[]
[]
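A minimal sketch of a decision-based median filter of the kind surveyed above: only pixels flagged as impulses (extreme values 0 or 255) are replaced, using the median of uncorrupted neighbors, which is what preserves detail at low noise densities. The 3x3 window and toy image are illustrative choices, not the paper's exact filter.

```python
import numpy as np

def decision_based_median_filter(img: np.ndarray) -> np.ndarray:
    """Replace only pixels flagged as impulses (salt = 255, pepper = 0)."""
    out = img.copy()
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] in (0, 255):            # decision: is it an impulse?
                window = padded[i:i+3, j:j+3]
                good = window[(window > 0) & (window < 255)]
                # Use the median of uncorrupted neighbours if any exist,
                # else fall back to the full window median.
                out[i, j] = np.median(good) if good.size else np.median(window)
    return out

# Toy usage: corrupt a gradient image with 10% salt-and-pepper noise.
rng = np.random.default_rng(0)
img = np.tile((np.arange(64) * 3 + 32).astype(np.uint8), (64, 1))
noisy = img.copy()
mask = rng.random(img.shape) < 0.10
noisy[mask] = rng.choice([0, 255], size=mask.sum())
restored = decision_based_median_filter(noisy)
print(np.abs(restored.astype(int) - img.astype(int)).mean())
```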
15mFCBIrFk
https://paperswithcode.com/paper/asac-active-sensing-using-actor-critic-models
ASAC: Active Sensing using Actor-Critic models
Deciding what and when to observe is critical when making observations is costly. In a medical setting where observations can be made sequentially, making these observations (or not) should be an active choice. We refer to this as the active sensing problem. In this paper, we propose a novel deep learning framework, which we call ASAC (Active Sensing using Actor-Critic models) to address this problem. ASAC consists of two networks: a selector network and a predictor network. The selector network uses previously selected observations to determine what should be observed in the future. The predictor network uses the observations selected by the selector network to predict a label, providing feedback to the selector network (well-selected variables should be predictive of the label). The goal of the selector network is then to select variables that balance the cost of observing the selected variables with their predictive power; we wish to preserve the conditional label distribution. During training, we use the actor-critic models to allow the loss of the selector to be "back-propagated" through the sampling process. The selector network "acts" by selecting future observations to make. The predictor network acts as a "critic" by feeding predictive errors for the selected variables back to the selector network. In our experiments, we show that ASAC significantly outperforms state-of-the-arts in two real-world medical datasets.
1906.06796
https://arxiv.org/abs/1906.06796v1
https://arxiv.org/pdf/1906.06796v1.pdf
[]
[]
[]
On9qgWOuc8
https://paperswithcode.com/paper/gradient-descent-quantizes-relu-network
Gradient Descent Quantizes ReLU Network Features
Deep neural networks are often trained in the over-parametrized regime (i.e. with far more parameters than training examples), and understanding why the training converges to solutions that generalize remains an open problem. Several studies have highlighted the fact that the training procedure, i.e. mini-batch Stochastic Gradient Descent (SGD) leads to solutions that have specific properties in the loss landscape. However, even with plain Gradient Descent (GD) the solutions found in the over-parametrized regime are pretty good and this phenomenon is poorly understood. We propose an analysis of this behavior for feedforward networks with a ReLU activation function under the assumption of small initialization and learning rate and uncover a quantization effect: The weight vectors tend to concentrate at a small number of directions determined by the input data. As a consequence, we show that for given input data there are only finitely many, "simple" functions that can be obtained, independent of the network size. This puts these functions in analogy to linear interpolations (for given input data there are finitely many triangulations, which each determine a function by linear interpolation). We ask whether this analogy extends to the generalization properties - while the usual distribution-independent generalization property does not hold, it could be that for e.g. smooth functions with bounded second derivative an approximation property holds which could "explain" generalization of networks (of unbounded size) to unseen inputs.
1803.08367
http://arxiv.org/abs/1803.08367v1
http://arxiv.org/pdf/1803.08367v1.pdf
[ "Quantization" ]
[]
[]
DzxpxgbsVn
https://paperswithcode.com/paper/a-global-approach-for-solving-edge-matching
A Global Approach for Solving Edge-Matching Puzzles
We consider apictorial edge-matching puzzles, in which the goal is to arrange a collection of puzzle pieces with colored edges so that the colors match along the edges of adjacent pieces. We devise an algebraic representation for this problem and provide conditions under which it exactly characterizes a puzzle. Using the new representation, we recast the combinatorial, discrete problem of solving puzzles as a global, polynomial system of equations with continuous variables. We further propose new algorithms for generating approximate solutions to the continuous problem by solving a sequence of convex relaxations.
1409.5957
http://arxiv.org/abs/1409.5957v2
http://arxiv.org/pdf/1409.5957v2.pdf
[]
[]
[]
EmobbbmVMs
https://paperswithcode.com/paper/credit-card-fraud-detection-using-autoencoder
Credit Card Fraud Detection Using Autoencoder Neural Network
The imbalanced data classification problem has always been a popular topic in the field of machine learning research. To balance the samples between the majority and minority classes, oversampling algorithms are used to synthesize new minority-class samples, but they can introduce noise. Addressing this noise problem, this paper proposes a denoising autoencoder neural network (DAE) algorithm that can not only oversample minority-class samples through misclassification cost, but also denoise and classify the sampled dataset. In experiments comparing the proposed algorithm with a denoising autoencoder neural network with an oversampling process and with traditional fully connected neural networks, the results showed that the proposed algorithm improves the classification accuracy of the minority class of imbalanced datasets.
1908.11553
https://arxiv.org/abs/1908.11553v1
https://arxiv.org/pdf/1908.11553v1.pdf
[ "Denoising", "Fraud Detection" ]
[ "Denoising Autoencoder", "AutoEncoder" ]
[]
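A minimal PyTorch sketch of the denoising-autoencoder mechanism the abstract describes: corrupt the input, reconstruct the clean version, and note that the same corrupt-and-reconstruct machinery can synthesize extra minority-class samples. Dimensions, noise level, and the training loop are illustrative, and the paper's misclassification-cost weighting is omitted.

```python
import torch
import torch.nn as nn

class DAE(nn.Module):
    """Denoising autoencoder for tabular transaction features."""
    def __init__(self, n_features=30, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        # Corrupt the input, then reconstruct the clean version; passing
        # corrupted minority-class rows through a trained DAE is one way
        # to generate extra (oversampled) examples.
        noisy = x + 0.1 * torch.randn_like(x)
        return self.decoder(self.encoder(noisy))

model, loss_fn = DAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 30)              # stand-in for minority-class features
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), x)      # reconstruct the uncorrupted input
    loss.backward()
    opt.step()
print(float(loss))
```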
Vr7PZmcswH
https://paperswithcode.com/paper/maximum-entropy-kernels-for-system
Maximum Entropy Kernels for System Identification
A new nonparametric approach for system identification has been recently proposed where the impulse response is modeled as the realization of a zero-mean Gaussian process whose covariance (kernel) has to be estimated from data. In this scheme, quality of the estimates crucially depends on the parametrization of the covariance of the Gaussian process. A family of kernels that have been shown to be particularly effective in the system identification framework is the family of Diagonal/Correlated (DC) kernels. Maximum entropy properties of a related family of kernels, the Tuned/Correlated (TC) kernels, have been recently pointed out in the literature. In this paper we show that maximum entropy properties indeed extend to the whole family of DC kernels. The maximum entropy interpretation can be exploited in conjunction with results on matrix completion problems in the graphical models literature to shed light on the structure of the DC kernel. In particular, we prove that the DC kernel admits a closed-form factorization, inverse and determinant. These results can be exploited both to improve the numerical stability and to reduce the computational complexity associated with the computation of the DC estimator.
1411.5620
http://arxiv.org/abs/1411.5620v2
http://arxiv.org/pdf/1411.5620v2.pdf
[ "Matrix Completion" ]
[ "Gaussian Process" ]
[]
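For reference, the DC kernel discussed above is commonly parametrized as $k(i,j) = c\,\lambda^{(i+j)/2}\rho^{|i-j|}$, with the TC kernel recovered at $\rho=\sqrt{\lambda}$; check the paper for its exact conventions. A short numpy sketch that builds the kernel matrix and checks positive semidefiniteness:

```python
import numpy as np

def dc_kernel(n, c=1.0, lam=0.9, rho=0.5):
    """Diagonal/Correlated (DC) kernel, K[i,j] = c * lam**((i+j)/2) * rho**|i-j|
    (a standard parametrization; the paper may use different conventions)."""
    t = np.arange(1, n + 1)
    return (c * lam ** ((t[:, None] + t[None, :]) / 2.0)
              * rho ** np.abs(t[:, None] - t[None, :]))

K = dc_kernel(50)
# A valid covariance must be positive semidefinite; the paper's closed-form
# factorization, inverse, and determinant exploit this structure.
print(np.linalg.eigvalsh(K).min() >= -1e-10)
```

The PSD check succeeds here because $K = D R D$ with $D = \mathrm{diag}(\lambda^{t/2})$ and $R$ the AR(1) correlation matrix, which is PSD for $|\rho| \le 1$.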
MY9UCW_drX
https://paperswithcode.com/paper/investigating-simple-object-representations
Investigating Simple Object Representations in Model-Free Deep Reinforcement Learning
We explore the benefits of augmenting state-of-the-art model-free deep reinforcement algorithms with simple object representations. Following the Frostbite challenge posited by Lake et al. (2017), we identify object representations as a critical cognitive capacity lacking from current reinforcement learning agents. We discover that providing the Rainbow model (Hessel et al.,2018) with simple, feature-engineered object representations substantially boosts its performance on the Frostbite game from Atari 2600. We then analyze the relative contributions of the representations of different types of objects, identify environment states where these representations are most impactful, and examine how these representations aid in generalizing to novel situations.
2002.06703
https://arxiv.org/abs/2002.06703v2
https://arxiv.org/pdf/2002.06703v2.pdf
[]
[]
[]
5968hQ9Ofo
https://paperswithcode.com/paper/explaining-the-unexplained-a-class-enhanced
Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks
In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process.
1704.04133
http://arxiv.org/abs/1704.04133v2
http://arxiv.org/pdf/1704.04133v2.pdf
[ "Decision Making" ]
[]
[]
Oru3Ramz2L
https://paperswithcode.com/paper/190601727
SemEval-2019 Task 8: Fact Checking in Community Question Answering Forums
We present SemEval-2019 Task 8 on Fact Checking in Community Question Answering Forums, which features two subtasks. Subtask A is about deciding whether a question asks for factual information vs. an opinion/advice vs. just socializing. Subtask B asks to predict whether an answer to a factual question is true, false or not a proper answer. We received 17 official submissions for subtask A and 11 official submissions for Subtask B. For subtask A, all systems improved over the majority class baseline. For Subtask B, all systems were below a majority class baseline, but several systems were very close to it. The leaderboard and the data from the competition can be found at http://competitions.codalab.org/competitions/20022
1906.01727
https://arxiv.org/abs/1906.01727v1
https://arxiv.org/pdf/1906.01727v1.pdf
[ "Community Question Answering", "Question Answering" ]
[]
[]
C9nnknXump
https://paperswithcode.com/paper/robust-marine-buoy-placement-for-ship
Robust Marine Buoy Placement for Ship Detection Using Dropout K-Means
Marine buoys aid in the battle against Illegal, Unreported and Unregulated (IUU) fishing by detecting fishing vessels in their vicinity. Marine buoys, however, may be disrupted by natural causes and buoy vandalism. In this paper, we formulate marine buoy placement as a clustering problem, and propose dropout k-means and dropout k-median to improve placement robustness to buoy disruption. We simulated the passage of ships in the Gabonese waters near West Africa using historical Automatic Identification System (AIS) data, then compared the ship detection probability of dropout k-means to classic k-means and dropout k-median to classic k-median. With 5 buoys, the buoy arrangements computed by classic k-means, dropout k-means, classic k-median and dropout k-median have ship detection probabilities of 38%, 45%, 48% and 52%, respectively.
2001.00564
https://arxiv.org/abs/2001.00564v2
https://arxiv.org/pdf/2001.00564v2.pdf
[]
[ "Dropout" ]
[]
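One plausible reading of "dropout k-means" is Lloyd's algorithm with centroids randomly disrupted during each assignment step, so the final placement hedges against losing a buoy. The sketch below follows that reading; it is not necessarily the paper's exact procedure, and the data are stand-ins for ship positions.

```python
import numpy as np

def dropout_kmeans(X, k=5, p_drop=0.2, n_iter=50, seed=0):
    """Hedged sketch: Lloyd's algorithm where a random subset of centroids
    is 'disrupted' (dropped) at each assignment step, so points are assigned
    among the survivors and the placement becomes robust to buoy loss."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        alive = rng.random(k) > p_drop
        if not alive.any():
            alive[rng.integers(k)] = True
        # Assign each position to the nearest surviving centroid.
        d = np.linalg.norm(X[:, None, :] - centers[alive][None, :, :], axis=2)
        labels = np.flatnonzero(alive)[d.argmin(axis=1)]
        for j in range(k):                      # standard centroid update
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

X = np.random.default_rng(1).random((500, 2))   # stand-in for ship positions
print(dropout_kmeans(X))
```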
oxEewCMqXG
https://paperswithcode.com/paper/object-detection-of-satellite-images-using
Object Detection of Satellite Images Using Multi-Channel Higher-order Local Autocorrelation
Earth observation satellites have been monitoring the earth's surface for a long time, and the images taken by the satellites contain large amounts of valuable data. However, it is extremely hard to manually analyze such huge volumes of data. Thus, a method of automatic object detection is needed for satellite images to facilitate efficient data analyses. This paper describes a new image feature, extended from higher-order local autocorrelation, for the object detection of multispectral satellite images. The feature has been extended to extract spectral inter-relationships in addition to spatial relationships to fully exploit multispectral information. The results of experiments with object detection tasks conducted to evaluate the effectiveness of the proposed feature extension indicate that the feature achieves higher performance compared to existing methods.
1707.09099
http://arxiv.org/abs/1707.09099v1
http://arxiv.org/pdf/1707.09099v1.pdf
[ "Object Detection" ]
[]
[]
VaC2iAlrzx
https://paperswithcode.com/paper/automatic-identification-of-traditional
Automatic Identification of Traditional Colombian Music Genres based on Audio Content Analysis and Machine Learning Technique
Colombia has a diversity of genres in traditional music, which express the richness of Colombian culture according to the region. This musical diversity is the result of a mixture of African, native Indigenous, and European influences. Organizing large collections of songs is a time-consuming task that requires a human to listen to fragments of audio to identify genre, singer, year, instruments, and other relevant characteristics needed to index the song dataset. This paper presents a method to automatically identify the genre of a Colombian song by means of its audio content. The method extracts audio features that are used to train a machine learning model that learns to classify the genre. The method was evaluated on a dataset of 180 musical pieces belonging to six folkloric Colombian music genres: Bambuco, Carranga, Cumbia, Joropo, Pasillo, and Vallenato. Results show that it is possible to automatically identify the music genre in spite of the complexity of Colombian rhythms, reaching an average accuracy of 69\%.
1911.03372
https://arxiv.org/abs/1911.03372v1
https://arxiv.org/pdf/1911.03372v1.pdf
[]
[]
[]
TvDHhjhZ5U
https://paperswithcode.com/paper/two-step-surface-damage-detection-scheme
Surface Damage Detection Scheme using Convolutional Neural Network and Artificial Neural Network
Surface damage on concrete is important as the damage can affect the structural integrity of the structure. This paper proposes a two-step surface damage detection scheme using a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN). The CNN classifies given input images into two categories: positive and negative. The positive category is where the surface damage is present within the image; otherwise the image is classified as negative. This is an image-based classification. The ANN accepts image inputs that have been classified as positive by the CNN, which reduces the number of images that are further processed by the ANN. The ANN performs feature-based classification, in which the features are extracted from the detected edges within the image. The edges are detected using Canny edge detection. A total of 19 features are extracted from the detected edges. These features are inputs into the ANN. The purpose of the ANN is to highlight only the positive damaged edges within the image. The CNN achieves an accuracy of 80.7% for image classification and the ANN achieves an accuracy of 98.1% for surface detection. The decreased accuracy in the CNN is due to false positive detections; however, false positives are tolerated whereas false negatives are not. The false negative detection rate for both the CNN and the ANN in the two-step scheme is 0%.
2003.10760
https://arxiv.org/abs/2003.10760v2
https://arxiv.org/pdf/2003.10760v2.pdf
[ "Edge Detection", "Image Classification" ]
[]
[]
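The feature-based second stage can be illustrated as follows. This sketch runs Canny edge detection and derives a few per-image edge statistics; the paper extracts 19 specific features that are not enumerated here, so these particular features are stand-ins.

```python
import cv2
import numpy as np

def edge_features(gray):
    """Canny edge detection followed by illustrative edge statistics.
    gray: uint8 grayscale image. Returns a small feature vector."""
    edges = cv2.Canny(gray, 100, 200)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return np.zeros(5)
    density = edges.mean() / 255.0                # fraction of edge pixels
    spread_x, spread_y = xs.std(), ys.std()       # spatial dispersion of edges
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    orient = np.arctan2(gy[ys, xs], gx[ys, xs])   # edge-pixel orientations
    return np.array([density, spread_x, spread_y,
                     np.cos(2 * orient).mean(),   # dominant-orientation cues
                     np.sin(2 * orient).mean()])
```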
MSoPJG0fax
https://paperswithcode.com/paper/smart-deep-copy-paste
Smart, Deep Copy-Paste
In this work, we propose a novel system for smart copy-paste, enabling the synthesis of high-quality results given a masked source image content and a target image context as input. Our system naturally resolves both shading and geometric inconsistencies between source and target image, resulting in a merged result image that features the content from the pasted source image, seamlessly pasted into the target context. Our framework is based on a novel training image transformation procedure that allows a deep convolutional neural network to be trained end-to-end to automatically learn a representation that is suitable for copy-pasting. Our training procedure works with any image dataset without additional information such as labels, and we demonstrate the effectiveness of our system on two popular datasets, high-resolution face images and the more complex Cityscapes dataset. Our technique outperforms the current state of the art on face images, and we show promising results on the Cityscapes dataset, demonstrating that our system generalizes to much higher resolution than the training data.
1903.06763
http://arxiv.org/abs/1903.06763v1
http://arxiv.org/pdf/1903.06763v1.pdf
[]
[]
[]
FLoWp5723W
https://paperswithcode.com/paper/hierarchical-deep-double-q-routing
Hierarchical Deep Double Q-Routing
This paper explores a deep reinforcement learning approach applied to the packet routing problem with high-dimensional constraints instigated by dynamic and autonomous communication networks. Our approach is motivated by the fact that centralized path calculation approaches are often not scalable, whereas distributed approaches with locally acting nodes are not fully aware of the end-to-end performance. We instead hierarchically distribute the path calculation over designated nodes in the network while taking into account the end-to-end performance. Specifically, we develop a hierarchical cluster-oriented adaptive per-flow path calculation mechanism by leveraging the Deep Double Q-network (DDQN) algorithm, where the end-to-end paths are calculated by the source nodes with the assistance of cluster (group) leaders at different hierarchical levels. In our approach, a deferred composite reward is designed to capture the end-to-end performance through a feedback signal from the source nodes to the group leaders and the local network performance through local resource assessments by the group leaders. This approach scales to large networks, adapts to dynamic demand, utilizes network resources efficiently and can be applied to segment routing.
1910.04041
https://arxiv.org/abs/1910.04041v3
https://arxiv.org/pdf/1910.04041v3.pdf
[]
[]
[]
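At the core of the approach is the Double DQN target, which decouples action selection from action evaluation to curb overestimation. A minimal sketch, with plain arrays standing in for the online and target networks' Q-value outputs; the hierarchical clustering and the deferred composite reward of the paper are omitted.

```python
import numpy as np

def ddqn_target(q_online, q_target, reward, next_state, gamma=0.99, done=False):
    """Double-DQN target: the online network selects the greedy action
    and the target network evaluates it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online[next_state]))          # selection: online net
    return reward + gamma * q_target[next_state, a_star]   # evaluation: target net

# e.g. with 3 states and 2 actions:
# q_online = np.random.rand(3, 2); q_target = np.random.rand(3, 2)
# y = ddqn_target(q_online, q_target, reward=1.0, next_state=2)
```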
u77Xhwg24O
https://paperswithcode.com/paper/deep-learning-in-medical-image-registration-a
Deep Learning in Medical Image Registration: A Survey
The establishment of image correspondence through robust image registration is critical to many clinical tasks such as image fusion, organ atlas creation, and tumor growth monitoring, and is a very challenging problem. Since the beginning of the recent deep learning renaissance, the medical imaging research community has developed deep learning based approaches and achieved the state-of-the-art in many applications, including image registration. The rapid adoption of deep learning for image registration applications over the past few years necessitates a comprehensive summary and outlook, which is the main scope of this survey. This requires placing a focus on the different research areas as well as highlighting challenges that practitioners face. This survey, therefore, outlines the evolution of deep learning based medical image registration in the context of both research challenges and relevant innovations in the past few years. Further, this survey highlights future research directions to show how this field may be possibly moved forward to the next level.
1903.02026
https://arxiv.org/abs/1903.02026v2
https://arxiv.org/pdf/1903.02026v2.pdf
[ "Image Registration", "Medical Image Registration" ]
[]
[]
-VfpZerpuX
https://paperswithcode.com/paper/real-time-convolutional-networks-for-sonar
Real-time convolutional networks for sonar image classification in low-power embedded systems
Deep Neural Networks have impressive classification performance, but this comes at the expense of significant computational resources at inference time. Autonomous Underwater Vehicles use low-power embedded systems for sonar image perception, and cannot execute large neural networks in real-time. We propose using max-pooling aggressively, and we demonstrate it with a Fire-based module and a new Tiny module that includes max-pooling in each module. By stacking them we build networks that achieve the same accuracy as bigger ones, while reducing the number of parameters and considerably increasing computational performance. Our networks can classify a 96x96 sonar image with 98.8-99.7% accuracy in only 41 to 61 milliseconds on a Raspberry Pi 2, which corresponds to speedups of 28.6x-19.7x.
1709.02153
http://arxiv.org/abs/1709.02153v1
http://arxiv.org/pdf/1709.02153v1.pdf
[ "Image Classification" ]
[]
[]
ZL1hvInatX
https://paperswithcode.com/paper/regional-based-query-in-graph-active-learning
Regional based query in graph active learning
Graph convolution networks (GCN) have emerged as the leading method to classify node classes in networks, and have reached the highest accuracy in multiple node classification tasks. In the absence of available tagged samples, active learning methods have been developed to obtain the highest accuracy using the minimal number of queries to an oracle. The current best active learning methods use the sample class uncertainty as the selection criterion. However, in graph-based classification, the class of each node is often related to the class of its neighbors. As such, the uncertainty in the class of a node's neighbors may be a more appropriate selection criterion. We here propose two such criteria, one extending the classical uncertainty measure, and the other extending the PageRank algorithm. We show that the latter is optimal when the fraction of tagged nodes is low, and when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance metric can be defined between the input samples. All the code used can be accessed at: https://github.com/louzounlab/graph-al All the datasets used can be accessed at: https://github.com/louzounlab/DataSets
1906.08541
https://arxiv.org/abs/1906.08541v1
https://arxiv.org/pdf/1906.08541v1.pdf
[ "Active Learning", "Node Classification" ]
[ "Convolution" ]
[]
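The PageRank-flavoured criterion can be sketched by diffusing per-node predictive entropy over the graph, so that a node's score reflects the uncertainty of its whole region rather than of the node alone. The damping factor and iteration count below are illustrative assumptions.

```python
import numpy as np

def regional_uncertainty(adj, class_probs, alpha=0.85, iters=50):
    """Propagate per-node entropy with a PageRank-style diffusion.
    adj: (n, n) adjacency matrix; class_probs: (n, c) predicted posteriors."""
    eps = 1e-12
    entropy = -(class_probs * np.log(class_probs + eps)).sum(axis=1)
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, eps)          # row-stochastic transition matrix
    score = entropy.copy()
    for _ in range(iters):
        score = (1 - alpha) * entropy + alpha * P @ score
    return score

# query the node whose region is most uncertain:
# next_query = np.argmax(regional_uncertainty(adj, probs))
```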
bzawbztQ_v
https://paperswithcode.com/paper/simplifying-neural-networks-with-the-marabou
Simplifying Neural Networks using Formal Verification
Deep neural network (DNN) verification is an emerging field, with diverse verification engines quickly becoming available. Demonstrating the effectiveness of these engines on real-world DNNs is an important step towards their wider adoption. We present a tool that can leverage existing verification engines in performing a novel application: neural network simplification, through the reduction of the size of a DNN without harming its accuracy. We report on the work-flow of the simplification process, and demonstrate its potential significance and applicability on a family of real-world DNNs for aircraft collision avoidance, whose sizes we were able to reduce by as much as 10%.
1910.12396
https://arxiv.org/abs/1910.12396v2
https://arxiv.org/pdf/1910.12396v2.pdf
[]
[]
[]
GB_KMv8wL1
https://paperswithcode.com/paper/model-interpretation-a-unified-derivative
Model Interpretation: A Unified Derivative-based Framework for Nonparametric Regression and Supervised Machine Learning
Interpreting a nonparametric regression model with many predictors is known to be a challenging problem. There has been renewed interest in this topic due to the extensive use of machine learning algorithms and the difficulty in understanding and explaining their input-output relationships. This paper develops a unified framework using a derivative-based approach for existing tools in the literature, including the partial-dependence plots, marginal plots and accumulated effects plots. It proposes a new interpretation technique called the accumulated total derivative effects plot and demonstrates how its components can be used to develop extensive insights in complex regression models with correlated predictors. The techniques are illustrated through simulation results.
1808.07216
http://arxiv.org/abs/1808.07216v2
http://arxiv.org/pdf/1808.07216v2.pdf
[]
[]
[]
6Glz6YTBRc
https://paperswithcode.com/paper/robustness-and-risk-sensitivity-in-markov
Robustness and risk-sensitivity in Markov decision processes
We uncover relations between robust MDPs and risk-sensitive MDPs. The objective of a robust MDP is to minimize a function, such as the expectation of cumulative cost, for the worst case when the parameters have uncertainties. The objective of a risk-sensitive MDP is to minimize a risk measure of the cumulative cost when the parameters are known. We show that a risk-sensitive MDP of minimizing the expected exponential utility is equivalent to a robust MDP of minimizing the worst-case expectation with a penalty for the deviation of the uncertain parameters from their nominal values, which is measured with the Kullback-Leibler divergence. We also show that a risk-sensitive MDP of minimizing an iterated risk measure that is composed of certain coherent risk measures is equivalent to a robust MDP of minimizing the worst-case expectation when the possible deviations of uncertain parameters from their nominal values are characterized with a concave function.
null
http://papers.nips.cc/paper/4693-robustness-and-risk-sensitivity-in-markov-decision-processes
http://papers.nips.cc/paper/4693-robustness-and-risk-sensitivity-in-markov-decision-processes.pdf
[]
[]
[]
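The first equivalence can be checked numerically on a single-stage example: the exponential-utility objective equals the worst-case expectation with a KL penalty, and the worst case is attained by exponentially tilting the nominal distribution. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(6))        # nominal distribution over outcomes
c = rng.uniform(0, 5, 6)             # cumulative cost of each outcome
theta = 0.7                          # risk-sensitivity parameter

# risk-sensitive side: exponential-utility certainty equivalent
risk_sensitive = np.log(p @ np.exp(theta * c)) / theta

# robust side: worst case attained by the tilted distribution q* ~ p * e^{theta c}
q = p * np.exp(theta * c)
q /= q.sum()
kl = (q * np.log(q / p)).sum()       # KL(q || p)
robust = q @ c - kl / theta

assert np.isclose(risk_sensitive, robust)   # the two objectives coincide
```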
uZWhZBBl73
https://paperswithcode.com/paper/incorporating-word-and-subword-units-in
Incorporating Word and Subword Units in Unsupervised Machine Translation Using Language Model Rescoring
This paper describes CAiRE's submission to the unsupervised machine translation track of the WMT'19 news shared task from German to Czech. We leverage a phrase-based statistical machine translation (PBSMT) model and a pre-trained language model to combine word-level neural machine translation (NMT) and subword-level NMT models without using any parallel data. We propose to solve the morphological richness problem of languages by training byte-pair encoding (BPE) embeddings for German and Czech separately, and they are aligned using MUSE (Conneau et al., 2018). To ensure the fluency and consistency of translations, a rescoring mechanism is proposed that reuses the pre-trained language model to select the translation candidates generated through beam search. Moreover, a series of pre-processing and post-processing approaches are applied to improve the quality of final translations.
1908.05925
https://arxiv.org/abs/1908.05925v2
https://arxiv.org/pdf/1908.05925v2.pdf
[ "Language Modelling", "Machine Translation", "Unsupervised Machine Translation" ]
[]
[]
1WkxROkVJQ
https://paperswithcode.com/paper/191013406
Generalization of Reinforcement Learners with Working and Episodic Memory
Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others and how well they generalize. The field also has yet to see a prevalent consistent and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization. To that end, we first construct a diverse set of memory tasks that allow us to evaluate test-time generalization across multiple dimensions. Second, we develop and perform multiple ablations on an agent architecture that combines multiple memory systems, observe its baseline models, and investigate its performance against the task suite.
1910.13406
https://arxiv.org/abs/1910.13406v2
https://arxiv.org/pdf/1910.13406v2.pdf
[]
[]
[]
0fN9HbmzOH
https://paperswithcode.com/paper/on-policy-gradients
On Policy Gradients
The goal of policy gradient approaches is to find a policy in a given class of policies which maximizes the expected return. Given a differentiable model of the policy, we want to apply a gradient-ascent technique to reach a local optimum. We mainly use gradient ascent, because it is theoretically well researched. The main issue is that the policy gradient with respect to the expected return is not available, thus we need to estimate it. As policy gradient algorithms also tend to require on-policy data for the gradient estimate, their biggest weakness is sample efficiency. For this reason, most research is focused on finding algorithms with improved sample efficiency. This paper provides a formal introduction to policy gradient that shows the development of policy gradient approaches, and should enable the reader to follow current research on the topic.
1911.04817
https://arxiv.org/abs/1911.04817v1
https://arxiv.org/pdf/1911.04817v1.pdf
[]
[]
[]
tPd-ikjA2R
https://paperswithcode.com/paper/fiduciary-bandits
Fiduciary Bandits
Recommendation systems often face exploration-exploitation tradeoffs: the system can only learn about the desirability of new options by recommending them to some user. Such systems can thus be modeled as multi-armed bandit settings; however, users are self-interested and cannot be made to follow recommendations. We ask whether exploration can nevertheless be performed in a way that scrupulously respects agents' interests---i.e., by a system that acts as a fiduciary. More formally, we introduce a model in which a recommendation system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone. Our main contribution is a positive result: an asymptotically optimal, incentive compatible, and ex-ante individually rational recommendation algorithm.
1905.07043
https://arxiv.org/abs/1905.07043v3
https://arxiv.org/pdf/1905.07043v3.pdf
[ "Recommendation Systems" ]
[]
[]
pDRho8bg0L
https://paperswithcode.com/paper/matching-of-images-with-rotation
Matching of Images with Rotation Transformation Based on the Virtual Electromagnetic Interaction
A novel approach to image matching under rotation transformations is presented and studied. The approach is inspired by the electromagnetic interaction force between physical currents. A virtual current in images is proposed, based on the significant edge lines extracted as the fundamental structural feature of images. The virtual electromagnetic force and the corresponding moment between two images are studied after the extraction of the virtual currents in the images. Image matching under rotation transformations is then implemented by exploiting the interaction between the virtual currents in the two images to be matched. The experimental results prove the effectiveness of the novel idea, which indicates the promising application of the proposed method in image registration.
1610.02762
http://arxiv.org/abs/1610.02762v1
http://arxiv.org/pdf/1610.02762v1.pdf
[ "Image Registration" ]
[]
[]
woB0NqT019
https://paperswithcode.com/paper/general-e2-equivariant-steerable-cnns
General E(2)-Equivariant Steerable CNNs
The big empirical success of group equivariant networks has led in recent years to the sprouting of a great variety of equivariant network architectures. A particular focus has thereby been on rotation and reflection equivariant CNNs for planar images. Here we give a general description of E(2)-equivariant convolutions in the framework of Steerable CNNs. The theory of Steerable CNNs thereby yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces. We show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations. A general solution of the kernel space constraint is given for arbitrary representations of the Euclidean group E(2) and its subgroups. We implement a wide range of previously proposed and entirely new equivariant network architectures and extensively compare their performances. E(2)-steerable convolutions are further shown to yield remarkable gains on CIFAR-10, CIFAR-100 and STL-10 when used as a drop-in replacement for non-equivariant convolutions.
null
http://papers.nips.cc/paper/9580-general-e2-equivariant-steerable-cnns
http://papers.nips.cc/paper/9580-general-e2-equivariant-steerable-cnns.pdf
[]
[ "Convolution" ]
[]
I3WhMsAQlc
https://paperswithcode.com/paper/cascaded-generative-and-discriminative
Cascaded Generative and Discriminative Learning for Microcalcification Detection in Breast Mammograms
Accurate microcalcification (mC) detection is of great importance due to its high proportion in early breast cancers. Most of the previous mC detection methods belong to discriminative models, where classifiers are exploited to distinguish mCs from other backgrounds. However, it is still challenging for these methods to tell mCs apart from the large amounts of normal tissue because mCs are too tiny (at most 14 pixels). Generative methods can precisely model the normal tissues and regard the abnormal ones as outliers, while they fail to further distinguish the mCs from other anomalies, i.e. vessel calcifications. In this paper, we propose a hybrid approach that takes advantage of both generative and discriminative models. Firstly, a generative model named Anomaly Separation Network (ASN) is used to generate candidate mCs. ASN contains two major components. A deep convolutional encoder-decoder network is built to learn the image reconstruction mapping and a t-test loss function is designed to separate the distributions of the reconstruction residuals of mCs from normal tissues. Secondly, a discriminative model is cascaded to tell the mCs from the false positives. Finally, to verify the effectiveness of our method, we conduct experiments on both public and in-house datasets, which demonstrates that our approach outperforms previous state-of-the-art methods.
null
http://openaccess.thecvf.com/content_CVPR_2019/html/Zhang_Cascaded_Generative_and_Discriminative_Learning_for_Microcalcification_Detection_in_Breast_CVPR_2019_paper.html
http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Cascaded_Generative_and_Discriminative_Learning_for_Microcalcification_Detection_in_Breast_CVPR_2019_paper.pdf
[ "Image Reconstruction" ]
[]
[]
N7pZkdGwcg
https://paperswithcode.com/paper/afra-argumentation-framework-with-recursive
AFRA: Argumentation framework with recursive attacks
The issue of representing attacks to attacks in argumentation is receiving an increasing attention as a useful conceptual modelling tool in several contexts. In this paper we present AFRA, a formalism encompassing unlimited recursive attacks within argumentation frameworks. AFRA satisfies the basic requirements of definition simplicity and rigorous compatibility with Dung's theory of argumentation. This paper provides a complete development of the AFRA formalism complemented by illustrative examples and a detailed comparison with other recursive attack formalizations.
1810.04886
http://arxiv.org/abs/1810.04886v1
http://arxiv.org/pdf/1810.04886v1.pdf
[]
[]
[]
PSkzB2Q9-1
https://paperswithcode.com/paper/individual-claims-forecasting-with-bayesian
Individual Claims Forecasting with Bayesian Mixture Density Networks
We introduce an individual claims forecasting framework utilizing Bayesian mixture density networks that can be used for claims analytics tasks such as case reserving and triaging. The proposed approach enables incorporating claims information from both structured and unstructured data sources, producing multi-period cash flow forecasts, and generating different scenarios of future payment patterns. We implement and evaluate the modeling framework using publicly available data.
2003.02453
https://arxiv.org/abs/2003.02453v1
https://arxiv.org/pdf/2003.02453v1.pdf
[]
[]
[]
uEcOhtoiGa
https://paperswithcode.com/paper/jointly-modeling-topics-and-intents-with
Jointly Modeling Topics and Intents with Global Order Structure
Modeling document structure is of great importance for discourse analysis and related applications. The goal of this research is to capture the document intent structure by modeling documents as a mixture of topic words and rhetorical words. While the topics are relatively unchanged throughout one document, the rhetorical functions of sentences usually change following certain orders in discourse. We propose GMM-LDA, a topic-modeling-based Bayesian unsupervised model, to analyze the document intent structure together with order information. Our model is flexible in that it can incorporate annotations and perform supervised learning. Additionally, entropic regularization can be introduced to model the significant divergence between topics and intents. We perform experiments in both unsupervised and supervised settings; results show the superiority of our model over several state-of-the-art baselines.
1512.02009
http://arxiv.org/abs/1512.02009v1
http://arxiv.org/pdf/1512.02009v1.pdf
[]
[]
[]
i_EOHUVl2t
https://paperswithcode.com/paper/managing-large-scale-scientific-hypotheses-as
Managing large-scale scientific hypotheses as uncertain and probabilistic data
In view of the paradigm shift that makes science ever more data-driven, in this thesis we propose a synthesis method for encoding and managing large-scale deterministic scientific hypotheses as uncertain and probabilistic data. In the form of mathematical equations, hypotheses symmetrically relate aspects of the studied phenomena. For computing predictions, however, deterministic hypotheses can be abstracted as functions. We build upon Simon's notion of structural equations in order to efficiently extract the (so-called) causal ordering between variables, implicit in a hypothesis structure (set of mathematical equations). We show how to process the hypothesis predictive structure effectively through original algorithms for encoding it into a set of functional dependencies (fd's) and then performing causal reasoning in terms of acyclic pseudo-transitive reasoning over fd's. Such reasoning reveals important causal dependencies implicit in the hypothesis predictive data and guides our synthesis of a probabilistic database. As in the field of graphical models in AI, such a probabilistic database should be normalized so that the uncertainty arising from competing hypotheses is decomposed into factors and propagated properly onto predictive data by recovering its joint probability distribution through a lossless join. This is motivated as a design-theoretic principle for data-driven hypothesis management and predictive analytics. The method is applicable to both quantitative and qualitative deterministic hypotheses and demonstrated in realistic use cases from computational science.
1501.05290
http://arxiv.org/abs/1501.05290v2
http://arxiv.org/pdf/1501.05290v2.pdf
[]
[]
[]
-OhvBlr5TC
https://paperswithcode.com/paper/a-neutrosophic-recommender-system-for-medical
A Neutrosophic Recommender System for Medical Diagnosis Based on Algebraic Neutrosophic Measures
Neutrosophic sets have the ability to handle uncertain, incomplete, inconsistent, indeterminate information in a more accurate way. In this paper, we propose a neutrosophic recommender system to predict diseases based on neutrosophic sets, which includes a single-criterion neutrosophic recommender system (SC-NRS) and a multi-criterion neutrosophic recommender system (MC-NRS). Further, we investigate some algebraic operations of the neutrosophic recommender system such as union, complement, intersection, probabilistic sum, bold sum, bold intersection, bounded difference, symmetric difference, convex linear sum of min and max operators, Cartesian product, associativity, commutativity and distributivity. Based on these operations, we study algebraic structures such as lattices, Kleene algebras, de Morgan algebras, Brouwerian algebras, BCK algebras, Stone algebras and MV algebras. In addition, we introduce several types of similarity measures based on these algebraic operations and study some of their theoretic properties. Moreover, we derive a prediction formula using the proposed algebraic similarity measure. We also propose a new algorithm for medical diagnosis based on the neutrosophic recommender system. Finally, to check the validity of the proposed methodology, we conduct experiments on the datasets Heart, RHC, Breast cancer, Diabetes and DMD. We present the MSE and computational time, comparing the proposed algorithm with the relevant ones such as ICSM, DSM, CARE, CFMD, as well as other variants, namely Variant 67, Variant 69, and Variant 71, in both tabular and graphical form to analyze efficiency and accuracy. Finally, we analyze the strength of all 8 algorithms with the ANOVA statistical tool.
1602.08447
http://arxiv.org/abs/1602.08447v1
http://arxiv.org/pdf/1602.08447v1.pdf
[ "Medical Diagnosis", "Recommendation Systems" ]
[]
[]
5b35CggATP
https://paperswithcode.com/paper/neural-program-synthesis-with-priority-queue
Neural Program Synthesis with Priority Queue Training
Models and examples built with TensorFlow
1801.03526
http://arxiv.org/abs/1801.03526v2
http://arxiv.org/pdf/1801.03526v2.pdf
[ "Program Synthesis" ]
[]
[]
oxUMZGmYpy
https://paperswithcode.com/paper/experimental-support-for-a-categorical
Experimental Support for a Categorical Compositional Distributional Model of Meaning
Modelling compositional meaning for sentences using empirical distributional methods has been a challenge for computational linguists. We implement the abstract categorical model of Coecke et al. (arXiv:1003.4394v1 [cs.CL]) using data from the BNC and evaluate it. The implementation is based on unsupervised learning of matrices for relational words and applying them to the vectors of their arguments. The evaluation is based on the word disambiguation task developed by Mitchell and Lapata (2008) for intransitive sentences, and on a similar new experiment designed for transitive sentences. Our model matches the results of its competitors in the first experiment, and betters them in the second. The general improvement in results with increase in syntactic complexity showcases the compositional power of our model.
1106.4058
https://arxiv.org/abs/1106.4058v1
https://arxiv.org/pdf/1106.4058v1.pdf
[]
[]
[]
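The implementation described above admits a compact sketch: a relational word (here a transitive verb) is learned as a sum of outer products of the argument vectors it co-occurs with, and a transitive sentence is composed by applying the verb matrix to its subject and object. Pointwise multiplication with the subject-object outer product follows one plausible composition variant; treat the details as assumptions rather than the paper's exact recipe.

```python
import numpy as np

def learn_verb_matrix(subject_vecs, object_vecs):
    """Relational-word learning: sum of outer products of the
    subject/object vectors observed with the verb in the corpus."""
    return sum(np.outer(s, o) for s, o in zip(subject_vecs, object_vecs))

def transitive_sentence(verb_matrix, subj, obj):
    """Compose 'subj verb obj' by pointwise-multiplying the verb matrix
    with the outer product of the argument vectors."""
    return verb_matrix * np.outer(subj, obj)   # matrix-valued sentence meaning

def similarity(m1, m2):
    """Cosine similarity between flattened sentence representations,
    as used in disambiguation-style evaluations."""
    a, b = m1.ravel(), m2.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
```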
S9obLUIqmC
https://paperswithcode.com/paper/adversarial-nli-a-new-benchmark-for-natural
Adversarial NLI: A New Benchmark for Natural Language Understanding
We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-the-art models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.
1910.14599
https://arxiv.org/abs/1910.14599v2
https://arxiv.org/pdf/1910.14599v2.pdf
[ "Natural Language Understanding" ]
[]
[]
DIQ-ED2T3A
https://paperswithcode.com/paper/sentiment-analysis-using-imperfect-views-from
Sentiment Analysis using Imperfect Views from Spoken Language and Acoustic Modalities
Multimodal sentiment classification in practical applications may have to rely on erroneous and imperfect views, namely (a) language transcription from a speech recognizer and (b) under-performing acoustic views. This work focuses on improving the representations of these views by performing a deep canonical correlation analysis with the representations of the better performing manual transcription view. Enhanced representations of the imperfect views can be obtained even in absence of the perfect views and give an improved performance during test conditions. Evaluations on the CMU-MOSI and CMU-MOSEI datasets demonstrate the effectiveness of the proposed approach.
null
https://www.aclweb.org/anthology/W18-3305/
https://www.aclweb.org/anthology/W18-3305
[ "Sentiment Analysis", "Speech Recognition" ]
[]
[]
eMHLi48Put
https://paperswithcode.com/paper/a-gaussian-mixture-mrf-for-model-based
A Gaussian Mixture MRF for Model-Based Iterative Reconstruction with Applications to Low-Dose X-ray CT
Markov random fields (MRFs) have been widely used as prior models in various inverse problems such as tomographic reconstruction. While MRFs provide a simple and often effective way to model the spatial dependencies in images, they suffer from the fact that parameter estimation is difficult. In practice, this means that MRFs typically have very simple structure that cannot completely capture the subtle characteristics of complex images. In this paper, we present a novel Gaussian mixture Markov random field model (GM-MRF) that can be used as a very expressive prior model for inverse problems such as denoising and reconstruction. The GM-MRF forms a global image model by merging together individual Gaussian-mixture models (GMMs) for image patches. In addition, we present a novel analytical framework for computing MAP estimates using the GM-MRF prior model through the construction of surrogate functions that result in a sequence of quadratic optimizations. We also introduce a simple but effective method to adjust the GM-MRF so as to control the sharpness in low- and high-contrast regions of the reconstruction separately. We demonstrate the value of the model with experiments including image denoising and low-dose CT reconstruction.
1605.04006
http://arxiv.org/abs/1605.04006v2
http://arxiv.org/pdf/1605.04006v2.pdf
[ "Denoising", "Image Denoising" ]
[]
[]
HjCvWorrjU
https://paperswithcode.com/paper/graph-based-offline-signature-verification
Graph-Based Offline Signature Verification
Graphs provide a powerful representation formalism that offers great promise to benefit tasks like handwritten signature verification. While most state-of-the-art approaches to signature verification rely on fixed-size representations, graphs are flexible in size and allow modeling local features as well as the global structure of the handwriting. In this article, we present two recent graph-based approaches to offline signature verification: keypoint graphs with approximated graph edit distance and inkball models. We provide a comprehensive description of the methods, propose improvements both in terms of computational time and accuracy, and report experimental results for four benchmark datasets. The proposed methods achieve top results for several benchmarks, highlighting the potential of graph-based signature verification.
1906.10401
https://arxiv.org/abs/1906.10401v1
https://arxiv.org/pdf/1906.10401v1.pdf
[]
[]
[]
ho9v_ZtehC
https://paperswithcode.com/paper/inverse-procedural-modeling-of-knitwear
Inverse Procedural Modeling of Knitwear
The analysis and modeling of cloth has received a lot of attention in recent years. While recent approaches are focused on woven cloth, we present a novel practical approach for the inference of more complex knitwear structures as well as the respective knitting instructions from only a single image without attached annotations. Knitwear is produced by repeating instances of the same pattern, consisting of grid-like arrangements of a small set of basic stitch types. Our framework addresses the identification and localization of the occurring stitch types, which is challenging due to huge appearance variations. The resulting coarsely localized stitch types are used to infer the underlying grid structure as well as for the extraction of the knitting instruction of pattern repeats, taking into account principles of Gestalt theory. Finally, the derived instructions allow the reproduction of the knitting structures, either as renderings or by actual knitting, as demonstrated in several examples.
null
http://openaccess.thecvf.com/content_CVPR_2019/html/Trunz_Inverse_Procedural_Modeling_of_Knitwear_CVPR_2019_paper.html
http://openaccess.thecvf.com/content_CVPR_2019/papers/Trunz_Inverse_Procedural_Modeling_of_Knitwear_CVPR_2019_paper.pdf
[]
[]
[]
G1lo1vGEBp
https://paperswithcode.com/paper/depth-fields-extending-light-field-techniques
Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging
A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics and many other applications. Unfortunately, each technique suffers from its individual limitations preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
1509.00816
http://arxiv.org/abs/1509.00816v1
http://arxiv.org/pdf/1509.00816v1.pdf
[]
[]
[]
ZHBS8lBvJY
https://paperswithcode.com/paper/rewriting-history-with-inverse-rl-hindsight
Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
Multi-task reinforcement learning (RL) aims to simultaneously learn policies for solving many tasks. Several prior works have found that relabeling past experience with different reward functions can improve sample efficiency. Relabeling methods typically ask: if, in hindsight, we assume that our experience was optimal for some task, for what task was it optimal? In this paper, we show that hindsight relabeling is inverse RL, an observation that suggests that we can use inverse RL in tandem with RL algorithms to efficiently solve many tasks. We use this idea to generalize goal-relabeling techniques from prior work to arbitrary classes of tasks. Our experiments confirm that relabeling data using inverse RL accelerates learning in general multi-task settings, including goal-reaching, domains with discrete sets of rewards, and those with linear reward functions.
2002.11089
https://arxiv.org/abs/2002.11089v1
https://arxiv.org/pdf/2002.11089v1.pdf
[]
[]
[]
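The relabeling step can be sketched as follows: score each candidate task by how optimal the logged trajectory looks for it, then relabel the transitions with the best-scoring task. Using the total reward as the optimality score is a simplification of the paper's soft-optimality (inverse-RL) posterior over tasks.

```python
def relabel_with_hindsight(trajectory, reward_fn, candidate_goals):
    """trajectory: list of (state, action) pairs from the replay buffer.
    reward_fn(state, action, goal) -> float defines the task family.
    Returns the transitions relabeled with the goal that makes the
    trajectory look most optimal (a stand-in for the inverse-RL posterior)."""
    def total_reward(goal):
        return sum(reward_fn(s, a, goal) for s, a in trajectory)

    best_goal = max(candidate_goals, key=total_reward)
    return [(s, a, best_goal, reward_fn(s, a, best_goal))
            for s, a in trajectory]
```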
B9EeRGm44y
https://paperswithcode.com/paper/evaluating-layers-of-representation-in-neural
Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks
While neural machine translation (NMT) models provide improved translation quality in an elegant, end-to-end framework, it is less clear what they learn about language. Recent work has started evaluating the quality of vector representations learned by NMT models on morphological and syntactic tasks. In this paper, we investigate the representations learned at different layers of NMT encoders. We train NMT systems on parallel data and use the trained models to extract features for training a classifier on two tasks: part-of-speech and semantic tagging. We then measure the performance of the classifier as a proxy to the quality of the original NMT model for the given task. Our quantitative analysis yields interesting insights regarding representation learning in NMT models. For instance, we find that higher layers are better at learning semantics while lower layers tend to be better for part-of-speech tagging. We also observe little effect of the target language on source-side representations, especially with higher quality NMT models.
1801.07772
http://arxiv.org/abs/1801.07772v1
http://arxiv.org/pdf/1801.07772v1.pdf
[ "Machine Translation", "Part-Of-Speech Tagging", "Representation Learning" ]
[]
[]
vwqx5oHhBh
https://paperswithcode.com/paper/visual-recognition-by-learning-from-web-data
Visual Recognition by Learning From Web Data: A Weakly Supervised Domain Generalization Approach
In this work, we formulate a new weakly supervised domain generalization problem for the visual recognition task by using loosely labeled web images/videos as training data. Specifically, we aim to address two challenging issues when learning robust classifiers: 1) enhancing the generalization capability of the learnt classifiers to any unseen target domain; and 2) coping with noise in the labels of training web images/videos in the source domain. To address the first issue, we assume the training web images/videos may come from multiple hidden domains with different data distributions. We then extend the multi-class SVM formulation to learn one classifier for each class and each latent domain such that multiple classifiers from each class can be effectively integrated to achieve better generalization capability. To address the second issue, we partition the training samples in each class into multiple clusters. By treating each cluster as a "bag" and the samples in each cluster as "instances", we formulate a new multi-instance learning (MIL) problem for domain generalization by selecting a subset of training samples from each training bag and simultaneously learning the optimal classifiers based on the selected samples. Moreover, we also extend our newly proposed Weakly Supervised Domain Generalization (WSDG) approach by taking advantage of the additional textual descriptions that are only available in the training web images/videos as privileged information. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our new approaches for visual recognition by learning from web data.
null
http://openaccess.thecvf.com/content_cvpr_2015/html/Niu_Visual_Recognition_by_2015_CVPR_paper.html
http://openaccess.thecvf.com/content_cvpr_2015/papers/Niu_Visual_Recognition_by_2015_CVPR_paper.pdf
[ "Domain Generalization" ]
[]
[]
j1yCtnFnqF
https://paperswithcode.com/paper/sparse-coding-of-shape-trajectories-for
Sparse Coding of Shape Trajectories for Facial Expression and Action Recognition
The detection and tracking of human landmarks in video streams has gained in reliability partly due to the availability of affordable RGB-D sensors. The analysis of such time-varying geometric data is playing an important role in the automatic understanding of human behavior. However, suitable shape representations as well as their temporal evolution, termed trajectories, often lie on nonlinear manifolds. This puts an additional constraint (i.e., nonlinearity) on using conventional Machine Learning techniques. As a solution, this paper accommodates the well-known Sparse Coding and Dictionary Learning approach to study time-varying shapes on the Kendall shape spaces of 2D and 3D landmarks. We illustrate effective coding of 3D skeletal sequences for action recognition and of 2D facial landmark sequences for macro- and micro-expression recognition. To overcome the inherent nonlinearity of the shape spaces, intrinsic and extrinsic solutions were explored. As the main results, shape trajectories give rise to more discriminative time series with suitable computational properties, including sparsity and vector space structure. Extensive experiments conducted on commonly-used datasets demonstrate the competitiveness of the proposed approaches with respect to the state-of-the-art.
1908.03231
https://arxiv.org/abs/1908.03231v1
https://arxiv.org/pdf/1908.03231v1.pdf
[ "Action Recognition", "Dictionary Learning", "Micro-Expression Recognition", "Time Series" ]
[]
[]
WJsW2lPESg
https://paperswithcode.com/paper/colour-terms-a-categorisation-model-inspired
Colour Terms: a Categorisation Model Inspired by Visual Cortex Neurons
Although it seems counter-intuitive, categorical colours do not exist as external physical entities but are very much the product of our brains. Our cortical machinery segments the world and associates objects with specific colour terms, which is not only convenient for communication but also increases the efficiency of visual processing by reducing the dimensionality of input scenes. Although the neural substrate for this phenomenon is unknown, a recent study of cortical colour processing has discovered a set of neurons that are isoresponsive to stimuli in the shape of 3D-ellipsoidal surfaces in colour-opponent space. We hypothesise that these neurons might help explain the underlying mechanisms of colour naming in the visual cortex. Following this, we propose a biologically-inspired colour naming model where each colour term - e.g. red, green, blue, yellow, etc. - is represented through an ellipsoid in 3D colour-opponent space. This paradigm is also supported by previous psychophysical colour categorisation experiments whose results resemble such shapes. The "belongingness" of each pixel to different colour categories is computed by a non-linear sigmoidal logistic function. The final colour term for a given pixel is calculated by a maximum pooling mechanism. The simplicity of our model allows its parameters to be learnt from a handful of segmented images. It also offers a straightforward extension to include further colour terms. Additionally, the ellipsoids of the proposed model can adapt to image content, offering a dynamical solution to address the phenomenon of colour constancy. Our results on the Munsell chart and two datasets of real-world images show an overall improvement compared to state-of-the-art algorithms.
1709.06300
http://arxiv.org/abs/1709.06300v1
http://arxiv.org/pdf/1709.06300v1.pdf
[]
[]
[]
s9mqdxw4Hk
https://paperswithcode.com/paper/learning-regularization-and-intensity
Learning regularization and intensity-gradient-based fidelity for single image super resolution
Extracting more useful information for single image super resolution is an imperative and difficult problem. Learning-based methods are representative for this task; however, their results are not always stable, as there may be a large difference between the training data and the test data. Regularization-based methods can effectively utilize the self-information of the observation, but the degradation model they use only considers the degradation in intensity space, so images may not be reconstructed well because the reflections of the degradation in various feature spaces are not considered. In this paper, we first study the image degradation process, and establish a degradation model in both intensity and gradient space. Thus, a comprehensive data-consistency constraint is established for the reconstruction, and more useful information can be extracted from the observed data. Second, the regularization term is learned by a designed symmetric residual deep neural network, which can search for similar external information from a predefined dataset while avoiding artificial bias. Finally, the proposed fidelity term and the designed regularization term are embedded into a regularization framework, and an optimization method is developed based on the half-quadratic splitting method and the pseudo conjugate method. Experimental results indicate that both the subjective and objective metrics of the proposed method are better than those obtained by the comparison methods.
2003.10689
https://arxiv.org/abs/2003.10689v1
https://arxiv.org/pdf/2003.10689v1.pdf
[ "Image Super-Resolution", "Super Resolution", "Super-Resolution" ]
[]
[]
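The dual-space data-consistency term can be written down directly: the degradation residual is penalized in intensity space and, with a balance weight, in gradient space. The learned regularization network is omitted here, the weight value is an assumption, and `degrade` stands for whatever blur-plus-downsampling operator is assumed.

```python
import numpy as np

def image_grad(img):
    """Forward-difference gradients with edge replication."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def dual_fidelity(x, y, degrade, lam=0.1):
    """Data consistency in intensity AND gradient space: penalize the
    residual between the degraded estimate degrade(x) and the observation
    y, plus the residual between their gradients. The paper's learned
    regularizer would be added on top of this term."""
    r = degrade(x) - y                     # intensity-space residual
    gx, gy = image_grad(degrade(x))
    hx, hy = image_grad(y)                 # gradient-space residuals
    return (r**2).sum() + lam * (((gx - hx)**2).sum() + ((gy - hy)**2).sum())

# toy degradation for experimentation: 2x decimation
# degrade = lambda x: x[::2, ::2]
```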
uZTWAmq47k
https://paperswithcode.com/paper/multi-class-classification-from-noisy
Multi-Class Classification from Noisy-Similarity-Labeled Data
A similarity label indicates whether two instances belong to the same class while a class label shows the class of the instance. Without class labels, a multi-class classifier could be learned from similarity-labeled pairwise data by meta classification learning. However, since the similarity label is less informative than the class label, it is more likely to be noisy. Deep neural networks can easily remember noisy data, leading to overfitting in classification. In this paper, we propose a method for learning from only noisy-similarity-labeled data. Specifically, to model the noise, we employ a noise transition matrix to bridge the class-posterior probability between clean and noisy data. We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances. Moreover, we theoretically justify how our proposed method generalizes for learning classifiers. Experimental results demonstrate the superiority of the proposed method over the state-of-the-art method on benchmark-simulated and real-world noisy-label datasets.
2002.06508
https://arxiv.org/abs/2002.06508v1
https://arxiv.org/pdf/2002.06508v1.pdf
[ "Multi-class Classification" ]
[]
[]
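The transition-matrix correction that the method builds on can be sketched in a few lines: push the model's clean class-posterior through T and take the cross-entropy against the noisy labels. Estimating T from noisy similarity labels, the paper's actual contribution, is not shown here.

```python
import numpy as np

def forward_corrected_nll(logits, noisy_labels, T):
    """Forward loss correction with a noise transition matrix T, where
    T[i, j] = P(noisy label j | clean label i): mix the clean posterior
    by T, then apply negative log-likelihood on the noisy labels."""
    z = logits - logits.max(axis=1, keepdims=True)           # stable softmax
    p_clean = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_noisy = p_clean @ T                                    # bridge clean -> noisy
    n = len(noisy_labels)
    return -np.log(p_noisy[np.arange(n), noisy_labels] + 1e-12).mean()
```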
GwpVB58Bkz
https://paperswithcode.com/paper/deceased-organ-matching-in-australia
Deceased Organ Matching in Australia
Despite efforts to increase the supply of organs from living donors, most kidney transplants performed in Australia still come from deceased donors. The age of these donated organs has increased substantially in recent decades as the rate of fatal accidents on roads has fallen. The Organ and Tissue Authority in Australia is therefore looking to design a new mechanism that better matches the age of the organ to the age of the patient. I discuss the design, axiomatics and performance of several candidate mechanisms that respect the special online nature of this fair division problem.
1710.06636
http://arxiv.org/abs/1710.06636v1
http://arxiv.org/pdf/1710.06636v1.pdf
[]
[]
[]
jVkcGVKj5i
https://paperswithcode.com/paper/captioning-images-with-novel-objects-via
Captioning Images with Novel Objects via Online Vocabulary Expansion
In this study, we introduce a low-cost method for generating descriptions from images containing novel objects. Generally, constructing a model that can explain images with novel objects is costly because of the following: (1) collecting a large amount of data for each category, and (2) retraining the entire system. If humans see a small number of novel objects, they are able to estimate their properties by associating their appearance with known objects. Accordingly, we propose a method that can explain images with novel objects without retraining, using the word embeddings of the objects estimated from only a small number of image features of the objects. The method can be integrated with general image-captioning models. The experimental results show the effectiveness of our approach.
2003.03305
https://arxiv.org/abs/2003.03305v1
https://arxiv.org/pdf/2003.03305v1.pdf
[ "Image Captioning", "Word Embeddings" ]
[]
[]
m6kN5yIOhq
https://paperswithcode.com/paper/semeval-2019-task-6-identifying-and
Towards NLP with Deep Learning: Convolutional Neural Networks and Recurrent Neural Networks for Offensive Language Identification in Social Media
This short paper presents the design decisions taken and challenges encountered in completing SemEval Task 6, which poses the problem of identifying and categorizing offensive language in tweets. Our proposed solutions explore Deep Learning techniques, Linear Support Vector classification and Random Forests to identify offensive tweets, to classify offenses as targeted or untargeted and eventually to identify the target subject type.
1903.00665
http://arxiv.org/abs/1903.00665v2
http://arxiv.org/pdf/1903.00665v2.pdf
[ "Language Identification" ]
[]
[]
Asy_U3HVap
https://paperswithcode.com/paper/uriel-and-lang2vec-representing-languages-as
URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors
We introduce the URIEL knowledge base for massively multilingual NLP and the lang2vec utility, which provides information-rich vector identifications of languages drawn from typological, geographical, and phylogenetic databases and normalized to have straightforward and consistent formats, naming, and semantics. The goal of URIEL and lang2vec is to enable multilingual NLP, especially on less-resourced languages and make possible types of experiments (especially but not exclusively related to NLP tasks) that are otherwise difficult or impossible due to the sparsity and incommensurability of the data sources. lang2vec vectors have been shown to reduce perplexity in multilingual language modeling, when compared to one-hot language identification vectors.
null
https://www.aclweb.org/anthology/E17-2002/
https://www.aclweb.org/anthology/E17-2002
[ "Language Identification", "Language Modelling" ]
[]
[]
Rww88izlhl
https://paperswithcode.com/paper/drone-squadron-optimization-a-self-adaptive
Drone Squadron Optimization: a Self-adaptive Algorithm for Global Numerical Optimization
This paper proposes Drone Squadron Optimization, a new self-adaptive metaheuristic for global numerical optimization which is updated online by a hyper-heuristic. DSO is an artifact-inspired technique, as opposed to many algorithms used nowadays, which are nature-inspired. DSO is very flexible because it is not related to behaviors or natural phenomena. DSO has two core parts: the semi-autonomous drones that fly over a landscape to explore, and the Command Center that processes the retrieved data and updates the drones' firmware whenever necessary. The self-adaptive aspect of DSO in this work is the perturbation/movement scheme, which is the procedure used to generate target coordinates. This procedure is evolved by the Command Center during the global optimization process in order to adapt DSO to the search landscape. DSO was evaluated on a set of widely employed benchmark functions. The statistical analysis of the results shows that the proposed method is competitive with the other methods in the comparison, the performance is promising, but several future improvements are planned.
1703.04561
http://arxiv.org/abs/1703.04561v1
http://arxiv.org/pdf/1703.04561v1.pdf
[]
[]
[]
uh8mG6rEWC
https://paperswithcode.com/paper/representing-verbs-as-argument-concepts
Representing Verbs as Argument Concepts
Verbs play an important role in the understanding of natural language text. This paper studies the problem of abstracting the subject and object arguments of a verb into a set of noun concepts, known as the "argument concepts". This set of concepts, whose size is parameterized, represents the fine-grained semantics of a verb. For example, the object of "enjoy" can be abstracted into time, hobby and event, etc. We present a novel framework to automatically infer human readable and machine computable action concepts with high accuracy.
1803.00729
http://arxiv.org/abs/1803.00729v1
http://arxiv.org/pdf/1803.00729v1.pdf
[]
[]
[]
cb2hntZCNZ
https://paperswithcode.com/paper/hotel-recommendation-system
Hotel Recommendation System
One of the first things to do while planning a trip is to book a good place to stay. Booking a hotel online can be an overwhelming task, with thousands of hotels to choose from for every destination. Motivated by the importance of these situations, we decided to work on the task of recommending hotels to users. We used Expedia's hotel recommendation dataset, which has a variety of features that helped us achieve a deep understanding of the process that makes a user choose certain hotels over others. The aim of this hotel recommendation task is to predict and recommend to a user the five hotel clusters that he/she is more likely to book, given a hundred distinct clusters.
1908.07498
https://arxiv.org/abs/1908.07498v2
https://arxiv.org/pdf/1908.07498v2.pdf
[]
[]
[]
mwlu59RAYg
https://paperswithcode.com/paper/investigating-engagement-intercultural-and
Investigating Engagement - intercultural and technological aspects of the collection, analysis, and use of the Estonian Multiparty Conversational video data
In this paper we describe the goals of the Estonian corpus collection and analysis activities, and introduce the recent collection of Estonian First Encounters data. The MINT project aims at deepening our understanding of the conversational properties and practices in human interactions. We especially investigate conversational engagement and cooperation, and discuss some observations concerning the participants' views on the interaction in which they have been engaged.
null
https://www.aclweb.org/anthology/L12-1001/
http://www.lrec-conf.org/proceedings/lrec2012/pdf/106_Paper.pdf
[]
[]
[]
jD-EHaQJbj
https://paperswithcode.com/paper/adaptive-mcmc-based-inference-in
Adaptive MCMC-Based Inference in Probabilistic Logic Programs
Probabilistic Logic Programming (PLP) languages enable programmers to specify systems that combine logical models with statistical knowledge. The inference problem, to determine the probability of query answers in PLP, is intractable in general, thereby motivating the need for approximate techniques. In this paper, we present a technique for approximate inference of conditional probabilities for PLP queries. It is an Adaptive Markov Chain Monte Carlo (MCMC) technique, where the distribution from which samples are drawn is modified as the Markov Chain is explored. In particular, the distribution is progressively modified to increase the likelihood that a generated sample is consistent with evidence. In our context, each sample is uniquely characterized by the outcomes of a set of random variables. Inspired by reinforcement learning, our technique propagates rewards to random variable/outcome pairs used in a sample based on whether the sample was consistent or not. The cumulative rewards of each outcome is used to derive a new "adapted distribution" for each random variable. For a sequence of samples, the distributions are progressively adapted after each sample. For a query with "Markovian evaluation structure", we show that the adapted distribution of samples converges to the query's conditional probability distribution. For Markovian queries, we present a modified adaptation process that can be used in adaptive MCMC as well as adaptive independent sampling. We empirically evaluate the effectiveness of the adaptive sampling methods for queries with and without Markovian evaluation structure.
1403.6036
http://arxiv.org/abs/1403.6036v1
http://arxiv.org/pdf/1403.6036v1.pdf
[]
[]
[]
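The reward-driven adaptation described above can be sketched for the independent-sampling variant. The toy model, evidence, and reward rule below are illustrative stand-ins for the paper's PLP setting:

```python
# Minimal sketch of the adaptive independent-sampling idea: per-outcome
# rewards reshape each variable's sampling distribution toward
# evidence-consistent samples. All names here are illustrative.
import random

# Two binary random variables with initial (prior) outcome weights.
weights = {"a": {True: 1.0, False: 1.0}, "b": {True: 1.0, False: 1.0}}

def sample_var(v):
    outs, ws = zip(*weights[v].items())
    return random.choices(outs, weights=ws, k=1)[0]

def consistent(sample):
    # Toy evidence: the query requires a OR b to hold.
    return sample["a"] or sample["b"]

for _ in range(10000):
    s = {v: sample_var(v) for v in weights}
    reward = 1.0 if consistent(s) else 0.0
    for v, outcome in s.items():
        weights[v][outcome] += reward  # cumulative reward adapts the proposal

# The adapted distributions now favor outcomes that tend to satisfy the evidence.
for v in weights:
    total = sum(weights[v].values())
    print(v, {o: w / total for o, w in weights[v].items()})
```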
XOuekPApar
https://paperswithcode.com/paper/towards-a-general-purpose-belief-maintenance
Towards a General-Purpose Belief Maintenance System
There currently exists a gap between the theories proposed by the probability and uncertainty communities and the needs of Artificial Intelligence research. These theories primarily address the needs of expert systems, using knowledge structures which must be pre-compiled and remain static in structure during runtime. Many AI systems require the ability to dynamically add and remove parts of the current knowledge structure (e.g., in order to examine what the world would be like for different causal theories). This requires more flexibility than existing uncertainty systems display. In addition, many AI researchers are only interested in using "probabilities" as a means of obtaining an ordering, rather than attempting to derive an accurate probabilistic account of a situation. This indicates the need for systems which stress ease of use and don't require extensive probability information when one cannot (or doesn't wish to) provide such information. This paper attempts to help reconcile the gap between approaches to uncertainty and the needs of many AI systems by examining the control issues which arise, independent of a particular uncertainty calculus, when one tries to satisfy these needs. Truth Maintenance Systems have been used extensively in problem solving tasks to help organize a set of facts and detect inconsistencies in the believed state of the world. These systems maintain a set of true/false propositions and their associated dependencies. However, situations often arise in which we are unsure of certain facts or in which the conclusions we can draw from available information are somewhat uncertain. The non-monotonic TMS [12] was an attempt at reasoning when all the facts are not known, but it fails to take into account degrees of belief and how available evidence can combine to strengthen a particular belief. This paper addresses the problem of probabilistic reasoning as it applies to Truth Maintenance Systems. It describes a Belief Maintenance System that manages a current set of beliefs in much the same way that a TMS manages a set of true/false propositions. If the system knows that belief in fact1 is dependent in some way upon belief in fact2, then it automatically modifies its belief in fact1 when new information causes a change in belief of fact2. It models the behavior of a TMS, replacing its 3-valued logic (true, false, unknown) with an infinite-valued logic, in such a way as to reduce to a standard TMS if all statements are given in absolute true/false terms. Belief Maintenance Systems can, therefore, be thought of as a generalization of Truth Maintenance Systems, whose possible reasoning tasks are a superset of those for a TMS.
1304.3084
http://arxiv.org/abs/1304.3084v1
http://arxiv.org/pdf/1304.3084v1.pdf
[]
[]
[]
fisqpg-Xli
https://paperswithcode.com/paper/outlying-property-detection-with-numerical
Outlying Property Detection with Numerical Attributes
The outlying property detection problem is the problem of discovering the properties distinguishing a given object, known in advance to be an outlier in a database, from the other database objects. In this paper, we analyze the problem within a context where numerical attributes are taken into account, which represents a relevant case left open in the literature. We introduce a measure to quantify the degree of outlierness of an object, which is associated with the relative likelihood of its value compared to the relative likelihood of other objects in the database. As a major contribution, we present an efficient algorithm to compute the outlierness relative to significant subsets of the data. The latter subsets are characterized in a "rule-based" fashion, and hence form the basis for the underlying explanation of the outlierness.
1306.3558
http://arxiv.org/abs/1306.3558v1
http://arxiv.org/pdf/1306.3558v1.pdf
[]
[]
[]
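A minimal sketch of the likelihood-based outlierness idea for a single numerical attribute, using a kernel density estimate as a stand-in for the paper's measure (the rule-based subset characterization is not reproduced):

```python
# Illustrative only: score each object by how unlikely its value is relative
# to the likelihood of the other objects' values.
import numpy as np
from scipy.stats import gaussian_kde

values = np.concatenate([np.random.normal(0, 1, 200), [8.0]])  # 8.0 is planted
kde = gaussian_kde(values)
dens = kde(values)

# Higher score = lower relative likelihood = more outlying.
outlierness = 1.0 - dens / dens.max()
print(values[np.argmax(outlierness)])  # recovers the planted outlier, 8.0
```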
7cfXTaUyGr
https://paperswithcode.com/paper/approximating-map-by-compensating-for
Approximating MAP by Compensating for Structural Relaxations
We introduce a new perspective on approximations to the maximum a posteriori (MAP) task in probabilistic graphical models that is based on simplifying a given instance and then tightening the approximation. First, we start with a structural relaxation of the original model. We then infer the deficiencies of the relaxation and compensate for them. This perspective allows us to identify two distinct classes of approximations. First, we find that max-product belief propagation can be viewed as a way to compensate for a relaxation, based on a particular idealized case for exactness. We identify a second approach to compensation that is based on a more refined idealized case, resulting in a new approximation with distinct properties. We go on to propose a new class of algorithms that, starting with a relaxation, iteratively yield tighter approximations.
null
http://papers.nips.cc/paper/3768-approximating-map-by-compensating-for-structural-relaxations
http://papers.nips.cc/paper/3768-approximating-map-by-compensating-for-structural-relaxations.pdf
[]
[]
[]
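Since max-product belief propagation is the baseline that the abstract reinterprets, here is a minimal sketch of max-product message passing on a two-node chain MRF with illustrative potentials; on a tree, max-product recovers the exact MAP assignment:

```python
# Max-product messages on a two-node chain MRF (toy potentials).
import numpy as np

phi1 = np.array([0.6, 0.4])               # unary potential for x1
phi2 = np.array([0.3, 0.7])               # unary potential for x2
psi = np.array([[1.0, 0.2], [0.2, 1.0]])  # pairwise potential psi(x1, x2)

# Message from x1 to x2: max over x1 of phi1(x1) * psi(x1, x2), and vice versa.
m12 = (phi1[:, None] * psi).max(axis=0)
m21 = (phi2[None, :] * psi).max(axis=1)

x1_map = int(np.argmax(phi1 * m21))
x2_map = int(np.argmax(phi2 * m12))
print(x1_map, x2_map)  # (1, 1), the exact MAP for these potentials
```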
wb_dWBr6S1
https://paperswithcode.com/paper/incoherence-optimal-matrix-completion
Incoherence-Optimal Matrix Completion
This paper considers the matrix completion problem. We show that it is not necessary to assume joint incoherence, which is a standard but unintuitive and restrictive condition that is imposed by previous studies. This leads to a sample complexity bound that is order-wise optimal with respect to the incoherence parameter (as well as to the rank $r$ and the matrix dimension $n$ up to a log factor). As a consequence, we improve the sample complexity of recovering a semidefinite matrix from $O(nr^{2}\log^{2}n)$ to $O(nr\log^{2}n)$, and the highest allowable rank from $\Theta(\sqrt{n}/\log n)$ to $\Theta(n/\log^{2}n)$. The key step in the proof is to obtain new bounds on the $\ell_{\infty,2}$-norm, defined as the maximum of the row and column norms of a matrix. To illustrate the applicability of our techniques, we discuss extensions to SVD projection, structured matrix completion and semi-supervised clustering, for which we provide order-wise improvements over existing results. Finally, we turn to the closely related problem of low-rank-plus-sparse matrix decomposition. We show that the joint incoherence condition is unavoidable here for polynomial-time algorithms conditioned on the Planted Clique conjecture. This means it is intractable in general to separate a rank-$\omega(\sqrt{n})$ positive semidefinite matrix and a sparse matrix. Interestingly, our results show that the standard and joint incoherence conditions are associated respectively with the information (statistical) and computational aspects of the matrix decomposition problem.
1310.0154
http://arxiv.org/abs/1310.0154v4
http://arxiv.org/pdf/1310.0154v4.pdf
[ "Matrix Completion" ]
[]
[]
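The abstract defines the $\ell_{\infty,2}$-norm verbally; in symbols, with $M_{i,:}$ and $M_{:,j}$ denoting the $i$-th row and $j$-th column of $M$:

```latex
% The l_{infty,2}-norm from the abstract: the maximum of the row and
% column Euclidean norms of a matrix M.
\[
  \|M\|_{\infty,2} \;=\; \max\Bigl\{ \max_{i}\,\|M_{i,:}\|_{2},\; \max_{j}\,\|M_{:,j}\|_{2} \Bigr\}
\]
```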
GNA32bUKW8
https://paperswithcode.com/paper/low-cost-measurement-of-industrial-shock
Low-cost Measurement of Industrial Shock Signals via Deep Learning Calibration
Special high-end sensors with expensive hardware are usually needed to measure shock signals with high accuracy. In this paper, we show that cheap low-end sensors calibrated by deep neural networks are also capable of measuring high-g shocks accurately. First, we perform drop shock tests to collect a dataset of shock signals measured by sensors of different fidelity. Second, we propose a novel network to effectively learn both the signal peak and overall shape. The results show that the proposed network is capable of mapping low-end shock signals to their high-end counterparts with satisfactory accuracy. To the best of our knowledge, this is the first work to apply deep learning techniques to calibrate shock sensors.
1902.02829
http://arxiv.org/abs/1902.02829v1
http://arxiv.org/pdf/1902.02829v1.pdf
[]
[]
[]
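For illustration only, a small 1D convolutional regressor mapping a low-end trace to a high-end target; the paper's actual architecture, designed to learn both the signal peak and the overall shape, is not reproduced here, and the data below is random placeholder:

```python
# Sketch of signal-to-signal calibration with a tiny 1D CNN (illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=9, padding=4),
)

low_end = torch.randn(8, 1, 1024)   # batch of placeholder low-fidelity traces
high_end = torch.randn(8, 1, 1024)  # matching high-fidelity targets

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(low_end), high_end)
    loss.backward()
    opt.step()
```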
4GpzO6n_JW
https://paperswithcode.com/paper/scheduled-restart-momentum-for-accelerated
Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
Stochastic gradient descent (SGD) with constant momentum and its variants such as Adam are the optimization algorithms of choice for training deep neural networks (DNNs). Since DNN training is incredibly computationally expensive, there is great interest in speeding up the convergence. Nesterov accelerated gradient (NAG) improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (such as in SGD), slowing convergence at best and diverging at worst. In this paper, we propose Scheduled Restart SGD (SRSGD), a new NAG-style scheme for training DNNs. SRSGD replaces the constant momentum in SGD by the increasing momentum in NAG but stabilizes the iterations by resetting the momentum to zero according to a schedule. Using a variety of models and benchmarks for image classification, we demonstrate that, in training DNNs, SRSGD significantly improves convergence and generalization; for instance in training ResNet200 for ImageNet classification, SRSGD achieves an error rate of 20.93% vs. the benchmark of 22.13%. These improvements become more significant as the network grows deeper. Furthermore, on both CIFAR and ImageNet, SRSGD reaches similar or even better error rates with significantly fewer training epochs compared to the SGD baseline.
2002.10583
https://arxiv.org/abs/2002.10583v2
https://arxiv.org/pdf/2002.10583v2.pdf
[ "Image Classification" ]
[ "Nesterov Accelerated Gradient", "Adam", "SGD" ]
[]
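A minimal sketch of the scheduled-restart idea on a toy quadratic, assuming the common NAG momentum schedule mu_t = (t - 1)/(t + 2) so that a restart (t = 1) resets the momentum to zero; the restart interval and learning rate are illustrative, not the paper's tuned values:

```python
# Scheduled-restart Nesterov momentum on f(x) = x^2 (toy sketch only).
def srsgd_step(x, y_prev, grad, lr, t):
    mu = (t - 1.0) / (t + 2.0)       # increasing NAG-style momentum
    y = x - lr * grad(x)             # gradient step
    x_next = y + mu * (y - y_prev)   # momentum extrapolation
    return x_next, y

grad = lambda x: 2.0 * x             # gradient of f(x) = x^2
x, y_prev, t = 5.0, 5.0, 1
restart_every = 40                   # illustrative restart schedule
for k in range(200):
    x, y_prev = srsgd_step(x, y_prev, grad, lr=0.1, t=t)
    t = 1 if (k + 1) % restart_every == 0 else t + 1  # scheduled restart
print(x)                             # close to the minimizer 0
```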
Yll_v0ViBf
https://paperswithcode.com/paper/on-the-prediction-performance-of-the-lasso
On the Prediction Performance of the Lasso
Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood. In this paper, we give new insights into this relationship in the context of multiple linear regression. We show, in particular, that the incorporation of a simple correlation measure into the tuning parameter can lead to a nearly optimal prediction performance of the Lasso even for highly correlated covariates. However, we also reveal that for moderately correlated covariates, the prediction performance of the Lasso can be mediocre irrespective of the choice of the tuning parameter. We finally show that our results also lead to near-optimal rates for the least-squares estimator with total variation penalty.
1402.1700
http://arxiv.org/abs/1402.1700v2
http://arxiv.org/pdf/1402.1700v2.pdf
[]
[]
[]
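For reference, the Lasso estimator whose tuning parameter $\lambda$ the abstract discusses (normalization conventions for the quadratic term vary across papers):

```latex
% The Lasso in the multiple linear regression model y = X\beta^{*} + \varepsilon,
% with tuning parameter \lambda > 0.
\[
  \hat{\beta}_{\lambda} \;\in\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}}
  \Bigl\{ \tfrac{1}{n}\,\lVert y - X\beta \rVert_{2}^{2} \;+\; \lambda\,\lVert \beta \rVert_{1} \Bigr\}
\]
```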
qM5lq-rl3a
https://paperswithcode.com/paper/robust-point-set-registration-using-gaussian
Robust Point Set Registration Using Gaussian Mixture Models
In this paper, we present a unified framework for the rigid and nonrigid point set registration problem in the presence of significant amounts of noise and outliers. The key idea of this registration framework is to represent the input point sets using Gaussian mixture models. Then, the problem of point set registration is reformulated as the problem of aligning two Gaussian mixtures such that a statistical discrepancy measure between the two corresponding mixtures is minimized. We show that the popular iterative closest point (ICP) method and several existing point set registration methods in the field are closely related and can be reinterpreted meaningfully in our general framework. Our instantiation of this general framework is based on the L2 distance between two Gaussian mixtures, which has a closed-form expression and in turn leads to a computationally efficient registration algorithm. The resulting registration algorithm exhibits inherent statistical robustness, has an intuitive interpretation, and is simple to implement. We also provide theoretical and experimental comparisons with other robust methods for point set registration.
null
https://ieeexplore.ieee.org/document/5674050
https://github.com/bing-jian/gmmreg/blob/master/gmmreg_PAMI_preprint.pdf
[ "3D Point Cloud Matching", "Point Cloud Registration" ]
[]
[]
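A sketch of the closed-form L2 distance between two equally weighted isotropic Gaussian mixtures, the discrepancy measure named in the abstract; the bandwidth and point sets below are illustrative:

```python
# Closed-form L2 distance between isotropic GMMs built on two point sets.
import numpy as np
from scipy.stats import multivariate_normal

def gauss_overlap(mu1, mu2, s1, s2):
    # \int N(x; mu1, s1^2 I) N(x; mu2, s2^2 I) dx has a Gaussian closed form.
    d = len(mu1)
    return multivariate_normal.pdf(mu1 - mu2, mean=np.zeros(d),
                                   cov=(s1**2 + s2**2) * np.eye(d))

def l2_distance(pts_a, pts_b, sigma=0.5):
    # Each point set is modeled as an equally weighted GMM with bandwidth sigma.
    def cross(A, B):
        return np.mean([gauss_overlap(a, b, sigma, sigma) for a in A for b in B])
    return cross(pts_a, pts_a) - 2 * cross(pts_a, pts_b) + cross(pts_b, pts_b)

A = np.random.randn(20, 2)
B = A + 0.1  # a slightly shifted copy; a small L2 distance is expected
print(l2_distance(A, B))
```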
XbE7xk33Te
https://paperswithcode.com/paper/biased-aggregation-rollout-and-enhanced
Biased Aggregation, Rollout, and Enhanced Policy Improvement for Reinforcement Learning
We propose a new aggregation framework for approximate dynamic programming, which provides a connection with rollout algorithms, approximate policy iteration, and other single- and multistep lookahead methods. The central novel characteristic is the use of a bias function $V$ of the state, which biases the values of the aggregate cost function towards their correct levels. The classical aggregation framework is obtained when $V\equiv0$, but our scheme works best when $V$ is a known reasonably good approximation to the optimal cost function $J^*$. When $V$ is equal to the cost function $J_{\mu}$ of some known policy $\mu$ and there is only one aggregate state, our scheme is equivalent to the rollout algorithm based on $\mu$ (i.e., the result of a single policy improvement starting with the policy $\mu$). When $V=J_{\mu}$ and there are multiple aggregate states, our aggregation approach can be used as a more powerful form of improvement of $\mu$. Thus, when combined with an approximate policy evaluation scheme, our approach can form the basis for a new and enhanced form of approximate policy iteration. When $V$ is a generic bias function, our scheme is equivalent to approximation in value space with lookahead function equal to $V$ plus a local correction within each aggregate state. The local correction levels are obtained by solving a low-dimensional aggregate DP problem, yielding an arbitrarily close approximation to $J^*$, when the number of aggregate states is sufficiently large. Except for the bias function, the aggregate DP problem is similar to the one of the classical aggregation framework, and its algorithmic solution by simulation or other methods is nearly identical to that for classical aggregation, assuming values of $V$ are available when needed.
1910.02426
https://arxiv.org/abs/1910.02426v1
https://arxiv.org/pdf/1910.02426v1.pdf
[]
[]
[]
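A minimal sketch of the rollout special case mentioned in the abstract (one-step lookahead using the base policy's cost-to-go $J_\mu$ as the terminal value), on a toy deterministic chain; all quantities below are illustrative:

```python
# Rollout on a 5-state chain: move left/right at unit cost, state 4 is the goal.
N_STATES = 5

def step(s, a):                   # a in {-1, +1}; returns (next state, cost)
    return max(0, min(N_STATES - 1, s + a)), 1.0

def J_mu(s):                      # cost-to-go of the base policy mu ("always +1")
    return float(N_STATES - 1 - s)

def rollout_action(s):
    # One-step lookahead with J_mu as terminal cost: a single policy
    # improvement starting from mu, as in the abstract's special case.
    return min((-1, +1), key=lambda a: step(s, a)[1] + J_mu(step(s, a)[0]))

s, total = 0, 0.0
while s != N_STATES - 1:
    a = rollout_action(s)
    s, c = step(s, a)
    total += c
print(total)  # 4.0: rollout matches the optimal cost on this toy chain
```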
rqwbun4X08
https://paperswithcode.com/paper/training-recurrent-neural-networks-via
Training Recurrent Neural Networks via Dynamical Trajectory-Based Optimization
This paper introduces a new method to train recurrent neural networks using dynamical trajectory-based optimization. The optimization method utilizes a projected gradient system (PGS) and a quotient gradient system (QGS) to determine the feasible regions of an optimization problem and search the feasible regions for local minima. By exploring the feasible regions, local minima are identified and the local minimum with the lowest cost is chosen as the global minimum of the optimization problem. Lyapunov theory is used to prove the stability of the local minima, including their stability in the presence of measurement errors. Numerical examples show that the new approach provides better results than networks trained with genetic algorithms or error backpropagation (EBP).
1805.04152
http://arxiv.org/abs/1805.04152v1
http://arxiv.org/pdf/1805.04152v1.pdf
[]
[]
[]
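As a loose illustration of the projected gradient system component, a basic projected gradient iteration on a box-constrained scalar problem (the full trajectory-based PGS/QGS method is not reproduced):

```python
# Projected gradient descent on f(x) = (x - 2)^2 over the box [-1, 1].
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)  # projection onto the feasible box

grad = lambda x: 2.0 * (x - 2.0)   # gradient of f(x) = (x - 2)^2
x = np.array([0.0])
for _ in range(100):
    x = project(x - 0.1 * grad(x))
print(x)  # converges to 1.0, the boundary point of the feasible box
```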
j4CTxadikm
https://paperswithcode.com/paper/investigating-human-priors-for-playing-video
Investigating Human Priors for Playing Video Games
What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors on human performance. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play. Videos and the game manipulations are available at https://rach0012.github.io/humanRL_website/
1802.10217
http://arxiv.org/abs/1802.10217v3
http://arxiv.org/pdf/1802.10217v3.pdf
[ "Decision Making" ]
[]
[]