Columns:
paper_id: string (length 10)
paper_url: string (lengths 37 to 80)
title: string (lengths 4 to 518)
abstract: string (lengths 3 to 7.27k)
arxiv_id: string (lengths 9 to 16)
url_abs: string (lengths 18 to 601)
url_pdf: string (lengths 21 to 601)
aspect_tasks: sequence
aspect_methods: sequence
aspect_datasets: sequence
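As a minimal sketch of how records with this schema can be consumed (assuming, purely for illustration, that the records below are exported as a JSON Lines file named papers.jsonl; the filename, the load_records helper, and the filtering step are not part of the dataset itself), the following Python snippet reads the records and keeps papers that carry at least one aspect_tasks annotation:

```python
import json
from typing import Dict, Iterator

def load_records(path: str = "papers.jsonl") -> Iterator[Dict]:
    """Yield one record per line from a hypothetical JSON Lines export of this table."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def papers_with_tasks(path: str = "papers.jsonl") -> Iterator[Dict]:
    """Keep only records whose aspect_tasks sequence is non-empty."""
    for record in load_records(path):
        # aspect_tasks, aspect_methods and aspect_datasets are sequences
        # (lists of strings); arxiv_id may be null for non-arXiv papers.
        if record.get("aspect_tasks"):
            yield record

if __name__ == "__main__":
    for record in papers_with_tasks():
        print(record["paper_id"], record["title"], record["aspect_tasks"])
```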
JcLcErsT3A
https://paperswithcode.com/paper/unsupervised-topic-modeling-approaches-to
Unsupervised Topic Modeling Approaches to Decision Summarization in Spoken Meetings
We present a token-level decision summarization framework that utilizes the latent topic structures of utterances to identify "summary-worthy" words. Concretely, a series of unsupervised topic models is explored and experimental results show that fine-grained topic models, which discover topics at the utterance-level rather than the document-level, can better identify the gist of the decision-making process. Moreover, our proposed token-level summarization approach, which is able to remove redundancies within utterances, outperforms existing utterance ranking based summarization methods. Finally, context information is also investigated to add additional relevant information to the summary.
1606.07829
http://arxiv.org/abs/1606.07829v1
http://arxiv.org/pdf/1606.07829v1.pdf
[ "Decision Making", "Topic Models" ]
[]
[]
XaIuQeCcrU
https://paperswithcode.com/paper/neural-inverse-rendering-for-general
Neural Inverse Rendering for General Reflectance Photometric Stereo
We present a novel convolutional neural network architecture for photometric stereo (Woodham, 1980), a problem of recovering 3D object surface normals from multiple images observed under varying illuminations. Despite its long history in computer vision, the problem still shows fundamental challenges for surfaces with unknown general reflectance properties (BRDFs). Leveraging deep neural networks to learn complicated reflectance models is promising, but studies in this direction are very limited due to difficulties in acquiring accurate ground truth for training and also in designing networks invariant to permutation of input images. In order to address these challenges, we propose a physics based unsupervised learning framework where surface normals and BRDFs are predicted by the network and fed into the rendering equation to synthesize observed images. The network weights are optimized during testing by minimizing reconstruction loss between observed and synthesized images. Thus, our learning process does not require ground truth normals or even pre-training on external images. Our method is shown to achieve the state-of-the-art performance on a challenging real-world scene benchmark.
1802.10328
http://arxiv.org/abs/1802.10328v2
http://arxiv.org/pdf/1802.10328v2.pdf
[]
[]
[]
yW5Kx_2y7v
https://paperswithcode.com/paper/properties-of-phoneme-n-grams-across-the
Properties of phoneme N-grams across the world's language families
In this article, we investigate the properties of phoneme N-grams across half of the world's languages. We investigate if the sizes of three different N-gram distributions of the world's language families obey a power law. Further, the N-gram distributions of language families parallel the sizes of the families, which seem to obey a power law distribution. The correlation between N-gram distributions and language family sizes improves with increasing values of N. We applied statistical tests, originally given by physicists, to test the hypothesis of power law fit to twelve different datasets. The study also raises some new questions about the use of N-gram distributions in linguistic research, which we answer by running a statistical test.
1401.0794
http://arxiv.org/abs/1401.0794v1
http://arxiv.org/pdf/1401.0794v1.pdf
[]
[]
[]
ejM4XgukeC
https://paperswithcode.com/paper/deep-reinforcement-learning-with-relational
Deep reinforcement learning with relational inductive biases
We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability. Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene. In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster-level on four. In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded those of non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training. Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions. The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases. Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.
null
https://openreview.net/forum?id=HkxaFoC9KQ
https://openreview.net/pdf?id=HkxaFoC9KQ
[ "Relational Reasoning", "Starcraft", "Starcraft II" ]
[]
[]
iKWtjuR001
https://paperswithcode.com/paper/generalized-byzantine-tolerant-sgd
Generalized Byzantine-tolerant SGD
We propose three new robust aggregation rules for distributed synchronous Stochastic Gradient Descent~(SGD) under a general Byzantine failure model. The attackers can arbitrarily manipulate the data transferred between the servers and the workers in the parameter server~(PS) architecture. We prove the Byzantine resilience properties of these aggregation rules. Empirical analysis shows that the proposed techniques outperform current approaches for realistic use cases and Byzantine attack scenarios.
1802.10116
http://arxiv.org/abs/1802.10116v3
http://arxiv.org/pdf/1802.10116v3.pdf
[]
[]
[]
N15l-ypYs9
https://paperswithcode.com/paper/arabic-language-text-classification-using
Arabic Language Text Classification Using Dependency Syntax-Based Feature Selection
We study the performance of Arabic text classification combining various techniques: (a) tfidf vs. dependency syntax, for feature selection and weighting; (b) class association rules vs. support vector machines, for classification. The Arabic text is used in two forms: rootified and lightly stemmed. The results we obtain show that lightly stemmed text leads to better performance than rootified text; that class association rules are better suited for small feature sets obtained by dependency syntax constraints; and, finally, that support vector machines are better suited for large feature sets based on morphological feature selection criteria.
1410.4863
http://arxiv.org/abs/1410.4863v1
http://arxiv.org/pdf/1410.4863v1.pdf
[ "Feature Selection", "Text Classification" ]
[]
[]
BK2oiT5Bp5
https://paperswithcode.com/paper/identifying-short-term-interests-from-mobile
Identifying short-term interests from mobile app adoption pattern
With the increase in an average user's dependence on their mobile devices, the reliance on collecting their browsing history from mobile browsers has also increased. This browsing history is highly utilized in the advertising industry to infer the user's short-term interests and push relevant targeted ads. However, the major limitation of such extraction from mobile browsers is that the history resets when the browser is closed or when the device is shut down/restarted, thus rendering existing methods for identifying the user's short-term interests on mobile devices ineffective. In this paper, we propose an alternative method to identify such short-term interests by analysing the user's mobile app adoption (installation/uninstallation) patterns over a period of time. Such a method can be highly effective in pinpointing the user's ephemeral inclinations, like buying/renting an apartment, buying/selling a car, or a sudden increased interest in shopping (possibly due to a recent salary bonus). Subsequently, these derived interests are also used for targeted experiments. Our experiments result in up to 93.68% higher click-through rate in comparison to the ads shown without any user-interest knowledge. Also, up to 51% higher revenue in the long term is expected as a result of the application of our proposed algorithm.
1904.11388
http://arxiv.org/abs/1904.11388v1
http://arxiv.org/pdf/1904.11388v1.pdf
[]
[]
[]
ahuqmSCGP_
https://paperswithcode.com/paper/applying-naive-bayes-classification-to-google
Applying Naive Bayes Classification to Google Play Apps Categorization
There are over one million apps on Google Play Store and over half a million publishers. Having such a huge number of apps and developers can pose a challenge to app users and new publishers on the store. Discovering apps can be challenging if apps are not published in the right category, which, in turn, reduces earnings for app developers. Additionally, with over 41 categories on Google Play Store, deciding on the right category to publish an app can be challenging for developers due to the number of categories they have to choose from. Machine learning has been very useful, especially in classification problems such as sentiment analysis, document classification and spam detection. These strategies can also be applied to app categorization on Google Play Store to suggest appropriate categories for app publishers using details from their application. In this project, we built two variations of the Naive Bayes classifier using open metadata from top developer apps on Google Play Store in order to classify new apps on the store. These classifiers are then evaluated using various evaluation methods and their results compared against each other. The results show that the Naive Bayes algorithm performs well for our classification problem and can potentially automate app categorization for Android app publishers on Google Play Store.
1608.08574
http://arxiv.org/abs/1608.08574v1
http://arxiv.org/pdf/1608.08574v1.pdf
[ "Document Classification", "Sentiment Analysis" ]
[]
[]
-KZNTKpIN7
https://paperswithcode.com/paper/etymological-wordnet-tracing-the-history-of
Etymological Wordnet: Tracing The History of Words
Research on the history of words has led to remarkable insights about language and also about the history of human civilization more generally. This paper presents the Etymological Wordnet, the first database that aims at making word origin information available as a large, machine-readable network of words in many languages. The information in this resource is obtained from Wiktionary. Extracting a network of etymological information from Wiktionary requires significant effort, as much of the etymological information is only given in prose. We rely on custom pattern matching techniques and mine a large network with over 500,000 word origin links as well as over 2 million derivational/compositional links.
null
https://www.aclweb.org/anthology/L14-1063/
http://www.lrec-conf.org/proceedings/lrec2014/pdf/1083_Paper.pdf
[]
[]
[]
vWvMrCwLG6
https://paperswithcode.com/paper/realizing-half-diminished-reality-from-video
Realizing Half-Diminished Reality from Video Stream of Manipulating Objects
When we watch a video in which human hands manipulate objects, these hands may obscure parts of those objects. We aim to make clear how the objects are manipulated by rendering the image of the hands semi-transparent and showing the complete images of the hands and the object. By carefully choosing a Half-Diminished Reality method, this paper proposes an approach that can process the video in real time and verifies that it works well.
1709.08340
http://arxiv.org/abs/1709.08340v1
http://arxiv.org/pdf/1709.08340v1.pdf
[]
[]
[]
x6RWq1D_j0
https://paperswithcode.com/paper/learning-with-fredholm-kernels
Learning with Fredholm Kernels
In this paper we propose a framework for supervised and semi-supervised learning based on reformulating the learning problem as a regularized Fredholm integral equation. Our approach fits naturally into the kernel framework and can be interpreted as constructing new data-dependent kernels, which we call Fredholm kernels. We proceed to discuss the "noise assumption" for semi-supervised learning and provide evidence, both theoretical and experimental, that Fredholm kernels can effectively utilize unlabeled data under the noise assumption. We demonstrate that methods based on Fredholm learning show very competitive performance in the standard semi-supervised learning setting.
null
http://papers.nips.cc/paper/5237-learning-with-fredholm-kernels
http://papers.nips.cc/paper/5237-learning-with-fredholm-kernels.pdf
[]
[]
[]
SP5YIDgZa-
https://paperswithcode.com/paper/sequential-neural-methods-for-likelihood-free
Sequential Neural Methods for Likelihood-free Inference
Likelihood-free inference refers to inference when a likelihood function cannot be explicitly evaluated, which is often the case for models based on simulators. Most of the literature is based on sample-based "Approximate Bayesian Computation" methods, but recent work suggests that approaches based on deep neural conditional density estimators can obtain state-of-the-art results with fewer simulations. The neural approaches vary in how they choose which simulations to run and what they learn: an approximate posterior or a surrogate likelihood. This work provides some direct controlled comparisons between these choices.
1811.08723
http://arxiv.org/abs/1811.08723v1
http://arxiv.org/pdf/1811.08723v1.pdf
[]
[]
[]
5HW0dPNFj4
https://paperswithcode.com/paper/asynchronous-advantage-actor-critic-agent-for
Asynchronous Advantage Actor-Critic Agent for Starcraft II
Deep reinforcement learning, and especially the Asynchronous Advantage Actor-Critic algorithm, has been successfully used to achieve super-human performance in a variety of video games. Starcraft II is a new challenge for the reinforcement learning community with the release of the pysc2 learning environment proposed by Google Deepmind and Blizzard Entertainment. Although the environment has been a target for several AI developers, few have achieved human-level performance. In this project we explain the complexities of this environment and discuss the results from our experiments on it. We have compared various architectures and have shown that transfer learning can be an effective paradigm in reinforcement learning research for complex scenarios requiring skill transfer.
1807.08217
http://arxiv.org/abs/1807.08217v1
http://arxiv.org/pdf/1807.08217v1.pdf
[ "Starcraft", "Starcraft II", "Transfer Learning" ]
[]
[]
pljjNOCyjC
https://paperswithcode.com/paper/population-contrastive-divergence-does
Population-Contrastive-Divergence: Does Consistency help with RBM training?
Estimating the log-likelihood gradient with respect to the parameters of a Restricted Boltzmann Machine (RBM) typically requires sampling using Markov Chain Monte Carlo (MCMC) techniques. To save computation time, the Markov chains are only run for a small number of steps, which leads to a biased estimate. This bias can cause RBM training algorithms such as Contrastive Divergence (CD) learning to deteriorate. We adopt the idea behind Population Monte Carlo (PMC) methods to devise a new RBM training algorithm termed Population-Contrastive-Divergence (pop-CD). Compared to CD, it leads to a consistent estimate and may have a significantly lower bias. Its computational overhead is negligible compared to CD. However, the variance of the gradient estimate increases. We experimentally show that pop-CD can significantly outperform CD. In many cases, we observed a smaller bias and achieved higher log-likelihood values. However, when the RBM distribution has many hidden neurons, the consistent estimate of pop-CD may still have a considerable bias and the variance of the gradient estimate requires a smaller learning rate. Thus, despite its superior theoretical properties, it is not advisable to use pop-CD in its current form on large problems.
1510.01624
http://arxiv.org/abs/1510.01624v4
http://arxiv.org/pdf/1510.01624v4.pdf
[]
[]
[]
1x9IQ_84SK
https://paperswithcode.com/paper/quantum-medical-imaging-algorithms
Quantum Medical Imaging Algorithms
A central task in medical imaging is the reconstruction of an image or function from data collected by medical devices (e.g., CT, MRI, and PET scanners). We provide quantum algorithms for image reconstruction that can offer exponential speedup over classical counterparts when data is fed into the algorithm as a quantum state. Since outputs of our algorithms are stored in quantum states, individual pixels of reconstructed images may not be efficiently accessed classically; instead, we discuss various methods to extract information from quantum outputs using a variety of quantum post-processing algorithms.
2004.02036
https://arxiv.org/abs/2004.02036v1
https://arxiv.org/pdf/2004.02036v1.pdf
[ "Image Reconstruction" ]
[]
[]
DzekgShXLd
https://paperswithcode.com/paper/learning-nonparametric-forest-graphical
Learning Nonparametric Forest Graphical Models with Prior Information
We present a framework for incorporating prior information into nonparametric estimation of graphical models. To avoid distributional assumptions, we restrict the graph to be a forest and build on the work of forest density estimation (FDE). We reformulate the FDE approach from a Bayesian perspective, and introduce prior distributions on the graphs. As two concrete examples, we apply this framework to estimating scale-free graphs and learning multiple graphs with similar structures. The resulting algorithms are equivalent to finding a maximum spanning tree of a weighted graph with a penalty term on the connectivity pattern of the graph. We solve the optimization problem via a minorize-maximization procedure with Kruskal's algorithm. Simulations show that the proposed methods outperform competing parametric methods, and are robust to the true data distribution. They also lead to improvement in predictive power and interpretability in two real data sets.
1511.03796
http://arxiv.org/abs/1511.03796v2
http://arxiv.org/pdf/1511.03796v2.pdf
[ "Density Estimation" ]
[]
[]
EzZlwZX_3L
https://paperswithcode.com/paper/relational-reasoning-using-prior-knowledge
Relational Reasoning using Prior Knowledge for Visual Captioning
Exploiting relationships among objects has achieved remarkable progress in interpreting images or videos by natural language. Most existing methods resort to first detecting objects and their relationships, and then generating textual descriptions, which heavily depends on pre-trained detectors and leads to performance drop when facing problems of heavy occlusion, tiny-size objects and long-tail in object detection. In addition, the separate procedure of detecting and captioning results in semantic inconsistency between the pre-defined object/relation categories and the target lexical words. We exploit prior human commonsense knowledge for reasoning relationships between objects without any pre-trained detectors and reaching semantic coherency within one image or video in captioning. The prior knowledge (e.g., in the form of knowledge graph) provides commonsense semantic correlation and constraint between objects that are not explicit in the image and video, serving as useful guidance to build semantic graph for sentence generation. Particularly, we present a joint reasoning method that incorporates 1) commonsense reasoning for embedding image or video regions into semantic space to build semantic graph and 2) relational reasoning for encoding semantic graph to generate sentences. Extensive experiments on the MS-COCO image captioning benchmark and the MSVD video captioning benchmark validate the superiority of our method on leveraging prior commonsense knowledge to enhance relational reasoning for visual captioning.
1906.01290
https://arxiv.org/abs/1906.01290v1
https://arxiv.org/pdf/1906.01290v1.pdf
[ "Image Captioning", "Object Detection", "Relational Reasoning", "Video Captioning" ]
[]
[]
yODSiFvyts
https://paperswithcode.com/paper/bridging-stereo-matching-and-optical-flow-via-1
Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence
Stereo matching and flow estimation are two essential tasks for scene understanding, spatially in 3D and temporally in motion. Existing approaches have focused on the unsupervised setting due to the limited resources for obtaining large-scale ground-truth data. To construct a self-learnable objective, co-related tasks are often linked together to form a joint framework. However, the prior work usually utilizes independent networks for each task, thus not allowing shared feature representations to be learned across models. In this paper, we propose a single and principled network to jointly learn spatiotemporal correspondence for stereo matching and flow estimation, with a newly designed geometric connection as the unsupervised signal for temporally adjacent stereo pairs. We show that our method performs favorably against several state-of-the-art baselines for both unsupervised depth and flow estimation on the KITTI benchmark dataset.
1905.09265
https://arxiv.org/abs/1905.09265v1
https://arxiv.org/pdf/1905.09265v1.pdf
[ "Optical Flow Estimation", "Scene Understanding", "Stereo Matching", "Stereo Matching Hand" ]
[]
[]
ISAElOuhdv
https://paperswithcode.com/paper/deeplung-3d-deep-convolutional-nets-for
DeepLung: 3D Deep Convolutional Nets for Automated Pulmonary Nodule Detection and Classification
In this work, we present a fully automated lung CT cancer diagnosis system, DeepLung. DeepLung consists of two parts: nodule detection and classification. Considering the 3D nature of lung CT data, two 3D networks are designed for nodule detection and classification, respectively. Specifically, a 3D Faster R-CNN is designed for nodule detection with a U-net-like encoder-decoder structure to effectively learn nodule features. For nodule classification, a gradient boosting machine (GBM) with 3D dual path network (DPN) features is proposed. The nodule classification subnetwork is validated on a public dataset from LIDC-IDRI, on which it achieves better performance than state-of-the-art approaches and surpasses the average performance of four experienced doctors. In the DeepLung system, candidate nodules are detected first by the nodule detection subnetwork, and nodule diagnosis is conducted by the classification subnetwork. Extensive experimental results demonstrate that DeepLung is comparable to the experienced doctors for both nodule-level and patient-level diagnosis on the LIDC-IDRI dataset.
1709.05538
http://arxiv.org/abs/1709.05538v1
http://arxiv.org/pdf/1709.05538v1.pdf
[ "Automated Pulmonary Nodule Detection And Classification" ]
[]
[]
n05ytJvpcU
https://paperswithcode.com/paper/a-random-matrix-perspective-on-mixtures-of-1
A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning
One of the distinguishing characteristics of modern deep learning systems is that they typically employ neural network architectures that utilize enormous numbers of parameters, often in the millions and sometimes even in the billions. While this paradigm has inspired significant research on the properties of large networks, relatively little work has been devoted to the fact that these networks are often used to model large complex datasets, which may themselves contain millions or even billions of constraints. In this work, we focus on this high-dimensional regime in which both the dataset size and the number of features tend to infinity. We analyze the performance of a simple regression model trained on the random features $F=f(WX+B)$ for a random weight matrix $W$ and random bias vector $B$, obtaining an exact formula for the asymptotic training error on a noisy autoencoding task. The role of the bias can be understood as parameterizing a distribution over activation functions, and our analysis directly generalizes to such distributions, even those not expressible with a traditional additive bias. Intriguingly, we find that a mixture of nonlinearities can outperform the best single nonlinearity on the noisy autoencoding task, suggesting that mixtures of nonlinearities might be useful for approximate kernel methods or neural network architecture design.
1912.00827
https://arxiv.org/abs/1912.00827v1
https://arxiv.org/pdf/1912.00827v1.pdf
[]
[]
[]
K9z0USdozM
https://paperswithcode.com/paper/test-positive-at-w-nut-2020-shared-task-3
TEST_POSITIVE at W-NUT 2020 Shared Task-3: Joint Event Multi-task Learning for Slot Filling in Noisy Text
The competition of extracting COVID-19 events from Twitter is to develop systems that can automatically extract related events from tweets. The built system should identify different pre-defined slots for each event, in order to answer important questions (e.g., Who is tested positive? What is the age of the person? Where is he/she?). To tackle these challenges, we propose the Joint Event Multi-task Learning (JOELIN) model. Through a unified global learning framework, we make use of all the training data across different events to learn and fine-tune the language model. Moreover, we implement a type-aware post-processing procedure using named entity recognition (NER) to further filter the predictions. JOELIN outperforms the BERT baseline by 17.2% in micro F1.
2009.14262
https://arxiv.org/abs/2009.14262v1
https://arxiv.org/pdf/2009.14262v1.pdf
[ "Language Modelling", "Multi-Task Learning", "Named Entity Recognition", "Slot Filling" ]
[ "Adam", "Softmax", "GELU", "Dense Connections", "Dropout", "Linear Warmup With Linear Decay", "Layer Normalization", "Attention Dropout", "WordPiece", "Multi-Head Attention", "Weight Decay", "Scaled Dot-Product Attention", "Residual Connection", "BERT" ]
[]
ERfshFUmN6
https://paperswithcode.com/paper/global-variational-method-for-fingerprint
Global Variational Method for Fingerprint Segmentation by Three-part Decomposition
Verifying an identity claim by fingerprint recognition is a commonplace experience for millions of people in their daily life, e.g. for unlocking a tablet computer or smartphone. The first processing step after fingerprint image acquisition is segmentation, i.e. dividing a fingerprint image into a foreground region which contains the relevant features for the comparison algorithm, and a background region. We propose a novel segmentation method by global three-part decomposition (G3PD). Based on global variational analysis, the G3PD method decomposes a fingerprint image into cartoon, texture and noise parts. After decomposition, the foreground region is obtained from the non-zero coefficients in the texture image using morphological processing. The segmentation performance of the G3PD method is compared to five state-of-the-art methods on a benchmark which comprises manually marked ground truth segmentation for 10560 images. Performance evaluations show that the G3PD method consistently outperforms existing methods in terms of segmentation accuracy.
1505.04585
http://arxiv.org/abs/1505.04585v1
http://arxiv.org/pdf/1505.04585v1.pdf
[]
[]
[]
56bYIWugI-
https://paperswithcode.com/paper/representing-multimodal-linguistic-annotated
Representing Multimodal Linguistic Annotated data
The question of interoperability for linguistic annotated resources covers different aspects. First, it requires a representation framework making it possible to compare, and eventually merge, different annotation schema. In this paper, a general description level representing the multimodal linguistic annotations is proposed. It focuses on time representation and on the data content representation: This paper reconsiders and enhances the current and generalized representation of annotations. An XML schema of such annotations is proposed. A Python API is also proposed. This framework is implemented in a multi-platform software and distributed under the terms of the GNU Public License.
null
https://www.aclweb.org/anthology/L14-1422/
http://www.lrec-conf.org/proceedings/lrec2014/pdf/51_Paper.pdf
[]
[]
[]
vmGwNe79OO
https://paperswithcode.com/paper/relations-on-fp-soft-sets-applied-to-decision
Relations on FP-Soft Sets Applied to Decision Making Problems
In this work, we first define relations on the fuzzy parametrized soft sets and study their properties. We also give a decision making method based on these relations. In approximate reasoning, relations on the fuzzy parametrized soft sets have been shown to be of primordial importance. Finally, the method is successfully applied to problems that contain uncertainties.
1402.3096
http://arxiv.org/abs/1402.3096v1
http://arxiv.org/pdf/1402.3096v1.pdf
[ "Decision Making" ]
[]
[]
GAYlsIdpdS
https://paperswithcode.com/paper/tutorial-making-better-use-of-the-crowd
Tutorial: Making Better Use of the Crowd
Over the last decade, crowdsourcing has been used to harness the power of human computation to solve tasks that are notoriously difficult to solve with computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business. The natural language processing community was early to embrace crowdsourcing as a tool for quickly and inexpensively obtaining annotated data to train NLP systems. Once this data is collected, it can be handed off to algorithms that learn to perform basic NLP tasks such as translation or parsing. Usually this handoff is where interaction with the crowd ends. The crowd provides the data, but the ultimate goal is to eventually take humans out of the loop. Are there better ways to make use of the crowd? In this tutorial, I will begin with a showcase of innovative uses of crowdsourcing that go beyond data collection and annotation. I will discuss applications to natural language processing and machine learning, hybrid intelligence or "human in the loop" AI systems that leverage the complementary strengths of humans and machines in order to achieve more than either could achieve alone, and large scale studies of human behavior online. I will then spend the majority of the tutorial diving into recent research aimed at understanding who crowdworkers are, how they behave, and what this should teach us about best practices for interacting with the crowd.
null
https://www.aclweb.org/anthology/P17-5006/
https://www.aclweb.org/anthology/P17-5006
[]
[]
[]
GQVCqOf1OX
https://paperswithcode.com/paper/exploiting-the-value-of-the-center-dark
Exploiting the Value of the Center-dark Channel Prior for Salient Object Detection
Saliency detection aims to detect the most attractive objects in images and is widely used as a foundation for various applications. In this paper, we propose a novel salient object detection algorithm for RGB-D images using center-dark channel priors. First, we generate an initial saliency map based on a color saliency map and a depth saliency map of a given RGB-D image. Then, we generate a center-dark channel map based on center saliency and dark channel priors. Finally, we fuse the initial saliency map with the center dark channel map to generate the final saliency map. Extensive evaluations over four benchmark datasets demonstrate that our proposed method performs favorably against most of the state-of-the-art approaches. Besides, we further discuss the application of the proposed algorithm in small target detection and demonstrate the universal value of center-dark channel priors in the field of object detection.
1805.05132
http://arxiv.org/abs/1805.05132v1
http://arxiv.org/pdf/1805.05132v1.pdf
[ "Object Detection", "RGB Salient Object Detection", "Saliency Detection" ]
[]
[]
LBbtv5WDiw
https://paperswithcode.com/paper/time-adaptive-reinforcement-learning
Time Adaptive Reinforcement Learning
Reinforcement learning (RL) makes it possible to solve complex tasks such as Go, often with stronger performance than humans. However, the learned behaviors are usually fixed to specific tasks and unable to adapt to different contexts. Here we consider the case of adapting RL agents to different time restrictions, such as finishing a task within a given time limit that might change from one task execution to the next. We define such problems as Time Adaptive Markov Decision Processes and introduce two model-free, value-based algorithms: the Independent Gamma-Ensemble and the n-Step Ensemble. In contrast to classical approaches, they allow zero-shot adaptation between different time restrictions. The proposed approaches represent general mechanisms to handle time adaptive tasks, making them compatible with many existing RL methods, algorithms, and scenarios.
2004.08600
https://arxiv.org/abs/2004.08600v1
https://arxiv.org/pdf/2004.08600v1.pdf
[]
[]
[]
0w2Dl64Guc
https://paperswithcode.com/paper/content-based-image-retrieval-based-on-late
Content-Based Image Retrieval Based on Late Fusion of Binary and Local Descriptors
One of the challenges in Content-Based Image Retrieval (CBIR) is to reduce the semantic gaps between low-level features and high-level semantic concepts. In CBIR, the images are represented in the feature space and the performance of CBIR depends on the type of selected feature representation. Late fusion, also known as visual words integration, is applied to enhance the performance of image retrieval. Recent advances in image retrieval diverted the focus of research towards the use of binary descriptors, as they are reported to be computationally efficient. In this paper, we aim to investigate the late fusion of Fast Retina Keypoint (FREAK) and Scale Invariant Feature Transform (SIFT). The late fusion of binary and local descriptors is selected because, among binary descriptors, FREAK has shown good results in classification-based problems, while SIFT is robust to translation, scaling, rotation and small distortions. The late fusion of FREAK and SIFT integrates the performance of both feature descriptors for effective image retrieval. Experimental results and comparisons show that the proposed late fusion enhances the performance of image retrieval.
1703.08492
http://arxiv.org/abs/1703.08492v1
http://arxiv.org/pdf/1703.08492v1.pdf
[ "Content-Based Image Retrieval", "Image Retrieval" ]
[]
[]
CkChEzkhF0
https://paperswithcode.com/paper/bioalbert-a-simple-and-effective-pre-trained
BioALBERT: A Simple and Effective Pre-trained Language Model for Biomedical Named Entity Recognition
In recent years, with the growing amount of biomedical documents, coupled with advancements in natural language processing algorithms, research on biomedical named entity recognition (BioNER) has increased exponentially. However, BioNER research is challenging because NER in the biomedical domain is: (i) often restricted by the limited amount of training data, (ii) complicated by entities that can refer to multiple types and concepts depending on their context, and (iii) heavily reliant on acronyms that are sub-domain specific. Existing BioNER approaches often neglect these issues and directly adopt state-of-the-art (SOTA) models trained on general corpora, which often yields unsatisfactory results. We propose biomedical ALBERT (A Lite Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), bioALBERT, an effective domain-specific language model trained on large-scale biomedical corpora designed to capture biomedical context-dependent NER. We adopted a self-supervised loss used in ALBERT that focuses on modelling inter-sentence coherence to better learn context-dependent representations, and incorporated parameter reduction techniques to lower memory consumption and increase training speed in BioNER. In our experiments, BioALBERT outperformed comparative SOTA BioNER models on eight biomedical NER benchmark datasets with four different entity types. We trained four different variants of BioALBERT models, which are available for the research community to be used in future research.
2009.09223
https://arxiv.org/abs/2009.09223v1
https://arxiv.org/pdf/2009.09223v1.pdf
[ "Language Modelling", "Named Entity Recognition" ]
[ "Adam", "GELU", "Dense Connections", "Layer Normalization", "WordPiece", "Multi-Head Attention", "LAMB", "Scaled Dot-Product Attention", "Residual Connection", "Softmax", "ALBERT" ]
[]
k-tIbeJ4ct
https://paperswithcode.com/paper/amd-severity-prediction-and-explainability
AMD Severity Prediction And Explainability Using Image Registration And Deep Embedded Clustering
We propose a method to predict the severity of age-related macular degeneration (AMD) from input optical coherence tomography (OCT) images. Although there is no standard clinical severity scale for AMD, we leverage deep learning (DL) based image registration and clustering methods to identify diseased cases and predict their severity. Experiments demonstrate that our approach's disease classification performance matches state-of-the-art methods. The predicted disease severity performs well on previously unseen data. The registration output provides better explainability than class activation maps regarding label and severity decisions.
1907.03075
https://arxiv.org/abs/1907.03075v1
https://arxiv.org/pdf/1907.03075v1.pdf
[ "Image Registration" ]
[]
[]
prKUUVXhKd
https://paperswithcode.com/paper/rapid-online-analysis-of-local-feature
Rapid Online Analysis of Local Feature Detectors and Their Complementarity
A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications.
1510.05145
http://arxiv.org/abs/1510.05145v1
http://arxiv.org/pdf/1510.05145v1.pdf
[ "Hypothesis Testing" ]
[]
[]
JBAiPjMt7I
https://paperswithcode.com/paper/automated-model-selection-with-bayesian
Automated Model Selection with Bayesian Quadrature
We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection. The state-of-the-art for comparing the evidence of multiple models relies on Monte Carlo methods, which converge slowly and are unreliable for computationally expensive models. Previous research has shown that BQ offers sample efficiency superior to Monte Carlo in computing the evidence of an individual model. However, applying BQ directly to model comparison may waste computation producing an overly-accurate estimate for the evidence of a clearly poor model. We propose an automated and efficient algorithm for computing the most-relevant quantity for model selection: the posterior probability of a model. Our technique maximizes the mutual information between this quantity and observations of the models' likelihoods, yielding efficient acquisition of samples across disparate model spaces when likelihood observations are limited. Our method produces more-accurate model posterior estimates using fewer model likelihood evaluations than standard Bayesian quadrature and Monte Carlo estimators, as we demonstrate on synthetic and real-world examples.
1902.09724
http://arxiv.org/abs/1902.09724v3
http://arxiv.org/pdf/1902.09724v3.pdf
[ "Model Selection" ]
[]
[]
RbcNs_t8T4
https://paperswithcode.com/paper/recurrent-and-spiking-modeling-of-sparse
Recurrent and Spiking Modeling of Sparse Surgical Kinematics
Robot-assisted minimally invasive surgery is improving surgeon performance and patient outcomes. This innovation is also turning what has been a subjective practice into motion sequences that can be precisely measured. A growing number of studies have used machine learning to analyze video and kinematic data captured from surgical robots. In these studies, models are typically trained on benchmark datasets for representative surgical tasks to assess surgeon skill levels. While they have shown that novices and experts can be accurately classified, it is not clear whether machine learning can separate highly proficient surgeons from one another, especially without video data. In this study, we explore the possibility of using only kinematic data to predict surgeons of similar skill levels. We focus on a new dataset created from surgical exercises on a simulation device for skill training. A simple, efficient encoding scheme was devised to encode kinematic sequences so that they were amenable to edge learning. We report that it is possible to identify surgical fellows receiving near perfect scores in the simulation exercises based on their motion characteristics alone. Further, our model could be converted to a spiking neural network to train and infer on the Nengo simulation framework with no loss in accuracy. Overall, this study suggests that building neuromorphic models from sparse motion features may be a potentially useful strategy for identifying surgeons and gestures with chips deployed on robotic systems to offer adaptive assistance during surgery and training with additional latency and privacy benefits.
2005.05868
https://arxiv.org/abs/2005.05868v2
https://arxiv.org/pdf/2005.05868v2.pdf
[]
[]
[]
BDgTX7eDt4
https://paperswithcode.com/paper/reward-rational-implicit-choice-a-unifying
Reward-rational (implicit) choice: A unifying formalism for reward learning
It is often difficult to hand-specify what the correct reward function is for a task, so researchers have instead aimed to learn reward functions from human behavior or feedback. The types of behavior interpreted as evidence of the reward function have expanded greatly in recent years. We've gone from demonstrations, to comparisons, to reading into the information leaked when the human is pushing the robot away or turning it off. And surely, there is more to come. How will a robot make sense of all these diverse types of behavior? Our key insight is that different types of behavior can be interpreted in a single unifying formalism - as a reward-rational choice that the human is making, often implicitly. The formalism offers both a unifying lens with which to view past work, as well as a recipe for interpreting new sources of information that are yet to be uncovered. We provide two examples to showcase this: interpreting a new feedback type, and reading into how the choice of feedback itself leaks information about the reward.
2002.04833
https://arxiv.org/abs/2002.04833v3
https://arxiv.org/pdf/2002.04833v3.pdf
[]
[]
[]
6wsVeATLFw
https://paperswithcode.com/paper/improving-social-media-text-summarization-by
Improving Social Media Text Summarization by Learning Sentence Weight Distribution
Recently, encoder-decoder models are widely used in social media text summarization. However, these models sometimes select noise words in irrelevant sentences as part of a summary by error, thus declining the performance. In order to inhibit irrelevant sentences and focus on key information, we propose an effective approach by learning sentence weight distribution. In our model, we build a multi-layer perceptron to predict sentence weights. During training, we use the ROUGE score as an alternative to the estimated sentence weight, and try to minimize the gap between estimated weights and predicted weights. In this way, we encourage our model to focus on the key sentences, which have high relevance with the summary. Experimental results show that our approach outperforms baselines on a large-scale social media corpus.
1710.11332
http://arxiv.org/abs/1710.11332v1
http://arxiv.org/pdf/1710.11332v1.pdf
[ "Text Summarization" ]
[]
[]
KASonwd5a5
https://paperswithcode.com/paper/an-industrial-case-study-on-shrinking-code
An Industrial Case Study on Shrinking Code Review Changesets through Remark Prediction
Change-based code review is used widely in industrial software development. Thus, research on tools that help the reviewer to achieve better review performance can have a high impact. We analyze one possibility to provide cognitive support for the reviewer: Determining the importance of change parts for review, specifically determining which parts of the code change can be left out from the review without harm. To determine the importance of change parts, we extract data from software repositories and build prediction models for review remarks based on this data. The approach is discussed in detail. To gather the input data, we propose a novel algorithm to trace review remarks to their triggers. We apply our approach in a medium-sized software company. In this company, we can avoid the review of 25% of the change parts and of 23% of the changed Java source code lines, while missing only about 1% of the review remarks. Still, we also observe severe limitations of the tried approach: Much of the savings are due to simple syntactic rules, noise in the data hampers the search for better prediction models, and some developers in the case company oppose the taken approach. Besides the main results on the mining and prediction of triggers for review remarks, we contribute experiences with a novel, multi-objective and interactive rule mining approach. The anonymized dataset from the company is made available, as are the implementations for the devised algorithms.
1812.09510
http://arxiv.org/abs/1812.09510v1
http://arxiv.org/pdf/1812.09510v1.pdf
[]
[]
[]
BnPqXRv_d5
https://paperswithcode.com/paper/improving-sequence-to-sequence-learning-via
Improving Sequence-to-Sequence Learning via Optimal Transport
Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE). However, standard MLE training considers a word-level objective, predicting the next word given the previous ground-truth partial sentence. This procedure focuses on modeling local syntactic patterns, and may fail to capture long-range semantic structure. We present a novel solution to alleviate these issues. Our approach imposes global sequence-level guidance via new supervision based on optimal transport, enabling the overall characterization and preservation of semantic features. We further show that this method can be understood as a Wasserstein gradient flow trying to match our model to the ground truth sequence distribution. Extensive experiments are conducted to validate the utility of the proposed approach, showing consistent improvements over a wide variety of NLP tasks, including machine translation, abstractive text summarization, and image captioning.
1901.06283
http://arxiv.org/abs/1901.06283v1
http://arxiv.org/pdf/1901.06283v1.pdf
[ "Abstractive Text Summarization", "Image Captioning", "Machine Translation", "Text Summarization" ]
[]
[]
r-3xhWbO3c
https://paperswithcode.com/paper/on-the-tractability-of-minimal-model
On the Tractability of Minimal Model Computation for Some CNF Theories
Designing algorithms capable of efficiently constructing minimal models of CNFs is an important task in AI. This paper provides new results along this research line and presents new algorithms for performing minimal model finding and checking over positive propositional CNFs and model minimization over propositional CNFs. An algorithmic schema, called the Generalized Elimination Algorithm (GEA) is presented, that computes a minimal model of any positive CNF. The schema generalizes the Elimination Algorithm (EA) [BP97], which computes a minimal model of positive head-cycle-free (HCF) CNF theories. While the EA always runs in polynomial time in the size of the input HCF CNF, the complexity of the GEA depends on the complexity of the specific eliminating operator invoked therein, which may in general turn out to be exponential. Therefore, a specific eliminating operator is defined by which the GEA computes, in polynomial time, a minimal model for a class of CNF that strictly includes head-elementary-set-free (HEF) CNF theories [GLL06], which form, in their turn, a strict superset of HCF theories. Furthermore, in order to deal with the high complexity associated with recognizing HEF theories, an "incomplete" variant of the GEA (called IGEA) is proposed: the resulting schema, once instantiated with an appropriate elimination operator, always constructs a model of the input CNF, which is guaranteed to be minimal if the input theory is HEF. In the light of the above results, the main contribution of this work is the enlargement of the tractability frontier for the minimal model finding and checking and the model minimization problems.
1310.8120
http://arxiv.org/abs/1310.8120v1
http://arxiv.org/pdf/1310.8120v1.pdf
[]
[]
[]
6ddjquarg7
https://paperswithcode.com/paper/two-step-joint-model-for-drug-drug
Two Step Joint Model for Drug Drug Interaction Extraction
When patients need to take medicine, particularly more than one kind of drug simultaneously, they should be alerted to possible drug-drug interactions. Interactions between drugs may have a negative impact on patients or even cause death. Generally, drugs that conflict with a specific drug (or label drug) are described in its drug label or package insert. Since more and more new drug products come onto the market, it is difficult to collect such information manually. We take part in the Drug-Drug Interaction (DDI) Extraction from Drug Labels challenge of the Text Analysis Conference (TAC) 2018, choosing task1 and task2 to automatically extract DDI-related mentions and DDI relations, respectively. Instead of regarding task1 as a named entity recognition (NER) task and task2 as a relation extraction (RE) task and solving them in a pipeline, we propose a two-step joint model to detect DDIs and their related mentions jointly. A sequence tagging system (CNN-GRU encoder-decoder) finds precipitants first, then searches their fine-grained triggers and determines the DDI for each precipitant in the second step. Moreover, a rule-based model is built to determine the sub-type for pharmacokinetic interactions. Our system achieved the best result in both task1 and task2; the F-measure reaches 0.46 in task1 and 0.40 in task2.
2008.12704
https://arxiv.org/abs/2008.12704v1
https://arxiv.org/pdf/2008.12704v1.pdf
[ "Drug–drug interaction extraction", "Named Entity Recognition", "Relation Extraction" ]
[]
[]
f23tD4_KgB
https://paperswithcode.com/paper/adversarial-item-promotion-vulnerabilities-at
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recommended items matching the customers' preferences. Merchants on e-commerce platforms would like their items to appear as high as possible in the top-N of these ranked lists. In this paper, we demonstrate how unscrupulous merchants can create item images that artificially promote their products, improving their rankings. Recommender systems that use images to address the cold start problem are vulnerable to this security risk. We describe a new type of attack, Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N recommenders: the ranking mechanism itself. Existing work on adversarial images in recommender systems investigates the implications of conventional attacks, which target deep learning classifiers. In contrast, our AIP attacks are embedding attacks that seek to push feature representations in a way that fools the ranker (not a classifier) and directly lead to item promotion. We introduce three AIP attacks: insider attack, expert attack, and semantic attack, which are defined with respect to three successively more realistic attack models. Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start. We also evaluate potential defenses, including adversarial training, and find that common, currently existing techniques do not eliminate the danger of AIP attacks. In sum, we show that using images to address cold start opens recommender systems to potential threats with clear practical implications.
2006.01888
https://arxiv.org/abs/2006.01888v3
https://arxiv.org/pdf/2006.01888v3.pdf
[ "Recommendation Systems" ]
[]
[]
N2zKEm9Cdp
https://paperswithcode.com/paper/classification-of-quantitative-light-induced
Classification of Quantitative Light-Induced Fluorescence Images Using Convolutional Neural Network
Images are an important data source for diagnosis and treatment of oral diseases. The manual classification of images may lead to misdiagnosis or mistreatment due to subjective errors. In this paper an image classification model based on Convolutional Neural Network is applied to Quantitative Light-induced Fluorescence images. The deep neural network outperforms other state of the art shallow classification models in predicting labels derived from three different dental plaque assessment scores. The model directly benefits from multi-channel representation of the images resulting in improved performance when, besides the Red colour channel, additional Green and Blue colour channels are used.
1705.09193
http://arxiv.org/abs/1705.09193v1
http://arxiv.org/pdf/1705.09193v1.pdf
[ "Image Classification" ]
[]
[]
BJcBusi3tp
https://paperswithcode.com/paper/uqam-ntl-named-entity-recognition-in-twitter
UQAM-NTL: Named entity recognition in Twitter messages
This paper describes our system used in the 2nd Workshop on Noisy User-generated Text (WNUT) shared task for Named Entity Recognition (NER) in Twitter, in conjunction with Coling 2016. Our system is based on supervised machine learning, applying Conditional Random Fields (CRF) to train two classifiers for two evaluations. The first evaluation aims at predicting the 10 fine-grained types of named entities, while the second evaluation aims at predicting named entities without their types. The experimental results show that our method has significantly improved Twitter NER performance.
null
https://www.aclweb.org/anthology/W16-3926/
https://www.aclweb.org/anthology/W16-3926
[ "Language Modelling", "Named Entity Recognition" ]
[]
[]
YGtc7qEi9G
https://paperswithcode.com/paper/bilevel-continual-learning
Bilevel Continual Learning
Continual learning aims to learn continuously from a stream of tasks and data in an online-learning fashion, being capable of exploiting what was learned previously to improve current and future tasks while still being able to perform well on the previous tasks. One common limitation of many existing continual learning methods is that they often train a model directly on all available training data without validation due to the nature of continual learning, thus suffering poor generalization at test time. In this work, we present a novel framework of continual learning named "Bilevel Continual Learning" (BCL) by unifying a bilevel optimization objective and a dual memory management strategy comprising both episodic memory and generalization memory to achieve effective knowledge transfer to future tasks and alleviate catastrophic forgetting on old tasks simultaneously. Our extensive experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods. Our implementation is available at https://github.com/phquang/bilevel-continual-learning.
2007.15553
https://arxiv.org/abs/2007.15553v1
https://arxiv.org/pdf/2007.15553v1.pdf
[ "bilevel optimization", "Continual Learning", "Transfer Learning" ]
[]
[]
N-KMahja33
https://paperswithcode.com/paper/table-to-text-describing-table-region-with
Table-to-Text: Describing Table Region with Natural Language
In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.
1805.11234
http://arxiv.org/abs/1805.11234v1
http://arxiv.org/pdf/1805.11234v1.pdf
[ "Language Modelling" ]
[]
[]
lqhD90uhjV
https://paperswithcode.com/paper/190910304
Where to Look Next: Unsupervised Active Visual Exploration on 360° Input
We address the problem of active visual exploration of large 360° inputs. In our setting an active agent with a limited camera bandwidth explores its 360° environment by changing its viewing direction at limited discrete time steps. As such, it observes the world as a sequence of narrow field-of-view 'glimpses', deciding for itself where to look next. Our proposed method exceeds previous works' performance by a significant margin without the need for deep reinforcement learning or training separate networks as sidekicks. Key components of our system are the spatial memory maps that make the system aware of the glimpses' orientations (locations in the 360° image). Further, we stress the advantages of retina-like glimpses when the agent's sensor bandwidth and time-steps are limited. Finally, we use our trained model to do classification of the whole scene using only the information observed in the glimpses.
1909.10304
https://arxiv.org/abs/1909.10304v2
https://arxiv.org/pdf/1909.10304v2.pdf
[]
[]
[]
8FWasyUHdy
https://paperswithcode.com/paper/multi-view-constraint-propagation-with
Multi-View Constraint Propagation with Consensus Prior Knowledge
In many applications, pairwise constraints are a weaker kind of supervisory information that can be collected easily. Constraint propagation has proved to be a successful way of exploiting such side information. In recent years, some methods of multi-view constraint propagation have been proposed. However, the problem of reasonably fusing different views remains unaddressed. In this paper, we present a method dubbed Consensus Prior Constraint Propagation (CPCP), which can provide prior knowledge of the robustness of each data instance and its neighborhood. With the robustness generated from the consensus information of each view, we build a unified affinity matrix as a result of the propagation. Specifically, we fuse the affinities of different views at the data instance level instead of the view level. This paper also introduces an approach to deal with the imbalance between positive and negative constraints. The proposed method has been tested in clustering tasks on two publicly available multi-view data sets to show its superior performance.
1609.06456
http://arxiv.org/abs/1609.06456v1
http://arxiv.org/pdf/1609.06456v1.pdf
[]
[]
[]
1WUiuyCy5J
https://paperswithcode.com/paper/triad-state-space-construction-for-chaotic
Triad State Space Construction for Chaotic Signal Classification with Deep Learning
Inspired by the well-known permutation entropy (PE), an effective image encoding scheme for chaotic time series, Triad State Space Construction (TSSC), is proposed. The TSSC image can recognize higher-order temporal patterns and identify new forbidden regions in time series motifs beyond the Bandt-Pompe probabilities. Convolutional Neural Networks (ConvNets) are widely used in image classification. The ConvNet classifier based on TSSC images (TSSC-ConvNet) is highly accurate and very robust in chaotic signal classification.
2003.11931
https://arxiv.org/abs/2003.11931v1
https://arxiv.org/pdf/2003.11931v1.pdf
[ "Image Classification", "Time Series" ]
[]
[]
dKPpkj0mb7
https://paperswithcode.com/paper/differentially-private-assouad-fano-and-le
Differentially Private Assouad, Fano, and Le Cam
Le Cam's method, Fano's inequality, and Assouad's lemma are three widely used techniques to prove lower bounds for statistical estimation tasks. We propose their analogues under central differential privacy. Our results are simple, easy to apply and we use them to establish sample complexity bounds in several estimation tasks. We establish the optimal sample complexity of discrete distribution estimation under total variation distance and $\ell_2$ distance. We also provide lower bounds for several other distribution classes, including product distributions and Gaussian mixtures that are tight up to logarithmic factors. The technical component of our paper relates coupling between distributions to the sample complexity of estimation under differential privacy.
2004.06830
https://arxiv.org/abs/2004.06830v2
https://arxiv.org/pdf/2004.06830v2.pdf
[]
[]
[]
WN2RbXMGNz
https://paperswithcode.com/paper/temporal-graph-kernels-for-classifying
Temporal Graph Kernels for Classifying Dissemination Processes
Many real-world graphs or networks are temporal, e.g., in a social network persons only interact at specific points in time. This information directs dissemination processes on the network, such as the spread of rumors, fake news, or diseases. However, the current state-of-the-art methods for supervised graph classification are designed mainly for static graphs and may not be able to capture temporal information. Hence, they are not powerful enough to distinguish between graphs modeling different dissemination processes. To address this, we introduce a framework to lift standard graph kernels to the temporal domain. Specifically, we explore three different approaches and investigate the trade-offs between loss of temporal information and efficiency. Moreover, to handle large-scale graphs, we propose stochastic variants of our kernels with provable approximation guarantees. We evaluate our methods on a wide range of real-world social networks. Our methods beat static kernels by a large margin in terms of accuracy while still being scalable to large graphs and data sets. Hence, we confirm that taking temporal information into account is crucial for the successful classification of dissemination processes.
1911.05496
https://arxiv.org/abs/1911.05496v1
https://arxiv.org/pdf/1911.05496v1.pdf
[ "Graph Classification" ]
[]
[]
OJRVkVT6ZA
https://paperswithcode.com/paper/stream-packing-for-asynchronous-multi-context
Stream Packing for Asynchronous Multi-Context Systems using ASP
When a processing unit relies on data from external streams, we may face the problem that the stream data needs to be rearranged in a way that allows the unit to perform its task(s). On arrival of new data, we must decide whether there is sufficient information available to start processing or whether to wait for more data. Furthermore, we need to ensure that the data meets the input specification of the processing step. In the case of multiple input streams it is also necessary to coordinate which data from which incoming stream should form the input of the next process instantiation. In this work, we propose a declarative approach as an interface between multiple streams and a processing unit. The idea is to specify via answer-set programming how to arrange incoming data in packages that are suitable as input for subsequent processing. Our approach is intended for use in asynchronous multi-context systems (aMCSs), a recently proposed framework for loose coupling of knowledge representation formalisms that allows for online reasoning in a dynamic environment. Contexts in aMCSs process data streams from external sources and other contexts.
1611.05640
http://arxiv.org/abs/1611.05640v1
http://arxiv.org/pdf/1611.05640v1.pdf
[]
[]
[]
UB037upbpb
https://paperswithcode.com/paper/resolvable-designs-for-speeding-up
Resolvable Designs for Speeding up Distributed Computing
Distributed computing frameworks such as MapReduce are often used to process large computational jobs. They operate by partitioning each job into smaller tasks executed on different servers. The servers also need to exchange intermediate values to complete the computation. Experimental evidence suggests that this so-called Shuffle phase can be a significant part of the overall execution time for several classes of jobs. Prior work has demonstrated a natural tradeoff between computation and communication whereby running redundant copies of jobs can reduce the Shuffle traffic load, thereby leading to reduced overall execution times. For a single job, the main drawback of this approach is that it requires the original job to be split into a number of files that grows exponentially in the system parameters. When extended to multiple jobs (with specific function types), these techniques suffer from a limitation of a similar flavor, i.e., they require an exponentially large number of jobs to be executed. In practical scenarios, these requirements can significantly reduce the promised gains of the method. In this work, we show that a class of combinatorial structures called resolvable designs can be used to develop efficient coded distributed computing schemes for both the single and multiple job scenarios considered in prior work. We present both theoretical analysis and exhaustive experimental results (on Amazon EC2 clusters) that demonstrate the performance advantages of our method. For the single and multiple job cases, we obtain speed-ups of 4.69x (and 2.6x over prior work) and 4.31x over the baseline approach, respectively.
1908.05666
https://arxiv.org/abs/1908.05666v3
https://arxiv.org/pdf/1908.05666v3.pdf
[ "Distributed Computing" ]
[]
[]
V6ABnNWwVx
https://paperswithcode.com/paper/image-co-localization-by-mimicking-a-good
Image Co-localization by Mimicking a Good Detector's Confidence Score Distribution
Given a set of images containing objects from the same category, the task of image co-localization is to identify and localize each instance. This paper shows that this problem can be solved by a simple but intriguing idea, that is, a common object detector can be learnt by making its detection confidence scores distributed like those of a strongly supervised detector. More specifically, we observe that given a set of object proposals extracted from an image that contains the object of interest, an accurate strongly supervised object detector should give high scores to only a small minority of proposals, and low scores to most of them. Thus, we devise an entropy-based objective function to enforce the above property when learning the common object detector. Once the detector is learnt, we resort to a segmentation approach to refine the localization. We show that despite its simplicity, our approach outperforms state-of-the-art methods.
1603.04619
http://arxiv.org/abs/1603.04619v2
http://arxiv.org/pdf/1603.04619v2.pdf
[]
[]
[]
ww4Lw_RhGg
https://paperswithcode.com/paper/detecting-british-columbia-coastal-rainfall
Detecting British Columbia Coastal Rainfall Patterns by Clustering Gaussian Processes
Functional data analysis is a statistical framework where data are assumed to follow some functional form. This method of analysis is commonly applied to time series data, where time, measured continuously or in discrete intervals, serves as the location for a function's value. Gaussian processes are a generalization of the multivariate normal distribution to function space and, in this paper, they are used to shed light on coastal rainfall patterns in British Columbia (BC). Specifically, this work addressed the question of how one should carry out an exploratory cluster analysis for the BC, or any similar, coastal rainfall data. An approach is developed for clustering multiple processes observed on a comparable interval, based on how similar their underlying covariance kernel is. This approach provides interesting insights into the BC data, and these insights can be framed in terms of El Niño and La Niña; however, the result is not simply one cluster representing El Niño years and another for La Niña years. From one perspective, the results show that clustering annual rainfall can potentially be used to identify extreme weather patterns.
1812.09758
https://arxiv.org/abs/1812.09758v2
https://arxiv.org/pdf/1812.09758v2.pdf
[ "Gaussian Processes", "Time Series" ]
[]
[]
_T4316_adn
https://paperswithcode.com/paper/bilinear-parameterization-for-differentiable
Bilinear Parameterization For Differentiable Rank-Regularization
Low rank approximation is a commonly occurring problem in many computer vision and machine learning applications. There are two common ways of optimizing the resulting models. Either the set of matrices with a given rank can be explicitly parametrized using a bilinear factorization, or low rank can be implicitly enforced using regularization terms penalizing non-zero singular values. While the former approach results in differentiable problems that can be efficiently optimized using local quadratic approximation, the latter is typically not differentiable (sometimes even discontinuous) and requires first order subgradient or splitting methods. It is well known that gradient based methods exhibit slow convergence for ill-conditioned problems. In this paper we show how many non-differentiable regularization methods can be reformulated into smooth objectives using bilinear parameterization. This allows us to use standard second order methods, such as Levenberg-Marquardt (LM) and Variable Projection (VarPro), to achieve accurate solutions for ill-conditioned cases. We show on several real and synthetic experiments that our second order formulation converges to substantially more accurate solutions than competing state-of-the-art methods.
1811.11088
https://arxiv.org/abs/1811.11088v3
https://arxiv.org/pdf/1811.11088v3.pdf
[]
[]
[]
mGr5uVktUs
https://paperswithcode.com/paper/perturbed-masking-parameter-free-probing-for
Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT
By introducing a small set of additional parameters, a probe learns to solve specific linguistic tasks (e.g., dependency parsing) in a supervised manner using feature representations (e.g., contextualized embeddings). The effectiveness of such probing tasks is taken as evidence that the pre-trained model encodes linguistic knowledge. However, this approach of evaluating a language model is undermined by the uncertainty of the amount of knowledge that is learned by the probe itself. Complementary to those works, we propose a parameter-free probing technique for analyzing pre-trained language models (e.g., BERT). Our method does not require direct supervision from the probing tasks, nor do we introduce additional parameters to the probing process. Our experiments on BERT show that syntactic trees recovered from BERT using our method are significantly better than linguistically-uninformed baselines. We further feed the empirically induced dependency structures into a downstream sentiment classification task and find its improvement comparable with or even superior to a human-designed dependency schema.
2004.14786
https://arxiv.org/abs/2004.14786v2
https://arxiv.org/pdf/2004.14786v2.pdf
[ "Dependency Parsing", "Language Modelling", "Sentiment Analysis" ]
[ "Residual Connection", "Attention Dropout", "Linear Warmup With Linear Decay", "Weight Decay", "GELU", "Dense Connections", "Adam", "WordPiece", "Softmax", "Dropout", "Multi-Head Attention", "Layer Normalization", "Scaled Dot-Product Attention", "BERT" ]
[]
h5TnCNH_MZ
https://paperswithcode.com/paper/uaic-at-semeval-2019-task-3-extracting-much
UAIC at SemEval-2019 Task 3: Extracting Much from Little
In this paper, we present a system description for implementing a sentiment analysis agent capable of interpreting the state of an interlocutor engaged in short three-message conversations. We present the results and observations of our work and discuss which parts could be further improved in the future.
null
https://www.aclweb.org/anthology/S19-2062/
https://www.aclweb.org/anthology/S19-2062
[ "Sentiment Analysis" ]
[]
[]
v4S2mjOaG7
https://paperswithcode.com/paper/automated-quantification-of-ct-patterns
Machine Learning Automatically Detects COVID-19 using Chest CTs in a Large Multicenter Cohort
Objectives: To investigate machine-learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, ILD and normal CTs. Methods: Our retrospective multi-institutional study obtained 2096 chest CTs from 16 institutions (including 1077 COVID-19 patients). Training/testing cohorts included 927/100 COVID-19, 388/33 ILD, 189/33 other pneumonias, and 559/34 normal (no pathologies) CTs. A metric-based approach for classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and probability distribution of airspace opacities. Results: Most discriminative features of COVID-19 are percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering compares feature distribution across COVID-19 and control cohorts. The metrics-based classifier achieved AUC=0.83, sensitivity=0.74, and specificity=0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. Conclusions: Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and no pathologies CTs, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance, and therefore may be useful to facilitate diagnosis of COVID-19.
2006.04998
https://arxiv.org/abs/2006.04998v3
https://arxiv.org/pdf/2006.04998v3.pdf
[]
[ "Logistic Regression" ]
[]
3jfxctTGse
https://paperswithcode.com/paper/deepasl-kinetic-model-incorporated-loss-for
DeepASL: Kinetic Model Incorporated Loss for Denoising Arterial Spin Labeled MRI via Deep Residual Learning
Arterial spin labeling (ASL) allows quantification of the cerebral blood flow (CBF) by magnetic labeling of the arterial blood water. ASL is increasingly used in clinical studies due to its noninvasiveness, repeatability and benefits in quantification. However, ASL suffers from an inherently low signal-to-noise ratio (SNR), requiring repeated measurements of control/spin-labeled (C/L) pairs to achieve a reasonable image quality, which in return increases motion sensitivity. This leads to clinically prolonged scanning times, increasing the risk of motion artifacts. Thus, there is an immense need for advanced imaging and processing techniques in ASL. In this paper, we propose a novel deep learning based approach to improve the perfusion-weighted image quality obtained from a subset of all available pairwise C/L subtractions. Specifically, we train a deep fully convolutional network (FCN) to learn a mapping from the noisy perfusion-weighted image to its residual (subtraction) from the clean image. Additionally, we incorporate the CBF estimation model in the loss function during training, which enables the network to produce high quality images while simultaneously enforcing the CBF estimates to be as close as possible to the reference CBF values. Extensive experiments on synthetic and clinical ASL datasets demonstrate the effectiveness of our method in terms of improved ASL image quality, accurate CBF parameter estimation and considerably small computation time during testing.
1804.02755
http://arxiv.org/abs/1804.02755v2
http://arxiv.org/pdf/1804.02755v2.pdf
[ "Denoising" ]
[]
[]
lUrE3YPVVU
https://paperswithcode.com/paper/slot-gated-modeling-for-joint-slot-filling
Slot-Gated Modeling for Joint Slot Filling and Intent Prediction
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, while they have independent attention weights. Considering that slots and intents have a strong relationship, this paper proposes a slot gate that focuses on learning the relationship between intent and slot attention vectors in order to obtain better semantic frame results through global optimization. The experiments show that our proposed model significantly improves sentence-level semantic frame accuracy, with 4.2% and 1.9% relative improvement compared to the attentional model on the benchmark ATIS and Snips datasets, respectively.
null
https://www.aclweb.org/anthology/N18-2118/
https://www.aclweb.org/anthology/N18-2118
[ "Intent Detection", "Slot Filling", "Spoken Dialogue Systems", "Spoken Language Understanding" ]
[]
[]
G4JSyhn_Kj
https://paperswithcode.com/paper/tag-embedding-based-personalized-point-of
Tag Embedding Based Personalized Point Of Interest Recommendation System
Personalized Point of Interest recommendation is very helpful for satisfying users' needs at new places. In this article, we propose a tag embedding based method for personalized recommendation of Points Of Interest. We model the relationship between tags corresponding to Points Of Interest. The model provides a representative embedding for each tag such that related tags are closer together. We model each Point Of Interest based on its tag embeddings and also model the users (user profiles) based on the Points Of Interest rated by them. Finally, we rank a user's candidate Points Of Interest based on the cosine similarity between the user's embedding and each Point Of Interest's embedding. Further, we find the parameters required to model a user by discrete optimization over different measures (like ndcg@5, MRR, ...). We also analyze the results when using the same parameters for all users and when using individual parameters for each user. Along with this, we analyze the effect on the results of changing the dataset used to model the relationship between tags. Our method also mitigates the privacy leak issue. We used the TREC Contextual Suggestion 2016 Phase 2 dataset and achieve significant improvements over the state-of-the-art method on all measures. It improves ndcg@5 by 12.8%, p@5 by 4.3%, and MRR by 7.8%, which shows the effectiveness of the method.
2004.06389
https://arxiv.org/abs/2004.06389v1
https://arxiv.org/pdf/2004.06389v1.pdf
[]
[]
[]
Y15jNn_RvU
https://paperswithcode.com/paper/a-self-correcting-deep-learning-approach-to
A Self-Correcting Deep Learning Approach to Predict Acute Conditions in Critical Care
In critical care, intensivists are required to continuously monitor high-dimensional vital signs and lab measurements to detect and diagnose acute patient conditions. This has always been a challenging task. In this study, we propose a novel self-correcting deep learning prediction approach to address this challenge. We focus on an example of the prediction of acute kidney injury (AKI). Compared with the existing models, our method has a number of distinct features: we utilized the cumulative data of patients in the ICU; we developed a self-correcting mechanism that feeds errors from the previous predictions back into the network; we also proposed a regularization method that takes into account not only the model's prediction error on the label but also its estimation errors on the input data. This mechanism is applied in both regression and classification tasks. We compared the performance of our proposed method with the conventional deep learning models on two real-world clinical datasets and demonstrated that our proposed model consistently outperforms these baseline models. In particular, the proposed model achieved an area under the ROC curve of 0.893 on the MIMIC III dataset, and 0.871 on the Philips eICU dataset.
1901.04364
http://arxiv.org/abs/1901.04364v1
http://arxiv.org/pdf/1901.04364v1.pdf
[]
[]
[]
b6IQ6mXVu3
https://paperswithcode.com/paper/textimager-a-distributed-uima-based-system
TextImager: a Distributed UIMA-based System for NLP
More and more disciplines require NLP tools for performing automatic text analyses on various levels of linguistic resolution. However, the usage of established NLP frameworks is often hampered for several reasons: in most cases, they require basic to sophisticated programming skills, interfere with interoperability due to the use of non-standard I/O formats, and often lack tools for visualizing computational results. This makes it difficult, especially for humanities scholars, to use such frameworks. In order to cope with these challenges, we present TextImager, a UIMA-based framework that offers a range of NLP and visualization tools by means of a user-friendly GUI. Using TextImager requires no programming skills.
null
https://www.aclweb.org/anthology/C16-2013/
https://www.aclweb.org/anthology/C16-2013
[ "Sentiment Analysis", "Text Classification" ]
[]
[]
_NOY3y3RmY
https://paperswithcode.com/paper/a-manually-annotated-chinese-corpus-for-non
A Manually Annotated Chinese Corpus for Non-task-oriented Dialogue Systems
This paper presents a large-scale corpus for non-task-oriented dialogue response selection, which contains over 27K distinct prompts and more than 82K responses collected from social media. To annotate this corpus, we define a 5-grade rating scheme: bad, mediocre, acceptable, good, and excellent, according to the relevance, coherence, informativeness, interestingness, and the potential to move a conversation forward. To test the validity and usefulness of the produced corpus, we compare various unsupervised and supervised models for response selection. Experimental results confirm that the proposed corpus is helpful in training response selection models.
1805.05542
http://arxiv.org/abs/1805.05542v1
http://arxiv.org/pdf/1805.05542v1.pdf
[ "Task-Oriented Dialogue Systems" ]
[]
[]
40xyrLzaOA
https://paperswithcode.com/paper/radial-velocity-retrieval-for-multichannel
Radial Velocity Retrieval for Multichannel SAR Moving Targets with Time-Space Doppler De-ambiguity
In this paper, with respect to multichannel synthetic aperture radars (SAR), we first formulate the problems of Doppler ambiguities on the radial velocity (RV) estimation of a ground moving target in the range-compressed domain, range-Doppler domain and image domain, respectively. It is revealed that in these problems, a cascaded time-space Doppler ambiguity (CTSDA) may be encountered, i.e., time domain Doppler ambiguity (TDDA) in each channel arises first and then spatial domain Doppler ambiguity (SDDA) among multiple channels arises second. Accordingly, the multichannel SAR systems with different parameters are investigated in three different cases with diverse Doppler ambiguity properties, and a multi-frequency SAR is then proposed to obtain the RV estimation by solving the ambiguity problem based on the Chinese remainder theorem (CRT). In the first two cases, the ambiguity problem can be solved by the existing closed-form robust CRT. In the third case, it is found that the problem is different from the conventional CRT problems and we call it a double remaindering problem in this paper. We then propose a sufficient condition under which the double remaindering problem, i.e., the CTSDA, can also be solved by the closed-form robust CRT. When the sufficient condition is not satisfied for a multi-channel SAR, a searching based method is proposed. Finally, some results of numerical experiments are provided to demonstrate the effectiveness of the proposed methods.
1610.00070
http://arxiv.org/abs/1610.00070v3
http://arxiv.org/pdf/1610.00070v3.pdf
[]
[]
[]
sbfpN12Pnn
https://paperswithcode.com/paper/a-game-theoretic-analysis-of-additive
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
Research in adversarial learning follows a cat and mouse game between attackers and defenders where attacks are proposed, they are mitigated by new defenses, and subsequently new attacks are proposed that break earlier defenses, and so on. However, it has remained unclear as to whether there are conditions under which no better attacks or defenses can be proposed. In this paper, we propose a game-theoretic framework for studying attacks and defenses which exist in equilibrium. Under a locally linear decision boundary model for the underlying binary classifier, we prove that the Fast Gradient Method attack and the Randomized Smoothing defense form a Nash Equilibrium. We then show how this equilibrium defense can be approximated given finitely many samples from a data-generating distribution, and derive a generalization bound for the performance of our approximation.
2009.06530
https://arxiv.org/abs/2009.06530v1
https://arxiv.org/pdf/2009.06530v1.pdf
[]
[]
[]
Tjnly3wdX9
https://paperswithcode.com/paper/a-fundamental-performance-limitation-for
A Fundamental Performance Limitation for Adversarial Classification
Despite the widespread use of machine learning algorithms to solve problems of technological, economic, and social relevance, provable guarantees on the performance of these data-driven algorithms are critically lacking, especially when the data originates from unreliable sources and is transmitted over unprotected and easily accessible channels. In this paper we take an important step to bridge this gap and formally show that, in a quest to optimize their accuracy, binary classification algorithms -- including those based on machine-learning techniques -- inevitably become more sensitive to adversarial manipulation of the data. Further, for a given class of algorithms with the same complexity (i.e., number of classification boundaries), the fundamental tradeoff curve between accuracy and sensitivity depends solely on the statistics of the data, and cannot be improved by tuning the algorithm.
1903.01032
http://arxiv.org/abs/1903.01032v2
http://arxiv.org/pdf/1903.01032v2.pdf
[]
[]
[]
GAuRj0APhm
https://paperswithcode.com/paper/undecidability-of-the-lambek-calculus-with
Undecidability of the Lambek calculus with subexponential and bracket modalities
The Lambek calculus is a well-known logical formalism for modelling natural language syntax. The original calculus covered a substantial number of intricate natural language phenomena, but only those restricted to the context-free setting. In order to address more subtle linguistic issues, the Lambek calculus has been extended in various ways. In particular, Morrill and Valentin (2015) introduce an extension with so-called exponential and bracket modalities. Their extension is based on a non-standard contraction rule for the exponential that interacts with the bracket structure in an intricate way. The standard contraction rule is not admissible in this calculus. In this paper we prove undecidability of the derivability problem in their calculus. We also investigate restricted decidable fragments considered by Morrill and Valentin and we show that these fragments belong to the NP class.
1608.04020
http://arxiv.org/abs/1608.04020v2
http://arxiv.org/pdf/1608.04020v2.pdf
[]
[]
[]
n1Qtgt39t6
https://paperswithcode.com/paper/a-parameterized-family-of-meta-submodular
A Parameterized Family of Meta-Submodular Functions
Submodular function maximization has found a wealth of new applications in machine learning models in recent years. The related supermodular maximization models (submodular minimization) also offer an abundance of applications, but they have appeared to be highly intractable even under simple cardinality constraints. Hence, while there are well-developed tools for maximizing a submodular function subject to a matroid constraint, there is much less work on the corresponding supermodular maximization problems. We give a broad parameterized family of monotone functions which includes submodular functions and a class of supermodular functions containing diversity functions. Functions in this parameterized family are called $\gamma$-meta-submodular. We develop local search algorithms with approximation factors that depend only on the parameter $\gamma$. We show that the $\gamma$-meta-submodular families include well-known classes of functions such as meta-submodular functions ($\gamma=0$), metric diversity functions and proportionally submodular functions (both with $\gamma=1$), diversity functions based on negative-type distances or Jensen-Shannon divergence (both with $\gamma=2$), and $\sigma$-semi metric diversity functions ($\gamma = \sigma$).
2006.13754
https://arxiv.org/abs/2006.13754v1
https://arxiv.org/pdf/2006.13754v1.pdf
[]
[]
[]
MOJEVoXTpV
https://paperswithcode.com/paper/attention-based-recurrent-neural-network
Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling
Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition. In this work, we propose an attention-based neural network model for joint intent detection and slot filling, both of which are critical steps for many speech understanding and dialog systems. Unlike in machine translation and speech recognition, alignment is explicit in slot filling. We explore different strategies in incorporating this alignment information to the encoder-decoder framework. Learning from the attention mechanism in encoder-decoder model, we further propose introducing attention to the alignment-based RNN models. Such attentions provide additional information to the intent classification and slot label prediction. Our independent task models achieve state-of-the-art intent detection error rate and slot filling F1 score on the benchmark ATIS task. Our joint training model further obtains 0.56% absolute (23.8% relative) error reduction on intent detection and 0.23% absolute gain on slot filling over the independent task models.
1609.01454
http://arxiv.org/abs/1609.01454v1
http://arxiv.org/pdf/1609.01454v1.pdf
[ "Intent Classification", "Intent Detection", "Slot Filling" ]
[]
[]
pUvqW1_u--
https://paperswithcode.com/paper/statistical-optimal-transport-via-factored
Statistical Optimal Transport via Factored Couplings
We propose a new method to estimate Wasserstein distances and optimal transport plans between two probability distributions from samples in high dimension. Unlike plug-in rules that simply replace the true distributions by their empirical counterparts, our method promotes couplings with low transport rank, a new structural assumption that is similar to the nonnegative rank of a matrix. Regularizing based on this assumption leads to drastic improvements on high-dimensional data for various tasks, including domain adaptation in single-cell RNA sequencing data. These findings are supported by a theoretical analysis that indicates that the transport rank is key in overcoming the curse of dimensionality inherent to data-driven optimal transport.
1806.07348
http://arxiv.org/abs/1806.07348v3
http://arxiv.org/pdf/1806.07348v3.pdf
[ "Domain Adaptation" ]
[]
[]
wt9AF-3so5
https://paperswithcode.com/paper/election-coding-for-distributed-learning
Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks
Recent advances in large-scale distributed learning algorithms have enabled communication-efficient training via SignSGD. Unfortunately, a major issue continues to plague distributed learning: namely, Byzantine failures may incur serious degradation in learning accuracy. This paper proposes Election Coding, a coding-theoretic framework to guarantee Byzantine-robustness for SignSGD with Majority Vote, which uses minimum worker-master communication in both directions. The suggested framework explores new information-theoretic limits of finding the majority opinion when some workers could be malicious, and paves the road to implement robust and efficient distributed learning algorithms. Under this framework, we construct two types of explicit codes, random Bernoulli codes and deterministic algebraic codes, that can tolerate Byzantine attacks with a controlled amount of computational redundancy. For the Bernoulli codes, we provide upper bounds on the error probability in estimating the majority opinion, which give useful insights into code design for tolerating Byzantine attacks. As for deterministic codes, we construct an explicit code which perfectly tolerates Byzantines, and provide tight upper/lower bounds on the minimum required computational redundancy. Finally, the Byzantine-tolerance of the suggested coding schemes is confirmed by deep learning experiments on Amazon EC2 using Python with MPI4py package.
1910.06093
https://arxiv.org/abs/1910.06093v3
https://arxiv.org/pdf/1910.06093v3.pdf
[]
[]
[]
5gY4KMYJWN
https://paperswithcode.com/paper/a-hybrid-monte-carlo-ant-colony-optimization
A Hybrid Monte Carlo Ant Colony Optimization Approach for Protein Structure Prediction in the HP Model
The hydrophobic-polar (HP) model has been widely studied in the field of protein structure prediction (PSP) both for theoretical purposes and as a benchmark for new optimization strategies. In this work we introduce a new heuristic based on Ant Colony Optimization (ACO) and Markov Chain Monte Carlo (MCMC) that we call Hybrid Monte Carlo Ant Colony Optimization (HMCACO). We describe this method and compare results obtained on well-known HP instances in the 3-dimensional cubic lattice to those obtained with standard ACO and Simulated Annealing (SA). All methods were implemented using an unconstrained neighborhood and a modified objective function to prevent the creation of overlapping walks. Results show that our method performs better than the other heuristics on all benchmark instances.
1309.7690
http://arxiv.org/abs/1309.7690v1
http://arxiv.org/pdf/1309.7690v1.pdf
[]
[]
[]
tCRttuVV0Z
https://paperswithcode.com/paper/multi-resolution-data-fusion-for-super
Multi-resolution Data Fusion for Super-Resolution Electron Microscopy
Perhaps surprisingly, the total electron microscopy (EM) data collected to date is less than a cubic millimeter. Consequently, there is an enormous demand in the materials and biological sciences to image at greater speed and lower dosage, while maintaining resolution. Traditional EM imaging based on homogeneous raster-order scanning severely limits the volume of high-resolution data that can be collected, and presents a fundamental limitation to understanding physical processes such as material deformation, crack propagation, and pyrolysis. We introduce a novel multi-resolution data fusion (MDF) method for super-resolution computational EM. Our method combines innovative data acquisition with novel algorithmic techniques to dramatically improve the resolution/volume/speed trade-off. The key to our approach is to collect the entire sample at low resolution, while simultaneously collecting a small fraction of data at high resolution. The high-resolution measurements are then used to create a material-specific patch-library that is used within the "plug-and-play" framework to dramatically improve super-resolution of the low-resolution data. We present results using FEI electron microscope data that demonstrate super-resolution factors of 4x, 8x, and 16x, while substantially maintaining high image quality and reducing dosage.
1612.00874
http://arxiv.org/abs/1612.00874v1
http://arxiv.org/pdf/1612.00874v1.pdf
[ "Electron Microscopy", "Super Resolution", "Super-Resolution" ]
[]
[]
FHQv8SZ8vk
https://paperswithcode.com/paper/a-comparison-of-information-retrieval
A Comparison of Information Retrieval Techniques for Detecting Source Code Plagiarism
Plagiarism is a commonly encountered problem in academia. While there are several tools and techniques to efficiently determine plagiarism in text, the same cannot be said about source code plagiarism. To make the existing systems more efficient, we use several information retrieval techniques to find the similarity between source code files written in Java. We later use JPlag, a string-based plagiarism detection tool used in academia, to match plagiarized source code. In this paper, we aim to generalize on the efficiency and effectiveness of detecting plagiarism using different information retrieval models rather than using just string manipulation algorithms.
1902.02407
http://arxiv.org/abs/1902.02407v1
http://arxiv.org/pdf/1902.02407v1.pdf
[ "Information Retrieval" ]
[]
[]
_HxQfmrHbr
https://paperswithcode.com/paper/machine-learning-driven-synthesis-of-few
Machine learning driven synthesis of few-layered WTe2
Reducing the lateral scale of two-dimensional (2D) materials to one-dimensional (1D) has attracted substantial research interest not only to achieve competitive electronic device applications but also for the exploration of fundamental physical properties. Controllable synthesis of high-quality 1D nanoribbons (NRs) is thus highly desirable and essential for the further study. Traditional exploration of the optimal synthesis conditions of novel materials is based on the trial-and-error approach, which is time consuming, costly and laborious. Recently, machine learning (ML) has demonstrated promising capability in guiding material synthesis through effectively learning from the past data and then making recommendations. Here, we report the implementation of supervised ML for the chemical vapor deposition (CVD) synthesis of high-quality 1D few-layered WTe2 nanoribbons (NRs). The synthesis parameters of the WTe2 NRs are optimized by the trained ML model. On top of that, the growth mechanism of as-synthesized 1T' few-layered WTe2 NRs is further proposed, which may inspire the growth strategies for other 1D nanostructures. Our findings suggest that ML is a powerful and efficient approach to aid the synthesis of 1D nanostructures, opening up new opportunities for intelligent material development.
1910.04603
https://arxiv.org/abs/1910.04603v1
https://arxiv.org/pdf/1910.04603v1.pdf
[]
[]
[]
s45-fsjOWp
https://paperswithcode.com/paper/a-survey-on-domain-adaptation-theory
A survey on domain adaptation theory: learning bounds and theoretical guarantees
All famous machine learning algorithms that comprise both supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be reconstructed from newly collected data, which for some applications can be costly or impossible to obtain. Therefore, it has become necessary to develop approaches that reduce the need and the effort to obtain new labeled samples by exploiting data that are available in related areas, and using these further across similar fields. This has given rise to a new machine learning framework known as transfer learning: a learning setting inspired by the capability of a human being to extrapolate knowledge across tasks to learn more efficiently. Despite the large number of different transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning, called domain adaptation. In this sub-field, the data distribution is assumed to change across the training and the test data, while the learning task remains the same. We provide a first up-to-date description of existing results related to the domain adaptation problem that cover learning bounds based on different statistical learning frameworks.
2004.11829
https://arxiv.org/abs/2004.11829v5
https://arxiv.org/pdf/2004.11829v5.pdf
[ "Domain Adaptation", "Transfer Learning" ]
[]
[]
Iyt0SaCfxE
https://paperswithcode.com/paper/geometric-learning-and-topological-inference
Geometric Learning and Topological Inference with Biobotic Networks: Convergence Analysis
In this study, we present and analyze a framework for geometric and topological estimation for mapping of unknown environments. We consider agents mimicking motion behaviors of cyborg insects, known as biobots, and exploit coordinate-free local interactions among them to infer geometric and topological information about the environment, under minimal sensing and localization constraints. Local interactions are used to create a graphical representation referred to as the encounter graph. A metric is estimated over the encounter graph of the agents in order to construct a geometric point cloud using manifold learning techniques. Topological data analysis (TDA), in particular persistent homology, is used in order to extract topological features of the space and a classification method is proposed to infer robust features of interest (e.g. existence of obstacles). We examine the asymptotic behavior of the proposed metric in terms of the convergence to the geodesic distances in the underlying manifold of the domain, and provide stability analysis results for the topological persistence. The proposed framework and its convergences and stability analysis are demonstrated through numerical simulations and experiments.
1607.00051
http://arxiv.org/abs/1607.00051v1
http://arxiv.org/pdf/1607.00051v1.pdf
[ "Topological Data Analysis" ]
[]
[]
ht6haRsvVR
https://paperswithcode.com/paper/a-parallel-memory-efficient-epistemic-logic
A Parallel Memory-efficient Epistemic Logic Program Solver: Harder, Better, Faster
As the practical use of answer set programming (ASP) has grown with the development of efficient solvers, we expect a growing interest in extensions of ASP as their semantics stabilize and solvers supporting them mature. Epistemic Specifications, which adds modal operators K and M to the language of ASP, is one such extension. We call a program in this language an epistemic logic program (ELP). Solvers have thus far been practical for only the simplest ELPs due to exponential growth of the search space. We describe a solver that is able to solve harder problems better (e.g., without exponentially-growing memory needs w.r.t. K and M occurrences) and faster than any other known ELP solver.
1608.06910
http://arxiv.org/abs/1608.06910v2
http://arxiv.org/pdf/1608.06910v2.pdf
[]
[]
[]
H5szLqaUFs
https://paperswithcode.com/paper/distributed-learning-with-infinitely-many
Distributed Learning with Infinitely Many Hypotheses
We consider a distributed learning setup where a network of agents sequentially access realizations of a set of random variables with unknown distributions. The network objective is to find a parametrized distribution that best describes their joint observations in the sense of the Kullback-Leibler divergence. Apart from recent efforts in the literature, we analyze the case of countably many hypotheses and the case of a continuum of hypotheses. We provide non-asymptotic bounds for the concentration rate of the agents' beliefs around the correct hypothesis in terms of the number of agents, the network parameters, and the learning abilities of the agents. Additionally, we provide a novel motivation for a general set of distributed Non-Bayesian update rules as instances of the distributed stochastic mirror descent algorithm.
1605.02105
http://arxiv.org/abs/1605.02105v1
http://arxiv.org/pdf/1605.02105v1.pdf
[]
[]
[]
Cd2-Fo5XYm
https://paperswithcode.com/paper/outlier-guided-optimization-of-abdominal
Outlier Guided Optimization of Abdominal Segmentation
Abdominal multi-organ segmentation of computed tomography (CT) images has been the subject of extensive research interest. It presents a substantial challenge in medical image processing, as the shape and distribution of abdominal organs can vary greatly among the population and within an individual over time. While continuous integration of novel datasets into the training set provides potential for better segmentation performance, collection of data at scale is not only costly, but also impractical in some contexts. Moreover, it remains unclear what marginal value additional data have to offer. Herein, we propose a single-pass active learning method through human quality assurance (QA). We built on a pre-trained 3D U-Net model for abdominal multi-organ segmentation and augmented the dataset either with outlier data (e.g., exemplars for which the baseline algorithm failed) or inliers (e.g., exemplars for which the baseline algorithm worked). The new models were trained using the augmented datasets with 5-fold cross-validation (for outlier data) and withheld outlier samples (for inlier data). Manual labeling of outliers increased Dice scores with outliers by 0.130, compared to an increase of 0.067 with inliers (p<0.001, two-tailed paired t-test). By adding 5 to 37 inliers or outliers to training, we find that the marginal value of adding outliers is higher than that of adding inliers. In summary, improvement on single-organ performance was obtained without diminishing multi-organ performance or significantly increasing training time. Hence, identification and correction of baseline failure cases present an effective and efficient method of selecting training data to improve algorithm performance.
2002.04098
https://arxiv.org/abs/2002.04098v1
https://arxiv.org/pdf/2002.04098v1.pdf
[ "Active Learning", "Computed Tomography (CT)" ]
[ "Concatenated Skip Connection", "ReLU", "Max Pooling", "Convolution", "U-Net" ]
[]
eWcDFq_N-B
https://paperswithcode.com/paper/automatic-generation-of-algorithms-for-black
Automatic Generation of Algorithms for Black-Box Robust Optimisation Problems
We develop algorithms capable of tackling robust black-box optimisation problems, where the number of model runs is limited. When a desired solution cannot be implemented exactly the aim is to find a robust one, where the worst case in an uncertainty neighbourhood around a solution still performs well. This requires a local maximisation within a global minimisation. To investigate improved optimisation methods for robust problems, and remove the need to manually determine an effective heuristic and parameter settings, we employ an automatic generation of algorithms approach: Grammar-Guided Genetic Programming. We develop algorithmic building blocks to be implemented in a Particle Swarm Optimisation framework, define the rules for constructing heuristics from these components, and evolve populations of search algorithms. Our algorithmic building blocks combine elements of existing techniques and new features, resulting in the investigation of a novel heuristic solution space. As a result of this evolutionary process we obtain algorithms which improve upon the current state of the art. We also analyse the component level breakdowns of the populations of algorithms developed against their performance, to identify high-performing heuristic components for robust problems.
2004.07294
https://arxiv.org/abs/2004.07294v1
https://arxiv.org/pdf/2004.07294v1.pdf
[]
[]
[]
W1RaStCQMx
https://paperswithcode.com/paper/semantic-discord-finding-unusual-local
Semantic Discord: Finding Unusual Local Patterns for Time Series
Finding an anomalous subsequence in a long time series is a very important but difficult problem. Existing state-of-the-art methods have been focusing on searching for the subsequence that is the most dissimilar to the rest of the subsequences; however, they do not take into account the background patterns that contain the anomalous candidates. As a result, such approaches are likely to miss local anomalies. We introduce a new definition named semantic discord, which incorporates the context information from larger subsequences containing the anomaly candidates. We propose an efficient algorithm with a derived lower bound that is up to 3 orders of magnitude faster than the brute force algorithm on real-world data. We demonstrate through extensive experiments that our method significantly outperforms the state-of-the-art methods in locating anomalies. We further explain the interpretability of semantic discord.
2001.11842
https://arxiv.org/abs/2001.11842v2
https://arxiv.org/pdf/2001.11842v2.pdf
[ "Time Series" ]
[]
[]
8lQDYvNCD9
https://paperswithcode.com/paper/sockpuppet-detection-in-wikipedia-a-corpus-of
Sockpuppet Detection in Wikipedia: A Corpus of Real-World Deceptive Writing for Linking Identities
This paper describes the corpus of sockpuppet cases we gathered from Wikipedia. A sockpuppet is an online user account created with a fake identity for the purpose of covering abusive behavior and/or subverting the editing regulation process. We used a semi-automated method for crawling and curating a dataset of real sockpuppet investigation cases. To the best of our knowledge, this is the first corpus available on real-world deceptive writing. We describe the process for crawling the data and some preliminary results that can be used as baseline for benchmarking research. The dataset will be released under a Creative Commons license from our project website: http://docsig.cis.uab.edu.
1310.6772
http://arxiv.org/abs/1310.6772v1
http://arxiv.org/pdf/1310.6772v1.pdf
[]
[]
[]
5N_8UtE2ez
https://paperswithcode.com/paper/on-dropout-overfitting-and-interaction
On Dropout, Overfitting, and Interaction Effects in Deep Neural Networks
We examine Dropout through the perspective of interactions: learned effects that combine multiple input variables. Given $N$ variables, there are $O(N^2)$ possible pairwise interactions, $O(N^3)$ possible 3-way interactions, etc. We show that Dropout implicitly sets a learning rate for interaction effects that decays exponentially with the size of the interaction, corresponding to a regularizer that balances against the hypothesis space which grows exponentially with number of variables in the interaction. This understanding of Dropout has implications for the optimal Dropout rate: higher Dropout rates should be used when we need stronger regularization against spurious high-order interactions. This perspective also issues caution against using Dropout to measure term saliency because Dropout regularizes against terms for high-order interactions. Finally, this view of Dropout as a regularizer of interaction effects provides insight into the varying effectiveness of Dropout for different architectures and data sets. We also compare Dropout to regularization via weight decay and early stopping and find that it is difficult to obtain the same regularization effect for high-order interactions with these methods.
2007.00823
https://arxiv.org/abs/2007.00823v1
https://arxiv.org/pdf/2007.00823v1.pdf
[]
[ "Weight Decay", "Early Stopping", "Dropout" ]
[]
7kubH1Mo3K
https://paperswithcode.com/paper/dream-a-challenge-data-set-and-models-for
DREAM: A Challenge Data Set and Models for Dialogue-Based Reading Comprehension
We present DREAM, the first dialogue-based multiple-choice reading comprehension data set. Collected from English as a Foreign Language examinations designed by human experts to evaluate the comprehension level of Chinese learners of English, our data set contains 10,197 multiple-choice questions for 6,444 dialogues. In contrast to existing reading comprehension data sets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge. We apply several popular neural reading comprehension models that primarily exploit surface information within the text and find them to, at best, just barely outperform a rule-based approach. We next investigate the effects of incorporating dialogue structure and different kinds of general world knowledge into both rule-based and (neural and non-neural) machine learning-based reading comprehension models. Experimental results on the DREAM data set show the effectiveness of dialogue structure and general world knowledge. DREAM is available at https://dataset.org/dream/.
null
https://www.aclweb.org/anthology/Q19-1014/
https://www.aclweb.org/anthology/Q19-1014
[ "Dialogue Understanding", "Reading Comprehension" ]
[]
[]
rgVg36W7TV
https://paperswithcode.com/paper/heavy-hitters-via-cluster-preserving
Heavy hitters via cluster-preserving clustering
In turnstile $\ell_p$ $\varepsilon$-heavy hitters, one maintains a high-dimensional $x\in\mathbb{R}^n$ subject to $\texttt{update}(i,\Delta)$ causing $x_i\leftarrow x_i + \Delta$, where $i\in[n]$, $\Delta\in\mathbb{R}$. Upon receiving a query, the goal is to report a small list $L\subset[n]$, $|L| = O(1/\varepsilon^p)$, containing every "heavy hitter" $i\in[n]$ with $|x_i| \ge \varepsilon \|x_{\overline{1/\varepsilon^p}}\|_p$, where $x_{\overline{k}}$ denotes the vector obtained by zeroing out the largest $k$ entries of $x$ in magnitude. For any $p\in(0,2]$ the CountSketch solves $\ell_p$ heavy hitters using $O(\varepsilon^{-p}\log n)$ words of space with $O(\log n)$ update time, $O(n\log n)$ query time to output $L$, and whose output after any query is correct with high probability (whp) $1 - 1/poly(n)$. Unfortunately the query time is very slow. To remedy this, the work [CM05] proposed for $p=1$ in the strict turnstile model, a whp correct algorithm achieving suboptimal space $O(\varepsilon^{-1}\log^2 n)$, worse update time $O(\log^2 n)$, but much better query time $O(\varepsilon^{-1}poly(\log n))$. We show this tradeoff between space and update time versus query time is unnecessary. We provide a new algorithm, ExpanderSketch, which in the most general turnstile model achieves optimal $O(\varepsilon^{-p}\log n)$ space, $O(\log n)$ update time, and fast $O(\varepsilon^{-p}poly(\log n))$ query time, and whp correctness. Our main innovation is an efficient reduction from the heavy hitters to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster. Since every heavy hitter must be found, correctness requires that every cluster be found. We then develop a "cluster-preserving clustering" algorithm, partitioning the graph into clusters without destroying any original cluster.
1604.01357
http://arxiv.org/abs/1604.01357v1
http://arxiv.org/pdf/1604.01357v1.pdf
[]
[]
[]
4JJvCfz61Q
https://paperswithcode.com/paper/what-does-it-mean-to-solve-the-problem-of
What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems
The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the way decisions are made on who is eligible for jobs, and why, are rapidly changing with the advent and growth in uptake of automated hiring systems (AHSs) powered by data-driven tools. Key concerns about such AHSs include the lack of transparency and potential limitation of access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and promote diversity and inclusion at work. Yet whilst these tools have a growing user-base around the world, such claims of bias mitigation are rarely scrutinised and evaluated, and when done so, have almost exclusively been from a US socio-legal perspective. In this paper, we introduce a perspective outside the US by critically examining how three prominent automated hiring systems (AHSs) in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US in terms not only of hiring and equality law, but also in terms of data protection (DP) law. We argue that this might be important for addressing concerns about transparency and could mean a challenge to building bias mitigation into AHSs definitively capable of meeting EU legal standards. This is significant as these AHSs, especially those developed in the US, may obscure rather than improve systemic discrimination in the workplace.
1910.06144
https://arxiv.org/abs/1910.06144v2
https://arxiv.org/pdf/1910.06144v2.pdf
[]
[]
[]
TwP82tP1B2
https://paperswithcode.com/paper/hierarchical-modeling-and-shrinkage-for-user
Hierarchical Modeling and Shrinkage for User Session Length Prediction in Media Streaming
An important metric of users' satisfaction and engagement within on-line streaming services is the user session length, i.e. the amount of time they spend on a service continuously without interruption. Being able to predict this value directly benefits the recommendation and ad pacing contexts in music and video streaming services. Recent research has shown that predicting the exact amount of time spent is highly nontrivial due to many external factors for which a user can end a session, and the lack of predictive covariates. Most of the other related literature on duration based user engagement has focused on dwell time for websites, for search and display ads, mainly for post-click satisfaction prediction or ad ranking. In this work we present a novel framework inspired by hierarchical Bayesian modeling to predict, at the moment of login, the amount of time a user will spend in the streaming service. The time spent by a user on a platform depends upon user-specific latent variables which are learned via hierarchical shrinkage. Our framework enjoys theoretical guarantees and naturally incorporates flexible parametric/nonparametric models on the covariates, including models robust to outliers. Our proposal is found to outperform state-of-the-art estimators in terms of efficiency and predictive performance on real world public and private datasets.
1803.01440
http://arxiv.org/abs/1803.01440v2
http://arxiv.org/pdf/1803.01440v2.pdf
[]
[]
[]
0mFlytoqDq
https://paperswithcode.com/paper/discrete-potts-model-for-generating
Discrete Potts Model for Generating Superpixels on Noisy Images
Many computer vision applications, such as object recognition and segmentation, increasingly build on superpixels. However, so far there have been few superpixel algorithms that systematically deal with noisy images. We propose to first decompose the image into equal-sized rectangular patches, which also sets the maximum superpixel size. Within each patch, a Potts model for simultaneous segmentation and denoising is applied, which guarantees connected and non-overlapping superpixels and also produces a denoised image. The corresponding optimization problem is formulated as a mixed integer linear program (MILP), and solved by a commercial solver. Extensive experiments on BSDS500 dataset images with noise are compared with other state-of-the-art superpixel methods. Our method achieves the best result in terms of a combined score (OP) composed of the under-segmentation error, boundary recall and compactness.
1803.07351
http://arxiv.org/abs/1803.07351v1
http://arxiv.org/pdf/1803.07351v1.pdf
[ "Denoising", "Object Recognition" ]
[]
[]
GwH-pi0pKA
https://paperswithcode.com/paper/a-path-towards-quantum-advantage-in-training
A Path Towards Quantum Advantage in Training Deep Generative Models with Quantum Annealers
The development of quantum-classical hybrid (QCH) algorithms is critical to achieve state-of-the-art computational models. A QCH variational autoencoder (QVAE) was introduced in Ref. [1] by some of the authors of this paper. QVAE consists of a classical auto-encoding structure realized by traditional deep neural networks to perform inference to, and generation from, a discrete latent space. The latent generative process is formalized as thermal sampling from either a quantum or classical Boltzmann machine (QBM or BM). This setup allows quantum-assisted training of deep generative models by physically simulating the generative process with quantum annealers. In this paper, we have successfully employed D-Wave quantum annealers as Boltzmann samplers to perform quantum-assisted, end-to-end training of QVAE. The hybrid structure of QVAE allows us to deploy current-generation quantum annealers in QCH generative models to achieve competitive performance on datasets such as MNIST. The results presented in this paper suggest that commercially available quantum annealers can be deployed, in conjunction with well-crafted classical deep neural networks, to achieve competitive results in unsupervised and semisupervised tasks on large-scale datasets. We also provide evidence that our setup is able to exploit large latent-space (Q)BMs, which develop slowly mixing modes. This expressive latent space results in slow and inefficient classical sampling, and paves the way to achieve quantum advantage with quantum annealing in realistic sampling applications.
1912.02119
https://arxiv.org/abs/1912.02119v1
https://arxiv.org/pdf/1912.02119v1.pdf
[]
[ "AutoEncoder" ]
[]
-KDZXl-Grv
https://paperswithcode.com/paper/hhu-at-semeval-2019-task-6-context-does
HHU at SemEval-2019 Task 6: Context Does Matter - Tackling Offensive Language Identification and Categorization with ELMo
We present our results for OffensEval: Identifying and Categorizing Offensive Language in Social Media (SemEval 2019 - Task 6). Our results show that context embeddings are important features for the three different sub-tasks in connection with both classical machine learning and deep learning. Our best model reached place 3 of 75 in sub-task B with a macro $F_1$ of 0.719. Our approaches for sub-tasks A and C perform less well but still deliver promising results.
null
https://www.aclweb.org/anthology/S19-2112/
https://www.aclweb.org/anthology/S19-2112
[ "Language Identification" ]
[]
[]
b35rdg_Mv6
https://paperswithcode.com/paper/sold-sub-optimal-low-rank-decomposition-for
SOLD: Sub-Optimal Low-rank Decomposition for Efficient Video Segmentation
This paper investigates how to perform robust and efficient unsupervised video segmentation while suppressing the effects of data noises and/or corruptions. We propose a general algorithm, called Sub-Optimal Low-rank Decomposition (SOLD), which pursues the low-rank representation for video segmentation. Given the supervoxels affinity matrix of an observed video sequence, SOLD seeks a sub-optimal solution by making the matrix rank explicitly determined. In particular, the affinity matrix with the rank fixed can be decomposed into two sub-matrices of low rank, and then we iteratively optimize them with closed-form solutions. Moreover, we incorporate a discriminative replication prior into our framework based on the observation that small-size video patterns tend to recur frequently within the same object. The video can be segmented into several spatio-temporal regions by applying the Normalized-Cut (NCut) algorithm with the solved low-rank representation. To process the streaming videos, we apply our algorithm sequentially over a batch of frames over time, in which we also develop several temporally consistent constraints improving the robustness. Extensive experiments on the public benchmarks demonstrate superior performance of our framework over other state-of-the-art approaches.
null
http://openaccess.thecvf.com/content_cvpr_2015/html/Li_SOLD_Sub-Optimal_Low-rank_2015_CVPR_paper.html
http://openaccess.thecvf.com/content_cvpr_2015/papers/Li_SOLD_Sub-Optimal_Low-rank_2015_CVPR_paper.pdf
[ "Video Segmentation", "Video Semantic Segmentation" ]
[]
[]
BUr1ldXfAZ
https://paperswithcode.com/paper/a-survey-of-end-to-end-driving-architectures
A Survey of End-to-End Driving: Architectures and Training Methods
Autonomous driving is of great interest to industry and academia alike. The use of machine learning approaches for autonomous driving has long been studied, but mostly in the context of perception. In this paper we take a deeper look on the so called end-to-end approaches for autonomous driving, where the entire driving pipeline is replaced with a single neural network. We review the learning methods, input and output modalities, network architectures and evaluation schemes in end-to-end driving literature. Interpretability and safety are discussed separately, as they remain challenging for this approach. Beyond providing a comprehensive overview of existing methods, we conclude the review with an architecture that combines the most promising elements of the end-to-end autonomous driving systems.
2003.06404
https://arxiv.org/abs/2003.06404v1
https://arxiv.org/pdf/2003.06404v1.pdf
[ "Autonomous Driving" ]
[]
[]
zKqd-edmpG
https://paperswithcode.com/paper/modeling-nanoconfinement-effects-using-active
Modeling nanoconfinement effects using active learning
Predicting the spatial configuration of gas molecules in nanopores of shale formations is crucial for fluid flow forecasting and hydrocarbon reserves estimation. The key challenge in these tight formations is that the majority of the pore sizes are less than 50 nm. At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions. For instance, gas adsorption to the pore walls could account for up to 85% of the total hydrocarbon volume in a tight reservoir. Although there are analytical solutions that describe this phenomenon for simple geometries, they are not suitable for describing realistic pores, where surface roughness and geometric anisotropy play important roles. To describe these, molecular dynamics (MD) simulations are used since they consider fluid-solid and fluid-fluid interactions at the molecular level. However, MD simulations are computationally expensive, and are not able to simulate scales larger than a few connected nanopores. We present a method for building and training physics-based deep learning surrogate models to carry out fast and accurate predictions of molecular configurations of gas inside nanopores. Since training deep learning models requires extensive databases that are computationally expensive to create, we employ active learning (AL). AL reduces the overhead of creating comprehensive sets of high-fidelity data by determining where the model uncertainty is greatest, and running simulations on the fly to minimize it. The proposed workflow enables nanoconfinement effects to be rigorously considered at the mesoscale where complex connected sets of nanopores control key applications such as hydrocarbon recovery and CO2 sequestration.
2005.02587
https://arxiv.org/abs/2005.02587v2
https://arxiv.org/pdf/2005.02587v2.pdf
[ "Active Learning" ]
[]
[]
YMy8y6ZCpB
https://paperswithcode.com/paper/deepiso-a-deep-learning-model-for-peptide
DeepIso: A Deep Learning Model for Peptide Feature Detection
Liquid chromatography with tandem mass spectrometry (LC-MS/MS) based proteomics is a well-established research field with major applications such as identification of disease biomarkers, drug discovery, drug design and development. In proteomics, protein identification and quantification is a fundamental task, which is done by first enzymatically digesting it into peptides, and then analyzing peptides by LC-MS/MS instruments. The peptide feature detection and quantification from an LC-MS map is the first step in typical analysis workflows. In this paper we propose a novel deep learning based model, DeepIso, that uses Convolutional Neural Networks (CNNs) to scan an LC-MS map to detect peptide features and estimate their abundance. Existing tools are often designed with limited engineered features based on domain knowledge, and depend on pretrained parameters which are hardly updated despite huge amount of new coming proteomic data. Our proposed model, on the other hand, is capable of learning multiple levels of representation of high dimensional data through its many layers of neurons and continuously evolving with newly acquired data. To evaluate our proposed model, we use an antibody dataset including a heavy and a light chain, each digested by Asp-N, Chymotrypsin, Trypsin, thus giving six LC-MS maps for the experiment. Our model achieves 93.21% sensitivity with specificity of 99.44% on this dataset. Our results demonstrate that novel deep learning tools are desirable to advance the state-of-the-art in protein identification and quantification.
1801.01539
http://arxiv.org/abs/1801.01539v1
http://arxiv.org/pdf/1801.01539v1.pdf
[ "Drug Discovery" ]
[]
[]
MpQXSSQuDw
https://paperswithcode.com/paper/evidence-based-explanation-to-promote
Evidence-based explanation to promote fairness in AI systems
As Artificial Intelligence (AI) technology gets more intertwined with every system, people are using AI to make decisions on their everyday activities. In simple contexts, such as Netflix recommendations, or in more complex contexts like judicial scenarios, AI is part of people's decisions. People make decisions and usually need to explain their decisions to others in some manner. It is particularly critical in contexts where human expertise is central to decision-making. In order to explain their decisions with AI support, people need to understand how AI is part of that decision. When considering the aspect of fairness, the role that AI has in a decision-making process becomes even more sensitive since it affects the fairness and the responsibility of those people making the ultimate decision. We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'. In this position paper, we discuss our approach for AI systems using fairness sensitive cases in the literature.
2003.01525
https://arxiv.org/abs/2003.01525v1
https://arxiv.org/pdf/2003.01525v1.pdf
[ "Decision Making", "fairness" ]
[]
[]
N8ciErVgPB
https://paperswithcode.com/paper/change-your-singer-a-transfer-learning
Change your singer: a transfer learning generative adversarial framework for song to song conversion
Have you ever wondered how a song might sound if performed by a different artist? In this work, we propose SCM-GAN, an end-to-end non-parallel song conversion system powered by generative adversarial and transfer learning that allows users to listen to a selected target singer singing any song. SCM-GAN first separates songs into vocals and instrumental music using a U-Net network, then converts the vocal segments to the target singer using advanced CycleGAN-VC, before merging the converted vocals with their corresponding background music. SCM-GAN is first initialized with feature representations learned from a state-of-the-art voice-to-voice conversion and then trained on a dataset of non-parallel songs. Furthermore, SCM-GAN is evaluated against a set of metrics including global variance GV and modulation spectra MS on the 24 Mel-cepstral coefficients (MCEPs). Transfer learning improves the GV by 35% and the MS by 13% on average. A subjective comparison is conducted to test the user satisfaction with the quality and the naturalness of the conversion. Results show above par similarity between SCM-GAN's output and the target (70% on average) as well as great naturalness of the converted songs.
1911.02933
https://arxiv.org/abs/1911.02933v2
https://arxiv.org/pdf/1911.02933v2.pdf
[ "Transfer Learning", "Voice Conversion" ]
[ "Concatenated Skip Connection", "ReLU", "Max Pooling", "Convolution", "U-Net" ]
[]
ykpBfsdFld
https://paperswithcode.com/paper/transforming-spectrum-and-prosody-for
Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data
Emotional voice conversion aims to convert the spectrum and prosody to change the emotional patterns of speech, while preserving the speaker identity and linguistic content. Many studies require parallel speech data between different emotional patterns, which is not practical in real life. Moreover, they often model the conversion of fundamental frequency (F0) with a simple linear transform. As F0 is a key aspect of intonation that is hierarchical in nature, we believe that it is more adequate to model F0 in different temporal scales by using wavelet transform. We propose a CycleGAN network to find an optimal pseudo pair from non-parallel training data by learning forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. We also study the use of continuous wavelet transform (CWT) to decompose F0 into ten temporal scales that describe speech prosody at different time resolutions, for effective F0 conversion. Experimental results show that our proposed framework outperforms the baselines both in objective and subjective evaluations.
2002.00198
https://arxiv.org/abs/2002.00198v4
https://arxiv.org/pdf/2002.00198v4.pdf
[ "Voice Conversion" ]
[ "Batch Normalization", "Residual Connection", "PatchGAN", "ReLU", "Tanh Activation", "Residual Block", "Instance Normalization", "Convolution", "Leaky ReLU", "Sigmoid Activation", "GAN Least Squares Loss", "Cycle Consistency Loss", "CycleGAN" ]
[]
c98LHxRuC9
https://paperswithcode.com/paper/droidstar-callback-typestates-for-android
DroidStar: Callback Typestates for Android Classes
Event-driven programming frameworks, such as Android, are based on components with asynchronous interfaces. The protocols for interacting with these components can often be described by finite-state machines we dub *callback typestates*. Callback typestates are akin to classical typestates, with the difference that their outputs (callbacks) are produced asynchronously. While useful, these specifications are not commonly available, because writing them is difficult and error-prone. Our goal is to make the task of producing callback typestates significantly easier. We present a callback typestate assistant tool, DroidStar, that requires only limited user interaction to produce a callback typestate. Our approach is based on an active learning algorithm, L*. We improved the scalability of equivalence queries (a key component of L*), thus making active learning tractable on the Android system. We use DroidStar to learn callback typestates for Android classes both for cases where one is already provided by the documentation, and for cases where the documentation is unclear. The results show that DroidStar learns callback typestates accurately and efficiently. Moreover, in several cases, the synthesized callback typestates uncovered surprising and undocumented behaviors.
1701.07842
http://arxiv.org/abs/1701.07842v3
http://arxiv.org/pdf/1701.07842v3.pdf
[ "Active Learning" ]
[]
[]
0rDgd-odj-
https://paperswithcode.com/paper/empirical-risk-minimization-is-consistent
Empirical risk minimization is consistent with the mean absolute percentage error
We study in this paper the consequences of using the Mean Absolute Percentage Error (MAPE) as a measure of quality for regression models. We show that finding the best model under the MAPE is equivalent to doing weighted Mean Absolute Error (MAE) regression. We also show that, under some assumptions, universal consistency of Empirical Risk Minimization remains possible using the MAPE.
1509.02357
http://arxiv.org/abs/1509.02357v1
http://arxiv.org/pdf/1509.02357v1.pdf
[]
[]
[]