Dataset columns:
paper_id: string (length 10)
paper_url: string (length 37-80)
title: string (length 4-518)
abstract: string (length 3-7.27k)
arxiv_id: string (length 9-16)
url_abs: string (length 18-601)
url_pdf: string (length 21-601)
aspect_tasks: sequence
aspect_methods: sequence
aspect_datasets: sequence
B32MV46FTA
https://paperswithcode.com/paper/efficient-machine-learning-for-large-scale
Efficient Machine Learning for Large-Scale Urban Land-Use Forecasting in Sub-Saharan Africa
Urbanization is a common phenomenon in developing countries and it poses serious challenges when not managed effectively. Lack of proper planning and management may cause the encroachment of urban fabrics into reserved or special regions which in turn can lead to an unsustainable increase in population. Ineffective management and planning generally leads to depreciated standard of living, where physical hazards like traffic accidents and disease vector breeding become prevalent. In order to support urban planners and policy makers in effective planning and accurate decision making, we investigate urban land-use in sub-Saharan Africa. Land-use dynamics serves as a crucial parameter in current strategies and policies for natural resource management and monitoring. Focusing on Nairobi, we use an efficient deep learning approach with patch-based prediction to classify regions based on land-use from 2004 to 2018 on a quarterly basis. We estimate changes in land-use within this period, and using the Autoregressive Integrated Moving Average (ARIMA) model, our results forecast land-use for a given future date. Furthermore, we provide labelled land-use maps which will be helpful to urban planners.
1908.00340
https://arxiv.org/abs/1908.00340v1
https://arxiv.org/pdf/1908.00340v1.pdf
[ "Decision Making" ]
[]
[]
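The record above describes patch-based land-use classification followed by ARIMA forecasting of land-use change. As a hedged illustration only (the quarterly series below is synthetic, not the Nairobi data, and the model order is arbitrary), a minimal ARIMA forecast with statsmodels might look like this:

```python
# Hypothetical sketch: forecasting a quarterly land-use fraction with ARIMA,
# in the spirit of the study above. The series is synthetic, not the Nairobi data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
quarters = pd.period_range("2004Q1", "2018Q4", freq="Q")
trend = 0.40 + 0.004 * np.arange(len(quarters))          # assumed slow urban growth
built_up = pd.Series(np.clip(trend + 0.01 * rng.normal(size=len(quarters)), 0, 1),
                     index=quarters)

model = ARIMA(built_up, order=(1, 1, 1))                  # order chosen arbitrarily here
forecast = model.fit().forecast(steps=8)                  # two years ahead, quarterly
print(forecast)
```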
jhB8you7CK
https://paperswithcode.com/paper/one-step-regression-and-classification-with
One-step regression and classification with crosspoint resistive memory arrays
Machine learning has been getting a great deal of attention in recent years as a tool to process big data generated by ubiquitous sensors in our daily life. High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge, i.e., without the support of a remote frame server in the cloud. Such requirements challenge the complementary metal-oxide-semiconductor (CMOS) technology, which is limited by Moore's law approaching its end and by the communication bottleneck in conventional computing architectures. Novel computing concepts, architectures and devices are thus strongly needed to accelerate data-intensive applications. Here we show that a crosspoint resistive memory circuit with a feedback configuration can execute linear regression and logistic regression in just one step by computing the pseudoinverse matrix of the data within the memory. The most elementary learning operations, namely the regression of a sequence of data and the classification of a set of data, can thus be executed in a single computational step by the novel technology. One-step learning is further supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition. The results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
2005.01988
https://arxiv.org/abs/2005.01988v1
https://arxiv.org/pdf/2005.01988v1.pdf
[]
[ "Logistic Regression", "Linear Regression" ]
[]
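The paper above executes regression "in one step" by computing the pseudoinverse of the data inside the memory array. A minimal software analogue of that computation, on made-up data, is the closed-form least-squares solution via numpy's pseudoinverse:

```python
# Software stand-in for the "one-step" regression described above: the crosspoint
# array physically computes the pseudoinverse solution that numpy obtains here.
# All data are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 100 samples, 5 features
X = np.hstack([X, np.ones((100, 1))])   # bias column
true_w = np.array([1.5, -2.0, 0.3, 0.0, 0.7, 0.1])
y = X @ true_w + 0.05 * rng.normal(size=100)

# Least-squares weights in "one step" via the Moore-Penrose pseudoinverse
w = np.linalg.pinv(X) @ y
print(np.round(w, 2))
```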
CAWVGbbavA
https://paperswithcode.com/paper/moralstrength-exploiting-a-moral-lexicon-and
MoralStrength: Exploiting a Moral Lexicon and Embedding Similarity for Moral Foundations Prediction
Moral rhetoric plays a fundamental role in how we perceive and interpret the information we receive, greatly influencing our decision-making process. Especially when it comes to controversial social and political issues, our opinions and attitudes are hardly ever based on evidence alone. The Moral Foundations Dictionary (MFD) was developed to operationalize moral values in text. In this study, we present MoralStrength, a lexicon of approximately 1,000 lemmas, obtained as an extension of the Moral Foundations Dictionary, based on WordNet synsets. Moreover, for each lemma it provides a crowdsourced numeric assessment of Moral Valence, indicating the strength with which a lemma expresses the specific value. We evaluated the predictive potential of this moral lexicon, defining three utilization approaches of increasing complexity, ranging from lemmas' statistical properties to a deep learning approach using word embeddings based on semantic similarity. Logistic regression models trained on the features extracted from MoralStrength significantly outperformed the current state of the art, reaching an F1-score of 87.6% over the previous 62.4% (p-value<0.01), and an average F1-score of 86.25% over six different datasets. Such findings pave the way for further research, allowing for an in-depth understanding of moral narratives in text for a wide range of social issues.
1904.08314
https://arxiv.org/abs/1904.08314v2
https://arxiv.org/pdf/1904.08314v2.pdf
[ "Decision Making", "Semantic Similarity", "Semantic Textual Similarity", "Word Embeddings" ]
[ "Logistic Regression" ]
[]
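The MoralStrength abstract describes extracting lexicon-based features and training logistic regression on them. The sketch below is a hypothetical, heavily simplified stand-in: the toy lexicon, documents and labels are invented, and only the "statistical properties of lemmas" feature approach is shown:

```python
# Hedged sketch: logistic regression over lexicon-derived features, loosely
# following the MoralStrength evaluation. The lexicon, documents and labels
# below are placeholders, not the released resource.
import numpy as np
from sklearn.linear_model import LogisticRegression

moral_valence = {"harm": 2.1, "care": 7.8, "cheat": 1.5, "fair": 8.2}  # toy lexicon

def lexicon_features(doc: str) -> list:
    scores = [moral_valence[w] for w in doc.lower().split() if w in moral_valence]
    # Simple statistical properties of matched lemmas: mean, max, count
    return [float(np.mean(scores)) if scores else 0.0,
            max(scores) if scores else 0.0,
            float(len(scores))]

docs = ["they cheat and harm people", "a fair policy that shows care",
        "harm everywhere", "care and fair treatment"]
labels = [0, 1, 0, 1]  # e.g. 1 = moral foundation expressed

X = np.array([lexicon_features(d) for d in docs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```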
Q3OdGFEDxn
https://paperswithcode.com/paper/gtea-representation-learning-for-temporal
GTEA: Representation Learning for Temporal Interaction Graphs via Edge Aggregation
We consider the problem of representation learning for temporal interaction graphs where a network of entities with complex interactions over an extended period of time is modeled as a graph with a rich set of node and edge attributes. In particular, an edge between a node-pair within the graph corresponds to a multi-dimensional time-series. To fully capture and model the dynamics of the network, we propose GTEA, a framework of representation learning for temporal interaction graphs with per-edge time-based aggregation. Under GTEA, a Graph Neural Network (GNN) is integrated with a state-of-the-art sequence model, such as LSTM, Transformer and their time-aware variants. The sequence model generates edge embeddings to encode temporal interaction patterns between each pair of nodes, while the GNN-based backbone learns the topological dependencies and relationships among different nodes. GTEA also incorporates a sparsity-inducing self-attention mechanism to distinguish and focus on the more important neighbors of each node during the aggregation process. By capturing temporal interactive dynamics together with multi-dimensional node and edge attributes in a network, GTEA can learn fine-grained representations for a temporal interaction graph to enable or facilitate other downstream data analytic tasks. Experimental results show that GTEA outperforms state-of-the-art schemes including GraphSAGE, APPNP, and TGAT by delivering higher accuracy (100.00%, 98.51%, 98.05%, 79.90%) and macro-F1 score (100.00%, 98.51%, 96.68%, 79.90%) over four large-scale real-world datasets for binary/multi-class node classification.
2009.05266
https://arxiv.org/abs/2009.05266v2
https://arxiv.org/pdf/2009.05266v2.pdf
[ "Node Classification", "Representation Learning", "Time Series" ]
[ "Sigmoid Activation", "Layer Normalization", "Tanh Activation", "LSTM", "Dropout", "Dense Connections", "BPE", "Label Smoothing", "Multi-Head Attention", "Scaled Dot-Product Attention", "Adam", "Residual Connection", "Softmax", "Transformer" ]
[]
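GTEA encodes each edge's multi-dimensional time series with a sequence model and aggregates edge embeddings into node representations. The PyTorch sketch below is not the authors' implementation; it only illustrates a per-edge LSTM encoding followed by a simple mean aggregation over incoming edges, with made-up shapes:

```python
# Illustrative sketch of the edge-encoding idea described above (not GTEA itself):
# an LSTM embeds each edge's time series, then edge embeddings are mean-aggregated
# per destination node as a stand-in for the GNN step. All shapes are arbitrary.
import torch
import torch.nn as nn

num_nodes, num_edges, seq_len, feat_dim, emb_dim = 5, 8, 10, 3, 16
edge_series = torch.randn(num_edges, seq_len, feat_dim)   # per-edge time series
dst_nodes = torch.randint(0, num_nodes, (num_edges,))     # destination of each edge

lstm = nn.LSTM(input_size=feat_dim, hidden_size=emb_dim, batch_first=True)
_, (h_n, _) = lstm(edge_series)
edge_emb = h_n[-1]                                         # (num_edges, emb_dim)

# Mean-aggregate incoming edge embeddings per node
node_emb = torch.zeros(num_nodes, emb_dim)
counts = torch.zeros(num_nodes, 1)
node_emb.index_add_(0, dst_nodes, edge_emb)
counts.index_add_(0, dst_nodes, torch.ones(num_edges, 1))
node_emb = node_emb / counts.clamp(min=1)
print(node_emb.shape)
```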
xz7K7I57fc
https://paperswithcode.com/paper/sygus-comp-2017-results-and-analysis
SyGuS-Comp 2017: Results and Analysis
Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula phi in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations. Such a synthesis problem can be formally defined in SyGuS-IF, a language that is built on top of SMT-LIB. The Syntax-Guided Synthesis Competition (SyGuS-Comp) is an effort to facilitate, bring together and accelerate research and development of efficient solvers for SyGuS by providing a platform for evaluating different synthesis techniques on a comprehensive set of benchmarks. In this year's competition six new solvers competed on over 1500 benchmarks. This paper presents and analyses the results of SyGuS-Comp'17.
1711.11438
http://arxiv.org/abs/1711.11438v1
http://arxiv.org/pdf/1711.11438v1.pdf
[]
[]
[]
52bZPrRFsI
https://paperswithcode.com/paper/using-semantic-web-services-for-ai-based
Using Semantic Web Services for AI-Based Research in Industry 4.0
The transition to Industry 4.0 requires smart manufacturing systems that are easily configurable and provide a high level of flexibility during manufacturing in order to achieve mass customization or to support cloud manufacturing. To realize this, Cyber-Physical Systems (CPSs) combined with Artificial Intelligence (AI) methods find their way into manufacturing shop floors. For using AI methods in the context of Industry 4.0, semantic web services are indispensable to provide a reasonable abstraction of the underlying manufacturing capabilities. In this paper, we present semantic web services for AI-based research in Industry 4.0. Therefore, we developed more than 300 semantic web services for a physical simulation factory based on Web Ontology Language for Web Services (OWL-S) and Web Service Modeling Ontology (WSMO) and linked them to an already existing domain ontology for intelligent manufacturing control. Suitable for the requirements of CPS environments, our pre- and postconditions are verified in near real-time by invoking other semantic web services in contrast to complex reasoning within the knowledge base. Finally, we evaluate our implementation by executing a cyber-physical workflow composed of semantic web services using a workflow management system.
2007.03580
https://arxiv.org/abs/2007.03580v1
https://arxiv.org/pdf/2007.03580v1.pdf
[]
[]
[]
voGphl1I9e
https://paperswithcode.com/paper/resource-planning-for-rescue-operations
Resource Planning For Rescue Operations
After an earthquake, disaster sites pose a multitude of health and safety concerns. A rescue operation of people trapped in the ruins after an earthquake disaster requires a series of intelligent behaviors, including planning. For a successful rescue operation, given a limited number of available actions and regulations, the role of planning in rescue operations is crucial. Fortunately, recent developments in automated planning by the artificial intelligence community can help different organizations in this crucial task. Due to the number of rules and regulations, we believe that a rule-based system for planning can be helpful for this specific planning problem. In this research work, we use logic rules to represent rescue and related regulations, together with a logic-based planner to solve this complicated problem. Although this research is still in the prototyping and modeling stage, it clearly shows that rule-based languages can be a good infrastructure for this computational task. The results of this research can be used by different organizations, such as the Iranian Red Crescent Society and the International Institute of Seismology and Earthquake Engineering (IISEE).
1607.03979
http://arxiv.org/abs/1607.03979v1
http://arxiv.org/pdf/1607.03979v1.pdf
[]
[]
[]
j3L_k1jDLS
https://paperswithcode.com/paper/deep-convolutional-neural-networks-with-merge
Deep Convolutional Neural Networks with Merge-and-Run Mappings
A deep residual network, built by stacking a sequence of residual blocks, is easy to train, because identity mappings skip residual branches and thus improve information flow. To further reduce the training difficulty, we present a simple network architecture, deep merge-and-run neural networks. The novelty lies in a modularized building block, merge-and-run block, which assembles residual branches in parallel through a merge-and-run mapping: Average the inputs of these residual branches (Merge), and add the average to the output of each residual branch as the input of the subsequent residual branch (Run), respectively. We show that the merge-and-run mapping is a linear idempotent function in which the transformation matrix is idempotent, and thus improves information flow, making training easy. In comparison to residual networks, our networks enjoy compelling advantages: they contain much shorter paths, and the width, i.e., the number of channels, is increased. We evaluate the performance on the standard recognition tasks. Our approach demonstrates consistent improvements over ResNets with the comparable setup, and achieves competitive results (e.g., $3.57\%$ testing error on CIFAR-$10$, $19.00\%$ on CIFAR-$100$, $1.51\%$ on SVHN).
1611.07718
http://arxiv.org/abs/1611.07718v2
http://arxiv.org/pdf/1611.07718v2.pdf
[]
[]
[]
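The merge-and-run mapping described above averages the inputs of parallel residual branches and adds that average to each branch output. A toy PyTorch block illustrating just that mapping (the branch definitions are placeholders, not the paper's architecture) could be:

```python
# Illustrative-only sketch of a merge-and-run block: average the branch inputs
# (merge) and add the average to each branch output (run). Branches are toy ones.
import torch
import torch.nn as nn

class MergeAndRunBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                      nn.BatchNorm2d(channels), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                      nn.BatchNorm2d(channels), nn.ReLU())

    def forward(self, x_a, x_b):
        merged = 0.5 * (x_a + x_b)            # merge: average the branch inputs
        out_a = self.branch_a(x_a) + merged   # run: add the average to each output
        out_b = self.branch_b(x_b) + merged
        return out_a, out_b

x = torch.randn(2, 16, 8, 8)
y_a, y_b = MergeAndRunBlock(16)(x, x)
print(y_a.shape, y_b.shape)
```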
jr5dBQiiDK
https://paperswithcode.com/paper/an-experimental-study-on-implicit-social
An experimental study on implicit social recommendation
Social recommendation problems have drawn a lot of attention recently due to the prevalence of social networking sites. The experiments in previous literature suggest that social information is very effective in improving traditional recommendation algorithms. However, explicit social information is not always available in most of the recommender systems, which limits the impact of social recommendation techniques. In this paper, we study the following two research problems: (1) In some systems without explicit social information, can we still improve recommender systems using implicit social information? (2) In the systems with explicit social information, can the performance of using implicit social information outperform that of using explicit social information? In order to answer these two questions, we conduct comprehensive experimental analysis on three recommendation datasets. The result indicates that: (1) Implicit user and item social information, including similar and dissimilar relationships, can be employed to improve traditional recommendation methods. (2) When comparing implicit social information with explicit social information, the performance of using implicit information is slightly worse. This study provides additional insights to social recommendation techniques, and also greatly widens the utility and spreads the impact of previous and upcoming social recommendation approaches.
null
https://wing.comp.nus.edu.sg/~wing.nus/sig/papers_ir/p73.pdf
https://wing.comp.nus.edu.sg/~wing.nus/sig/papers_ir/p73.pdf
[ "Recommendation Systems" ]
[]
[]
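The study above asks whether implicit social information can be mined from ratings alone. As a purely illustrative sketch (synthetic rating matrix, simple Pearson-style similarity, not the paper's method), one way to derive implicit "similar" and "dissimilar" user relationships is:

```python
# Hypothetical sketch of extracting implicit social relations from a rating matrix:
# for each user, the most similar and most dissimilar other users by a Pearson-style
# similarity of their rating vectors. The matrix is synthetic.
import numpy as np

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(6, 12)).astype(float)   # 6 users x 12 items

centered = ratings - ratings.mean(axis=1, keepdims=True)
norms = np.linalg.norm(centered, axis=1, keepdims=True)
similarity = (centered @ centered.T) / (norms @ norms.T)
np.fill_diagonal(similarity, np.nan)

for u in range(ratings.shape[0]):
    print(f"user {u}: implicit friend {np.nanargmax(similarity[u])}, "
          f"implicit foe {np.nanargmin(similarity[u])}")
```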
RQgt-c-QGI
https://paperswithcode.com/paper/nonparametric-sparse-hierarchical-models
Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images
We propose a novel hierarchical, nonlinear model that predicts brain activity in area V1 evoked by natural images. In the study reported here brain activity was measured by means of functional magnetic resonance imaging (fMRI), a noninvasive technique that provides an indirect measure of neural activity pooled over a small volume (~ 2mm cube) of brain tissue. Our model, which we call the SpAM V1 model, is based on the reasonable assumption that fMRI measurements reflect the (possibly nonlinearly) pooled, rectified output of a large population of simple and complex cells in V1. It has a hierarchical filtering stage that consists of three layers: model simple cells, model complex cells, and a third layer in which the complex cells are linearly pooled (called “pooled-complex” cells). The pooling stage then obtains the measured fMRI signals as a sparse additive model (SpAM) in which a sparse nonparametric (nonlinear) combination of model complex cell and model pooled-complex cell outputs are summed. Our results show that the SpAM V1 model predicts fMRI responses evoked by natural images better than a benchmark model that only provides linear pooling of model complex cells. Furthermore, the spatial receptive fields, frequency tuning and orientation tuning curves of the SpAM V1 model estimated for each voxel appears to be consistent with the known properties of V1, and with previous analyses of this data set. A visualization procedure applied to the SpAM V1 model shows that most of the nonlinear pooling consists of simple compressive or saturating nonlinearities.
null
http://papers.nips.cc/paper/3481-nonparametric-sparse-hierarchical-models-describe-v1-fmri-responses-to-natural-images
http://papers.nips.cc/paper/3481-nonparametric-sparse-hierarchical-models-describe-v1-fmri-responses-to-natural-images.pdf
[]
[]
[]
OQWhn_ChJk
https://paperswithcode.com/paper/representation-and-interchange-of-linguistic
Representation and Interchange of Linguistic Annotation. An In-Depth, Side-by-Side Comparison of Three Designs
For decades, most self-respecting linguistic engineering initiatives have designed and implemented custom representations for various layers of, for example, morphological, syntactic, and semantic analysis. Despite occasional efforts at harmonization or even standardization, our field today is blessed with a multitude of ways of encoding and exchanging linguistic annotations of these types, both at the levels of {`}abstract syntax{'}, naming choices, and of course file formats. To a large degree, it is possible to work within and across design plurality by conversion, and often there may be good reasons for divergent design reflecting differences in use. However, it is likely that some abstract commonalities across choices of representation are obscured by more superficial differences, and conversely there is no obvious procedure to tease apart what actually constitute contentful vs. mere technical divergences. In this study, we seek to conceptually align three representations for common types of morpho-syntactic analysis, pinpoint what in our view constitute contentful differences, and reflect on the underlying principles and specific requirements that led to individual choices. We expect that a more in-depth understanding of these choices across designs may led to increased harmonization, or at least to more informed design of future representations.
null
https://www.aclweb.org/anthology/W17-0808/
https://www.aclweb.org/anthology/W17-0808
[]
[]
[]
czaZYXZYwh
https://paperswithcode.com/paper/the-boosted-dc-algorithm-for-nonsmooth
The Boosted DC Algorithm for nonsmooth functions
The Boosted Difference of Convex functions Algorithm (BDCA) was recently proposed for minimizing smooth difference of convex (DC) functions. BDCA accelerates the convergence of the classical Difference of Convex functions Algorithm (DCA) thanks to an additional line search step. The purpose of this paper is twofold. Firstly, to show that this scheme can be generalized and successfully applied to certain types of nonsmooth DC functions, namely, those that can be expressed as the difference of a smooth function and a possibly nonsmooth one. Secondly, to show that there is complete freedom in the choice of the trial step size for the line search, which is something that can further improve its performance. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration, and that the corresponding objective value is monotonically decreasing and convergent. The global convergence and convergent rate of the iterations are obtained under the Kurdyka-Lojasiewicz property. Applications and numerical experiments for two problems in data science are presented, demonstrating that BDCA outperforms DCA. Specifically, for the Minimum Sum-of-Squares Clustering problem, BDCA was on average sixteen times faster than DCA, and for the Multidimensional Scaling problem, BDCA was three times faster than DCA.
1812.06070
https://arxiv.org/abs/1812.06070v2
https://arxiv.org/pdf/1812.06070v2.pdf
[]
[ "LINE" ]
[]
_8rjgNnA7I
https://paperswithcode.com/paper/the-d-ans-corpus-the-dublin-autonomous
The D-ANS corpus: the Dublin-Autonomous Nervous System corpus of biosignal and multimodal recordings of conversational speech
Biosignals, such as electrodermal activity (EDA) and heart rate, are increasingly being considered as potential data sources to provide information about the temporal fluctuations in affective experience during human interaction. This paper describes an English-speaking, multiple session corpus of small groups of people engaged in informal, unscripted conversation while wearing wireless, wrist-based EDA sensors. Additionally, one participant per recording session wore a heart rate monitor. This corpus was collected in order to observe potential interactions between various social and communicative phenomena and the temporal dynamics of the recorded biosignals. Here we describe the communicative context, technical set-up, synchronization process, and challenges in collecting and utilizing such data. We describe the segmentation and annotations to date, including laughter annotations, and how the research community can access and collaborate on this corpus now and in the future. We believe this corpus is particularly relevant to researchers interested in unscripted social conversation as well as to researchers with a specific interest in observing the dynamics of biosignals during informal social conversation rich with examples of laughter, conversational turn-taking, and non-task-based interaction.
null
https://www.aclweb.org/anthology/L14-1322/
http://www.lrec-conf.org/proceedings/lrec2014/pdf/374_Paper.pdf
[]
[]
[]
3qotoBW_vb
https://paperswithcode.com/paper/on-tighter-generalization-bound-for-deep
On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond
We establish a margin-based, data-dependent generalization error bound for a general family of deep neural networks in terms of the depth and width, as well as the Jacobian of the networks. By introducing a new characterization of the Lipschitz properties of the neural network family, we achieve significantly tighter generalization bounds than existing results. Moreover, we show that the generalization bound can be further improved for bounded losses. Aside from general feedforward deep neural networks, our results can be applied to derive new bounds for popular architectures, including convolutional neural networks (CNNs) and residual networks (ResNets). When achieving the same generalization errors as previous results, our bounds allow for the choice of larger parameter spaces of weight matrices, inducing potentially stronger expressive ability for neural networks. Numerical evaluation is also provided to support our theory.
1806.05159
https://arxiv.org/abs/1806.05159v4
https://arxiv.org/pdf/1806.05159v4.pdf
[]
[]
[]
2nBfdJ2xd0
https://paperswithcode.com/paper/avoiding-undesired-choices-using-intelligent
Avoiding Undesired Choices Using Intelligent Adaptive Systems
We propose a number of heuristics that can be used for identifying when intransitive choice behaviour is likely to occur in choice situations. We also suggest two methods for avoiding undesired choice behaviour, namely transparent communication and adaptive choice-set generation. We believe that these two ways can contribute to the avoidance of decision biases in choice situations that may often be regretted.
1404.3659
http://arxiv.org/abs/1404.3659v1
http://arxiv.org/pdf/1404.3659v1.pdf
[]
[]
[]
g-EGPltD3j
https://paperswithcode.com/paper/proactive-intention-recognition-for-joint
Proactive Intention Recognition for Joint Human-Robot Search and Rescue Missions through Monte-Carlo Planning in POMDP Environments
Proactively perceiving others' intentions is a crucial skill to effectively interact in unstructured, dynamic and novel environments. This work proposes a first step towards embedding this skill in support robots for search and rescue missions. Predicting the responders' intentions, indeed, will enable exploration approaches which will identify and prioritise areas that are more relevant for the responder and, thus, for the task, leading to the development of safer, more robust and efficient joint exploration strategies. More specifically, this paper presents an active intention recognition paradigm to perceive, even under sensory constraints, not only the target's position but also the first responder's movements, which can provide information on his/her intentions (e.g. reaching the position where he/she expects the target to be). This mechanism is implemented by employing an extension of Monte-Carlo-based planning techniques for partially observable environments, where the reward function is augmented with an entropy reduction bonus. We test in simulation several configurations of reward augmentation, both information theoretic and not, as well as belief state approximations and obtain substantial improvements over the basic approach.
1908.10125
https://arxiv.org/abs/1908.10125v1
https://arxiv.org/pdf/1908.10125v1.pdf
[ "Intent Detection" ]
[]
[]
5X0E8DebKs
https://paperswithcode.com/paper/high-dimensional-multivariate-forecasting
High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes
Predicting the dependencies between observations from multiple time series is critical for applications such as anomaly detection, financial risk management, causal analysis, or demand forecasting. However, the computational and numerical difficulties of estimating time-varying and high-dimensional covariance matrices often limit existing methods to handling at most a few hundred dimensions or require making strong assumptions on the dependence between series. We propose to combine an RNN-based time series model with a Gaussian copula process output model with a low-rank covariance structure to reduce the computational complexity and handle non-Gaussian marginal distributions. This makes it possible to drastically reduce the number of parameters and consequently allows the modeling of time-varying correlations of thousands of time series. We show on several real-world datasets that our method provides significant accuracy improvements over state-of-the-art baselines and perform an ablation study analyzing the contributions of the different components of our model.
1910.03002
https://arxiv.org/abs/1910.03002v2
https://arxiv.org/pdf/1910.03002v2.pdf
[ "Anomaly Detection", "Time Series" ]
[]
[]
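The key computational idea above is a low-rank-plus-diagonal covariance for the Gaussian copula output, which keeps the parameter count linear in the number of series. A rough numpy sketch of sampling from such a covariance without ever forming the full d x d matrix (all values are random placeholders) is:

```python
# Rough sketch (not the paper's model) of the low-rank-plus-diagonal covariance idea:
# sample z ~ N(mu, diag(d) + V V^T) in O(d*r) without building the d x d covariance.
import numpy as np

rng = np.random.default_rng(0)
d, r = 1000, 10                       # 1000 series, rank-10 covariance factor
V = 0.1 * rng.normal(size=(d, r))     # low-rank factor
diag = rng.uniform(0.5, 1.0, size=d)  # per-dimension noise variance
mu = np.zeros(d)

eps_low = rng.normal(size=r)
eps_diag = rng.normal(size=d)
z = mu + V @ eps_low + np.sqrt(diag) * eps_diag   # covariance of z is V V^T + diag(diag)
print(z.shape)
```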
bNhsiwavuN
https://paperswithcode.com/paper/local-algorithms-for-interactive-clustering
Local algorithms for interactive clustering
We study the design of interactive clustering algorithms for data sets satisfying natural stability assumptions. Our algorithms start with any initial clustering and only make local changes in each step; both are desirable features in many applications. We show that in this constrained setting one can still design provably efficient algorithms that produce accurate clusterings. We also show that our algorithms perform well on real-world data.
1312.6724
http://arxiv.org/abs/1312.6724v3
http://arxiv.org/pdf/1312.6724v3.pdf
[]
[]
[]
CtZVVzfXJs
https://paperswithcode.com/paper/a-bag-of-visual-words-approach-for-symbols
A Bag of Visual Words Approach for Symbols-Based Coarse-Grained Ancient Coin Classification
The field of Numismatics provides the names and descriptions of the symbols minted on the ancient coins. Classification of the ancient coins aims at assigning a given coin to its issuer. Various issuers used various symbols for their coins. We propose to use these symbols for a framework that will coarsely classify the ancient coins. Bag of visual words (BoVWs) is a well established visual recognition technique applied to various problems in computer vision like object and scene recognition. Improvements have been made by incorporating the spatial information to this technique. We apply the BoVWs technique to our problem and use three symbols for coarse-grained classification. We use rectangular tiling, log-polar tiling and circular tiling to incorporate spatial information to BoVWs. Experimental results show that the circular tiling proves superior to the rest of the methods for our problem.
1304.6192
http://arxiv.org/abs/1304.6192v1
http://arxiv.org/pdf/1304.6192v1.pdf
[ "Scene Recognition" ]
[]
[]
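The coin-classification paper adds spatial information to bag-of-visual-words via circular tiling. A hedged sketch of that descriptor, assuming keypoint locations and visual-word assignments are already available (here they are synthetic), is:

```python
# Sketch under assumptions: spatially-aware bag-of-visual-words with circular tiling,
# i.e. one visual-word histogram per concentric ring around the coin centre,
# concatenated into the final descriptor. Keypoints and word ids are synthetic.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_rings, img_size = 50, 3, 256
keypoints = rng.uniform(0, img_size, size=(200, 2))   # (x, y) keypoint locations
word_ids = rng.integers(0, vocab_size, size=200)      # assigned visual words

centre = np.array([img_size / 2, img_size / 2])
radii = np.linalg.norm(keypoints - centre, axis=1)
ring_edges = np.linspace(0, radii.max() + 1e-9, n_rings + 1)
ring_idx = np.digitize(radii, ring_edges[1:-1])       # ring index per keypoint

descriptor = np.concatenate([
    np.bincount(word_ids[ring_idx == r], minlength=vocab_size)
    for r in range(n_rings)
])
print(descriptor.shape)   # (n_rings * vocab_size,)
```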
BGX9kBCG3U
https://paperswithcode.com/paper/classification-of-diabetic-retinopathy-via
Classification of Diabetic Retinopathy via Fundus Photography: Utilization of Deep Learning Approaches to Speed up Disease Detection
In this paper, we propose two distinct solutions to the problem of Diabetic Retinopathy (DR) classification. In the first approach, we introduce a shallow neural network architecture. This model performs well on classifying the most frequent classes but fails at classifying the less frequent ones. In the second approach, we use transfer learning to re-train the last modified layer of a very deep neural network to improve the generalization ability of the model to the less frequent classes. Our results demonstrate the superior ability of transfer learning in DR classification of less frequent classes compared to the shallow neural network.
2007.09478
https://arxiv.org/abs/2007.09478v1
https://arxiv.org/pdf/2007.09478v1.pdf
[ "Transfer Learning" ]
[]
[]
BgXOUem8lY
https://paperswithcode.com/paper/dynamic-models-applied-to-value-learning-in
Dynamic Models Applied to Value Learning in Artificial Intelligence
Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently and critically-reflexively, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted behaviors of intelligent agents and systems, and at the same time specifying what we want such systems to do, especially when we look for the possibility of intelligent agents acting in several domains over the long term. It is of utmost importance that artificial intelligent agents have their values aligned with human values, given the fact that we cannot expect an AI to develop human moral values simply because of its intelligence, as discussed in the Orthogonality Thesis. Perhaps this difficulty comes from the way we are addressing the problem of expressing objectives, values, and ends, using representational cognitive methods. A solution to this problem would be the dynamic approach proposed by Dreyfus, whose phenomenological philosophy shows that the human experience of being-in-the-world in several aspects is not well represented by the symbolic or connectionist cognitive method, especially in regards to the question of learning values. A possible approach to this problem would be to use theoretical models such as SED (situated embodied dynamics) to address the values learning problem in AI.
2005.05538
https://arxiv.org/abs/2005.05538v3
https://arxiv.org/pdf/2005.05538v3.pdf
[]
[]
[]
e6_tUzSbp4
https://paperswithcode.com/paper/convolutional-recurrent-neural-networks-for-2
Convolutional Recurrent Neural Networks for Small-Footprint Keyword Spotting
Keyword spotting (KWS) constitutes a major component of human-technology interfaces. Maximizing the detection accuracy at a low false alarm (FA) rate, while minimizing the footprint size, latency and complexity are the goals for KWS. Towards achieving them, we study Convolutional Recurrent Neural Networks (CRNNs). Inspired by large-scale state-of-the-art speech recognition systems, we combine the strengths of convolutional layers and recurrent layers to exploit local structure and long-range context. We analyze the effect of architecture parameters, and propose training strategies to improve performance. With only ~230k parameters, our CRNN model yields acceptably low latency, and achieves 97.71% accuracy at 0.5 FA/hour for 5 dB signal-to-noise ratio.
1703.05390
http://arxiv.org/abs/1703.05390v3
http://arxiv.org/pdf/1703.05390v3.pdf
[ "Keyword Spotting", "Small-Footprint Keyword Spotting", "Speech Recognition" ]
[]
[]
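The CRNN described above combines convolutional and recurrent layers for small-footprint keyword spotting. The PyTorch sketch below is only a schematic stand-in for that family of models, not the paper's ~230k-parameter architecture:

```python
# Schematic sketch of a convolutional recurrent network for keyword spotting:
# a convolution over the spectrogram, a GRU over time, and a per-utterance classifier.
# Layer sizes are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_mels: int = 40, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=16 * (n_mels // 2), hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, spec):                  # spec: (batch, 1, time, n_mels)
        z = self.conv(spec)                   # (batch, 16, time/2, n_mels/2)
        b, c, t, f = z.shape
        z = z.permute(0, 2, 1, 3).reshape(b, t, c * f)
        _, h = self.gru(z)
        return self.fc(h[-1])                 # logits per utterance

logits = TinyCRNN()(torch.randn(8, 1, 100, 40))
print(logits.shape)                           # torch.Size([8, 2])
```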
tFXyw-OuS6
https://paperswithcode.com/paper/accelerated-inference-for-latent-variable
Accelerated Parallel Non-conjugate Sampling for Bayesian Non-parametric Models
Inference of latent feature models in the Bayesian nonparametric setting is generally difficult, especially in high dimensional settings, because it usually requires proposing features from some prior distribution. In special cases, where the integration is tractable, we could sample new feature assignments according to a predictive likelihood. However, this still may not be efficient in high dimensions. We present a novel method to accelerate the mixing of latent variable model inference by proposing feature locations from the data, as opposed to the prior. First, we introduce our accelerated feature proposal mechanism that we will show is a valid Bayesian inference algorithm and next we propose an approximate inference strategy to perform accelerated inference in parallel. This sampling method is efficient for proper mixing of the Markov chain Monte Carlo sampler, computationally attractive, and is theoretically guaranteed to converge to the posterior distribution as its limiting distribution.
1705.07178
https://arxiv.org/abs/1705.07178v4
https://arxiv.org/pdf/1705.07178v4.pdf
[ "Bayesian Inference" ]
[]
[]
rVNkw7fy28
https://paperswithcode.com/paper/matching-pursuit-lasso-part-ii-applications
Matching Pursuit LASSO Part II: Applications and Sparse Recovery over Batch Signals
In Part I \cite{TanPMLPart1}, a Matching Pursuit LASSO ({MPL}) algorithm has been presented for solving large-scale sparse recovery (SR) problems. In this paper, we present a subspace search to further improve the performance of MPL, and then continue to address another major challenge of SR -- batch SR with many signals, a consideration which is absent from most previous $\ell_1$-norm methods. As a result, a batch-mode {MPL} is developed to vastly speed up sparse recovery of many signals simultaneously. Comprehensive numerical experiments on compressive sensing and face recognition tasks demonstrate the superior performance of MPL and BMPL over other methods considered in this paper, in terms of sparse recovery ability and efficiency. In particular, BMPL is up to 400 times faster than existing $\ell_1$-norm methods considered to be state-of-the-art.
1302.5010
http://arxiv.org/abs/1302.5010v2
http://arxiv.org/pdf/1302.5010v2.pdf
[ "Compressive Sensing", "Face Recognition" ]
[]
[]
49RNCrp1KA
https://paperswithcode.com/paper/the-distribution-family-of-similarity
The Distribution Family of Similarity Distances
Assessing similarity between features is a key step in object recognition and scene categorization tasks. We argue that knowledge on the distribution of distances generated by similarity functions is crucial in deciding whether features are similar or not. Intuitively one would expect that similarities between features could arise from any distribution. In this paper, we will derive the contrary, and report the theoretical result that $L_p$-norms --a class of commonly applied distance metrics-- from one feature vector to other vectors are Weibull-distributed if the feature values are correlated and non-identically distributed. Besides these assumptions being realistic for images, we experimentally show them to hold for various popular feature extraction algorithms, for a diverse range of images. This fundamental insight opens new directions in the assessment of feature similarity, with projected improvements in object and scene recognition algorithms. Erratum: The authors of paper have declared that they have become convinced that the reasoning in the reference is too simple as a proof of their claims. As a consequence, they withdraw their theorems.
null
http://papers.nips.cc/paper/3367-the-distribution-family-of-similarity-distances
http://papers.nips.cc/paper/3367-the-distribution-family-of-similarity-distances.pdf
[ "Object Recognition", "Scene Recognition" ]
[]
[]
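The theoretical claim above is that L_p distances between correlated, non-identically distributed feature vectors follow a Weibull distribution. A quick, assumption-laden way to probe this numerically with scipy (synthetic features, L2 distances only) is:

```python
# Sketch: generate correlated, non-identically distributed feature vectors, compute
# L2 distances from one reference vector to the rest, and fit a Weibull distribution
# to those distances, as the paper predicts. Data are synthetic.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
dim, n_vectors = 128, 2000
mixing = rng.normal(size=(dim, dim))                 # induces correlation
scales = rng.uniform(0.5, 2.0, size=dim)             # non-identical per-dimension scales
features = (rng.normal(size=(n_vectors, dim)) @ mixing) * scales

reference = features[0]
distances = np.linalg.norm(features[1:] - reference, axis=1)

shape, loc, scale = weibull_min.fit(distances)
print(f"fitted Weibull shape={shape:.2f}, scale={scale:.2f}")
```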
RLfk9OdxY3
https://paperswithcode.com/paper/projectron-a-shallow-and-interpretable
Projectron -- A Shallow and Interpretable Network for Classifying Medical Images
This paper introduces the `Projectron' as a new neural network architecture that uses Radon projections to both classify and represent medical images. The motivation is to build shallow networks which are more interpretable in the medical imaging domain. Radon transform is an established technique that can reconstruct images from parallel projections. The Projectron first applies global Radon transform to each image using equidistant angles and then feeds these transformations for encoding to a single layer of neurons followed by a layer of suitable kernels to facilitate a linear separation of projections. Finally, the Projectron provides the output of the encoding as an input to two more layers for final classification. We validate the Projectron on five publicly available datasets, a general dataset (namely MNIST) and four medical datasets (namely Emphysema, IDC, IRMA, and Pneumonia). The results are encouraging as we compared the Projectron's performance against MLPs with raw images and Radon projections as inputs, respectively. Experiments clearly demonstrate the potential of the proposed Projectron for representing/classifying medical images.
1904.00740
http://arxiv.org/abs/1904.00740v1
http://arxiv.org/pdf/1904.00740v1.pdf
[]
[]
[]
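The Projectron feeds global Radon projections at equidistant angles into a shallow network. The sketch below is an approximation under stated assumptions: scikit-image's radon() plus a small scikit-learn MLP on the digits dataset, which stands in for the medical images used in the paper:

```python
# Hedged sketch of the Projectron's front end: a global Radon transform at equidistant
# angles feeding a shallow classifier. The digits dataset replaces the medical images.
import numpy as np
from skimage.transform import radon
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()
angles = np.linspace(0.0, 180.0, 16, endpoint=False)   # equidistant projection angles

def radon_features(img_flat):
    sinogram = radon(img_flat.reshape(8, 8), theta=angles, circle=False)
    return sinogram.ravel()

X = np.array([radon_features(x) for x in digits.data])
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy on Radon projections: {clf.score(X_te, y_te):.2f}")
```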
pvAWHtmskU
https://paperswithcode.com/paper/cross-modal-health-state-estimation
Cross-Modal Health State Estimation
Individuals create and consume more diverse data about themselves today than any time in history. Sources of this data include wearable devices, images, social media, geospatial information and more. A tremendous opportunity rests within cross-modal data analysis that leverages existing domain knowledge methods to understand and guide human health. Especially in chronic diseases, current medical practice uses a combination of sparse hospital based biological metrics (blood tests, expensive imaging, etc.) to understand the evolving health status of an individual. Future health systems must integrate data created at the individual level to better understand health status perpetually, especially in a cybernetic framework. In this work we fuse multiple user created and open source data streams along with established biomedical domain knowledge to give two types of quantitative state estimates of cardiovascular health. First, we use wearable devices to calculate cardiorespiratory fitness (CRF), a known quantitative leading predictor of heart disease which is not routinely collected in clinical settings. Second, we estimate inherent genetic traits, living environmental risks, circadian rhythm, and biological metrics from a diverse dataset. Our experimental results on 24 subjects demonstrate how multi-modal data can provide personalized health insight. Understanding the dynamic nature of health status will pave the way for better health based recommendation engines, better clinical decision making and positive lifestyle changes.
1808.06462
http://arxiv.org/abs/1808.06462v2
http://arxiv.org/pdf/1808.06462v2.pdf
[ "Decision Making" ]
[]
[]
dl0REi2Hpr
https://paperswithcode.com/paper/ctap-a-web-based-tool-supporting-automatic
CTAP: A Web-Based Tool Supporting Automatic Complexity Analysis
Informed by research on readability and language acquisition, computational linguists have developed sophisticated tools for the analysis of linguistic complexity. While some tools are starting to become accessible on the web, there still is a disconnect between the features that can in principle be identified based on state-of-the-art computational linguistic analysis, and the analyses a second language acquisition researcher, teacher, or textbook writer can readily obtain and visualize for their own collection of texts. This short paper presents a web-based tool development that aims to meet this challenge. The Common Text Analysis Platform (CTAP) is designed to support fully configurable linguistic feature extraction for a wide range of complexity analyses. It features a user-friendly interface, modularized and reusable analysis component integration, and flexible corpus and feature management. Building on the Unstructured Information Management framework (UIMA), CTAP readily supports integration of state-of-the-art NLP and complexity feature extraction maintaining modularization and reusability. CTAP thereby aims at providing a common platform for complexity analysis, encouraging research collaboration and sharing of feature extraction components{---}to jointly advance the state-of-the-art in complexity analysis in a form that readily supports real-life use by ordinary users.
null
https://www.aclweb.org/anthology/W16-4113/
https://www.aclweb.org/anthology/W16-4113
[ "Language Acquisition" ]
[]
[]
nQKFg0ND34
https://paperswithcode.com/paper/plasticity-enhanced-domain-wall-mtj-neural
Plasticity-Enhanced Domain-Wall MTJ Neural Networks for Energy-Efficient Online Learning
Machine learning implements backpropagation via abundant training samples. We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system consists of unsupervised (clustering) as well as supervised sub-systems, and generalizes quickly (with few samples). We demonstrate interactions between physical properties of this device and optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach, as the learning budget stays below 20 $\mu J$ even for large tasks used typically in machine learning.
2003.02357
https://arxiv.org/abs/2003.02357v1
https://arxiv.org/pdf/2003.02357v1.pdf
[]
[]
[]
CK25pituoU
https://paperswithcode.com/paper/mirror-surface-reconstruction-under-an
Mirror Surface Reconstruction Under an Uncalibrated Camera
This paper addresses the problem of mirror surface reconstruction, and a solution based on observing the reflections of a moving reference plane on the mirror surface is proposed. Unlike previous approaches which require tedious work to calibrate the camera, our method can recover both the camera intrinsics and extrinsics together with the mirror surface from reflections of the reference plane under at least three unknown distinct poses. Our previous work has demonstrated that 3D poses of the reference plane can be registered in a common coordinate system using reflection correspondences established across images. This leads to a bunch of registered 3D lines formed from the reflection correspondences. Given these lines, we first derive an analytical solution to recover the camera projection matrix through estimating the line projection matrix. We then optimize the camera projection matrix by minimizing reprojection errors computed based on a cross-ratio formulation. The mirror surface is finally reconstructed based on the optimized cross-ratio constraint. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and accuracy of our method.
null
http://openaccess.thecvf.com/content_cvpr_2016/html/Han_Mirror_Surface_Reconstruction_CVPR_2016_paper.html
http://openaccess.thecvf.com/content_cvpr_2016/papers/Han_Mirror_Surface_Reconstruction_CVPR_2016_paper.pdf
[]
[ "LINE" ]
[]
eAvmEeylF_
https://paperswithcode.com/paper/write-a-classifier-predicting-visual
Write a Classifier: Predicting Visual Classifiers from Unstructured Text
People typically learn through exposure to visual concepts associated with linguistic descriptions. For instance, teaching visual object categories to children is often accompanied by descriptions in text or speech. In a machine learning context, these observations motivate us to ask whether this learning process could be computationally modeled to learn visual classifiers. More specifically, the main question of this work is how to utilize purely textual descriptions of visual classes, with no training images, to learn explicit visual classifiers for them. We propose and investigate two baseline formulations, based on regression and domain transfer, that predict a linear classifier. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the parameters of a linear classifier. We also propose generic kernelized models where a kernel classifier is predicted in the form defined by the representer theorem. The kernelized models allow defining and utilizing any two RKHS (Reproducing Kernel Hilbert Space) kernel functions in the visual space and the text space, respectively. We finally propose a kernel function between unstructured text descriptions that builds on distributional semantics, which shows an advantage in our setting and could be useful for other applications. We applied all the studied models to predict visual classifiers on two fine-grained and challenging categorization datasets (CU Birds and Flower Datasets), and the results indicate successful predictions of our final model over several baselines that we designed.
1601.00025
http://arxiv.org/abs/1601.00025v2
http://arxiv.org/pdf/1601.00025v2.pdf
[ "Transfer Learning" ]
[]
[]
EZO34JeWW0
https://paperswithcode.com/paper/higher-order-projected-power-iterations-for
Higher-order Projected Power Iterations for Scalable Multi-Matching
The matching of multiple objects (e.g. shapes or images) is a fundamental problem in vision and graphics. In order to robustly handle ambiguities, noise and repetitive patterns in challenging real-world settings, it is essential to take geometric consistency between points into account. Computationally, the multi-matching problem is difficult. It can be phrased as simultaneously solving multiple (NP-hard) quadratic assignment problems (QAPs) that are coupled via cycle-consistency constraints. The main limitations of existing multi-matching methods are that they either ignore geometric consistency and thus have limited robustness, or they are restricted to small-scale problems due to their (relatively) high computational cost. We address these shortcomings by introducing a Higher-order Projected Power Iteration method, which is (i) efficient and scales to tens of thousands of points, (ii) straightforward to implement, (iii) able to incorporate geometric consistency, (iv) guarantees cycle-consistent multi-matchings, and (v) comes with theoretical convergence guarantees. Experimentally we show that our approach is superior to existing methods.
1811.10541
http://arxiv.org/abs/1811.10541v2
http://arxiv.org/pdf/1811.10541v2.pdf
[]
[]
[]
iGMWDmaGLR
https://paperswithcode.com/paper/optimal-bipartite-network-clustering
Optimal Bipartite Network Clustering
We study bipartite community detection in networks, or more generally the network biclustering problem. We present a fast two-stage procedure based on spectral initialization followed by the application of a pseudo-likelihood classifier twice. Under mild regularity conditions, we establish the weak consistency of the procedure (i.e., the convergence of the misclassification rate to zero) under a general bipartite stochastic block model. We show that the procedure is optimal in the sense that it achieves the optimal convergence rate that is achievable by a biclustering oracle, adaptively over the whole class, up to constants. This is further formalized by deriving a minimax lower bound over a class of biclustering problems. The optimal rate we obtain sharpens some of the existing results and generalizes others to a wide regime of average degree growth, from sparse networks with average degrees growing arbitrarily slowly to fairly dense networks with average degrees of order $\sqrt{n}$. As a special case, we recover the known exact recovery threshold in the $\log n$ regime of sparsity. To obtain the consistency result, as part of the provable version of the algorithm, we introduce a sub-block partitioning scheme that is also computationally attractive, allowing for distributed implementation of the algorithm without sacrificing optimality. The provable algorithm is derived from a general class of pseudo-likelihood biclustering algorithms that employ simple EM type updates. We show the effectiveness of this general class by numerical simulations.
1803.06031
http://arxiv.org/abs/1803.06031v2
http://arxiv.org/pdf/1803.06031v2.pdf
[ "Community Detection" ]
[]
[]
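The two-stage procedure above starts from a spectral initialization before the pseudo-likelihood refinement. A minimal sketch of only that first stage, on a synthetic bipartite block model (not the paper's provable variant with sub-block partitioning), is:

```python
# Minimal sketch, assumptions throughout: spectral initialization for bipartite
# biclustering via k-means on the scaled singular vectors of the biadjacency matrix.
# The block-model data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_rows, n_cols, k = 200, 150, 2
row_truth = rng.integers(0, k, n_rows)
col_truth = rng.integers(0, k, n_cols)
probs = np.array([[0.20, 0.05], [0.05, 0.15]])        # within/between block densities
A = (rng.random((n_rows, n_cols)) < probs[row_truth][:, col_truth]).astype(float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
row_init = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U[:, :k] * s[:k])
col_init = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Vt[:k].T * s[:k])
print(row_init[:10], col_init[:10])
```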
lZWcIK9FE9
https://paperswithcode.com/paper/a-robust-visual-system-for-small-target
A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds
Monitoring small objects against cluttered moving backgrounds is a huge challenge to future robotic vision systems. As a source of inspiration, insects are quite apt at searching for mates and tracking prey -- which always appear as small dim speckles in the visual field. The exquisite sensitivity of insects for small target motion, as revealed recently, is coming from a class of specific neurons called small target motion detectors (STMDs). Although a few STMD-based models have been proposed, these existing models only use motion information for small target detection and cannot discriminate small targets from small-target-like background features (named as fake features). To address this problem, this paper proposes a novel visual system model (STMD+) for small target motion detection, which is composed of four subsystems -- ommatidia, motion pathway, contrast pathway and mushroom body. Compared to existing STMD-based models, the additional contrast pathway extracts directional contrast from luminance signals to eliminate false positive background motion. The directional contrast and the extracted motion information by the motion pathway are integrated in the mushroom body for small target discrimination. Extensive experiments showed the significant and consistent improvements of the proposed visual system model over existing STMD-based models against fake features.
1904.04363
http://arxiv.org/abs/1904.04363v1
http://arxiv.org/pdf/1904.04363v1.pdf
[ "Motion Detection" ]
[]
[]
38Od8ZMYiI
https://paperswithcode.com/paper/capturing-the-diversity-of-biological-tuning
Capturing the diversity of biological tuning curves using generative adversarial networks
Tuning curves characterizing the response selectivities of biological neurons often exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or random connectivity also give rise to diverse tuning curves. However, a general framework for fitting such models to experimentally measured tuning curves is lacking. We address this problem by proposing to view mechanistic network models as generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle for fitting such models is that their likelihood function is not explicitly available or is highly intractable to compute. Recent advances in machine learning provide ways for fitting generative models without the need to evaluate the likelihood and its gradient. Generative Adversarial Networks (GAN) provide one such framework which has been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic models in theoretical neuroscience to datasets of measured tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, e.g. the biophysical parameters of individual cells or the particular realization of the full synaptic connectivity matrix, and directly learns model parameters which characterize the statistics of connectivity or of single-cell properties. Another strength of this approach is that it fits the entire, joint distribution of experimental tuning curves, instead of matching a few summary statistics picked a priori by the user. More generally, this framework opens the door to fitting theoretically motivated dynamical network models directly to simultaneously or non-simultaneously recorded neural responses.
1707.04582
http://arxiv.org/abs/1707.04582v3
http://arxiv.org/pdf/1707.04582v3.pdf
[]
[]
[]
pWpFZEmHIg
https://paperswithcode.com/paper/robustness-of-sentence-length-measures-in
Robustness of sentence length measures in written texts
Hidden structural patterns in written texts have been subject of considerable research in the last decades. In particular, mapping a text into a time series of sentence lengths is a natural way to investigate text structure. Typically, sentence length has been quantified by using measures based on the number of words and the number of characters, but other variations are possible. To quantify the robustness of different sentence length measures, we analyzed a database containing about five hundred books in English. For each book, we extracted six distinct measures of sentence length, including number of words and number of characters (taking into account lemmatization and stop words removal). We compared these six measures for each book by using i) Pearson's coefficient to investigate linear correlations; ii) Kolmogorov--Smirnov test to compare distributions; and iii) detrended fluctuation analysis (DFA) to quantify auto-correlations. We have found that all six measures exhibit very similar behavior, suggesting that sentence length is a robust measure related to text structure.
1805.01460
http://arxiv.org/abs/1805.01460v1
http://arxiv.org/pdf/1805.01460v1.pdf
[ "Lemmatization", "Time Series" ]
[]
[]
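The study above compares sentence-length measures using Pearson correlation, Kolmogorov-Smirnov tests and detrended fluctuation analysis. A tiny illustrative snippet for the first two comparisons (a real analysis would run over entire books, and DFA is omitted here) is:

```python
# Hedged sketch: compare two sentence-length measures (words vs. characters) with
# Pearson correlation and a two-sample KS test, on a toy text instead of a book.
from scipy.stats import pearsonr, ks_2samp

text = ("Hidden patterns in written texts have long been studied. "
        "Sentence length is one natural measure. "
        "It can be counted in words or in characters. "
        "Both usually tell a very similar story about structure.")

sentences = [s.strip() for s in text.split(". ") if s.strip()]
len_words = [len(s.split()) for s in sentences]
len_chars = [len(s.replace(" ", "")) for s in sentences]

r, p = pearsonr(len_words, len_chars)
ks_stat, ks_p = ks_2samp(len_words, len_chars)
print(f"Pearson r={r:.2f} (p={p:.3f}); KS statistic={ks_stat:.2f} (p={ks_p:.3f})")
```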
6rqCRQPqTk
https://paperswithcode.com/paper/a-study-of-compositional-generalization-in
A Study of Compositional Generalization in Neural Models
Compositional and relational learning is a hallmark of human intelligence, but one which presents challenges for neural models. One difficulty in the development of such models is the lack of benchmarks with clear compositional and relational task structure on which to systematically evaluate them. In this paper, we introduce an environment called ConceptWorld, which enables the generation of images from compositional and relational concepts, defined using a logical domain specific language. We use it to generate images for a variety of compositional structures: 2x2 squares, pentominoes, sequences, scenes involving these objects, and other more complex concepts. We perform experiments to test the ability of standard neural architectures to generalize on relations with compositional arguments as the compositional depth of those arguments increases and under substitution. We compare standard neural networks such as MLP, CNN and ResNet, as well as state-of-the-art relational networks including WReN and PrediNet in a multi-class image classification setting. For simple problems, all models generalize well to close concepts but struggle with longer compositional chains. For more complex tests involving substitutivity, all models struggle, even with short chains. In highlighting these difficulties and providing an environment for further experimentation, we hope to encourage the development of models which are able to generalize effectively in compositional, relational domains.
2006.09437
https://arxiv.org/abs/2006.09437v2
https://arxiv.org/pdf/2006.09437v2.pdf
[ "Image Classification", "Relational Reasoning" ]
[ "1x1 Convolution", "ReLU", "Bottleneck Residual Block", "Batch Normalization", "Average Pooling", "Max Pooling", "Global Average Pooling", "Residual Connection", "Kaiming Initialization", "Convolution", "Residual Block", "ResNet" ]
[]
DAUd-HbzP1
https://paperswithcode.com/paper/learning-like-humans-with-deep-symbolic
Learning like humans with Deep Symbolic Networks
We introduce the Deep Symbolic Network (DSN) model, which aims at becoming the white-box version of Deep Neural Networks (DNN). The DSN model provides a simple, universal yet powerful structure, similar to DNN, to represent any knowledge of the world, which is transparent to humans. The conjecture behind the DSN model is that any type of real world objects sharing enough common features are mapped into human brains as a symbol. Those symbols are connected by links, representing the composition, correlation, causality, or other relationships between them, forming a deep, hierarchical symbolic network structure. Powered by such a structure, the DSN model is expected to learn like humans, because of its unique characteristics. First, it is universal, using the same structure to store any knowledge. Second, it can learn symbols from the world and construct the deep symbolic networks automatically, by utilizing the fact that real world objects have been naturally separated by singularities. Third, it is symbolic, with the capacity of performing causal deduction and generalization. Fourth, the symbols and the links between them are transparent to us, and thus we will know what it has learned or not - which is the key for the security of an AI system. Fifth, its transparency enables it to learn with relatively small data. Sixth, its knowledge can be accumulated. Last but not least, it is more friendly to unsupervised learning than DNN. We present the details of the model, the algorithm powering its automatic learning ability, and describe its usefulness in different use cases. The purpose of this paper is to generate broad interest to develop it within an open source project centered on the Deep Symbolic Network (DSN) model towards the development of general AI.
1707.03377
http://arxiv.org/abs/1707.03377v2
http://arxiv.org/pdf/1707.03377v2.pdf
[ "Small Data Image Classification" ]
[]
[]
yMZ18WUzAv
https://paperswithcode.com/paper/topological-defects-and-confinement-with
Topological defects and confinement with machine learning: the case of monopoles in compact electrodynamics
We investigate the advantages of machine learning techniques to recognize the dynamics of topological objects in quantum field theories. We consider the compact U(1) gauge theory in three spacetime dimensions as the simplest example of a theory that exhibits confinement and mass gap phenomena generated by monopoles. We train a neural network with a generated set of monopole configurations to distinguish between confinement and deconfinement phases, from which it is possible to determine the deconfinement transition point and to predict several observables. The model uses a supervised learning approach and treats the monopole configurations as three-dimensional images (holograms). We show that the model can determine the transition temperature with accuracy, which depends on the criteria implemented in the algorithm. More importantly, we train the neural network with configurations from a single lattice size before making predictions for configurations from other lattice sizes, from which a reliable estimation of the critical temperatures is obtained.
2006.09113
https://arxiv.org/abs/2006.09113v1
https://arxiv.org/pdf/2006.09113v1.pdf
[]
[]
[]
G79vNnKuO3
https://paperswithcode.com/paper/machine-learning-and-big-scientific-data
Machine Learning and Big Scientific Data
This paper reviews some of the challenges posed by the huge growth of experimental data generated by the new generation of large-scale experiments at UK national facilities at the Rutherford Appleton Laboratory site at Harwell near Oxford. Such "Big Scientific Data" comes from the Diamond Light Source and Electron Microscopy Facilities, the ISIS Neutron and Muon Facility, and the UK's Central Laser Facility. Increasingly, scientists now need to use advanced machine learning and other AI technologies both to automate parts of the data pipeline and also to help find new scientific discoveries in the analysis of their data. For commercially important applications, such as object recognition, natural language processing and automatic translation, deep learning has made dramatic breakthroughs. Google's DeepMind has now also used deep learning technology to develop their AlphaFold tool to make predictions for protein folding. Remarkably, they have been able to achieve some spectacular results for this specific scientific problem. Can deep learning be similarly transformative for other scientific problems? After a brief review of some initial applications of machine learning at the Rutherford Appleton Laboratory, we focus on challenges and opportunities for AI in advancing materials science. Finally, we discuss the importance of developing some realistic machine learning benchmarks using Big Scientific Data coming from a number of different scientific domains. We conclude with some initial examples of our "SciML" benchmark suite and of the research challenges these benchmarks will enable.
1910.07631
https://arxiv.org/abs/1910.07631v1
https://arxiv.org/pdf/1910.07631v1.pdf
[ "Electron Microscopy", "Object Recognition" ]
[]
[]
hLlpBFykda
https://paperswithcode.com/paper/fleet-size-and-mix-split-delivery-vehicle
Fleet Size and Mix Split-Delivery Vehicle Routing
In the classic Vehicle Routing Problem (VRP) a fleet of vehicles has to visit a set of customers while minimising the operations' costs. We study a rich variant of the VRP featuring split deliveries, a heterogeneous fleet, and vehicle-commodity incompatibility constraints. Our goal is twofold: define the cheapest routing and the most adequate fleet. To do so, we split the problem into two interdependent components: a fleet design component and a routing component. First, we define two Mixed Integer Programming (MIP) formulations for each component. Then we discuss several improvements in the form of valid cuts and symmetry breaking constraints. The main contribution of this paper is a comparison of the four resulting models for this Rich VRP. We highlight their strengths and weaknesses with extensive experiments. Finally, we explore a lightweight integration with Constraint Programming (CP). We use a fast CP model which gives good solutions and use the solution to warm-start our models.
1612.01691
http://arxiv.org/abs/1612.01691v1
http://arxiv.org/pdf/1612.01691v1.pdf
[]
[]
[]
X39c-oXt8O
https://paperswithcode.com/paper/real-time-plant-health-assessment-via
Real-time Plant Health Assessment Via Implementing Cloud-based Scalable Transfer Learning On AWS DeepLens
In the Agriculture sector, control of plant leaf diseases is crucial as it influences the quality and production of plant species with an impact on the economy of any country. Therefore, automated identification and classification of plant leaf disease at an early stage is essential to reduce economic loss and to conserve the specific species. Previously, to detect and classify plant leaf disease, various Machine Learning models have been proposed; however, they lack usability due to hardware incompatibility, limited scalability and inefficiency in practical usage. Our proposed DeepLens Classification and Detection Model (DCDM) approach deals with such limitations by introducing automated detection and classification of the leaf diseases in fruits (apple, grapes, peach and strawberry) and vegetables (potato and tomato) via scalable transfer learning on AWS SageMaker and importing it on AWS DeepLens for real-time practical usability. Cloud integration provides scalability and ubiquitous access to our approach. Our experiments on an extensive image data set of healthy and unhealthy leaves of fruits and vegetables showed an accuracy of 98.78% with a real-time diagnosis of plant leaf diseases. We used forty thousand images for training the deep learning model and then evaluated it on ten thousand images. The process of testing an image for disease diagnosis and classification using AWS DeepLens on average took 0.349s, providing disease information to the user in less than a second.
2009.04110
https://arxiv.org/abs/2009.04110v2
https://arxiv.org/pdf/2009.04110v2.pdf
[ "Transfer Learning" ]
[]
[]
2OoHmqu-bf
https://paperswithcode.com/paper/variational-inference-over-non-differentiable
Variational Inference over Non-differentiable Cardiac Simulators using Bayesian Optimization
Performing inference over simulators is generally intractable as their runtime means we cannot compute a marginal likelihood. We develop a likelihood-free inference method to infer parameters for a cardiac simulator, which replicates electrical flow through the heart to the body surface. We improve the fit of a state-of-the-art simulator to an electrocardiogram (ECG) recorded from a real patient.
1712.03353
http://arxiv.org/abs/1712.03353v1
http://arxiv.org/pdf/1712.03353v1.pdf
[ "Variational Inference" ]
[]
[]
WyeetBIX3H
https://paperswithcode.com/paper/pac-bayesian-auc-classification-and-scoring
PAC-Bayesian AUC classification and scoring
We develop a scoring and classification procedure based on the PAC-Bayesian approach and the AUC (Area Under Curve) criterion. We focus initially on the class of linear score functions. We derive PAC-Bayesian non-asymptotic bounds for two types of prior for the score parameters: a Gaussian prior, and a spike-and-slab prior; the latter makes it possible to perform feature selection. One important advantage of our approach is that it is amenable to powerful Bayesian computational tools. We derive in particular a Sequential Monte Carlo algorithm, as an efficient method which may be used as a gold standard, and an Expectation-Propagation algorithm, as a much faster but approximate method. We also extend our method to a class of non-linear score functions, essentially leading to a nonparametric procedure, by considering a Gaussian process prior.
1410.1771
http://arxiv.org/abs/1410.1771v2
http://arxiv.org/pdf/1410.1771v2.pdf
[ "Feature Selection" ]
[]
[]
SKhUaa7_DY
https://paperswithcode.com/paper/arabic-segmentation-combination-strategies
Arabic-Segmentation Combination Strategies for Statistical Machine Translation
Arabic segmentation has already been applied successfully to the task of statistical machine translation (SMT). Yet, there is no consistent comparison of the effect of different techniques and methods on the final translation quality. In this work, we use existing tools and further re-implement and develop new methods for segmentation. We compare the resulting SMT systems based on the different segmentation methods over the small IWSLT 2010 BTEC and the large NIST 2009 Arabic-to-English translation tasks. Our results show that for both small and large training data, segmentation yields strong improvements, but the differences between the top-ranked segmenters are statistically insignificant. Due to the different methodologies that we apply for segmentation, we expect a complementary variation in the results achieved by each method. As done in previous work, we combine several segmentation schemes of the same model but achieve modest improvements. Next, we try a different strategy, where we combine the different segmentation methods rather than the different segmentation schemes. In this case, we achieve stronger improvements over the best single system. Finally, combining schemes and methods yields another slight gain over the best combination strategy.
null
https://www.aclweb.org/anthology/L12-1279/
http://www.lrec-conf.org/proceedings/lrec2012/pdf/509_Paper.pdf
[ "Machine Translation" ]
[]
[]
eeiXs8OOvJ
https://paperswithcode.com/paper/single-shot-6d-object-pose-estimation
Single Shot 6D Object Pose Estimation
In this paper, we introduce a novel single shot approach for 6D object pose estimation of rigid objects based on depth images. For this purpose, a fully convolutional neural network is employed, where the 3D input data is spatially discretized and pose estimation is considered as a regression task that is solved locally on the resulting volume elements. With 65 fps on a GPU, our Object Pose Network (OP-Net) is extremely fast, is optimized end-to-end, and estimates the 6D pose of multiple objects in the image simultaneously. Our approach does not require real-world datasets with manual 6D pose annotations and transfers to the real world, despite being trained entirely on synthetic data. The proposed method is evaluated on public benchmark datasets, where we can demonstrate that state-of-the-art methods are significantly outperformed.
2004.12729
https://arxiv.org/abs/2004.12729v1
https://arxiv.org/pdf/2004.12729v1.pdf
[ "6D Pose Estimation using RGB", "Pose Estimation" ]
[]
[]
fAi67idBGS
https://paperswithcode.com/paper/inherent-weight-normalization-in-stochastic
Inherent Weight Normalization in Stochastic Neural Networks
Multiplicative stochasticity such as Dropout improves the robustness and generalizability of deep neural networks. Here, we further demonstrate that always-on multiplicative stochasticity combined with simple threshold neurons are sufficient operations for deep neural networks. We call such models Neural Sampling Machines (NSM). We find that the probability of activation of the NSM exhibits a self-normalizing property that mirrors Weight Normalization, a previously studied mechanism that fulfills many of the features of Batch Normalization in an online fashion. The normalization of activities during training speeds up convergence by preventing internal covariate shift caused by changes in the input distribution. The always-on stochasticity of the NSM confers the following advantages: the network is identical in the inference and learning phases, making the NSM suitable for online learning, it can exploit stochasticity inherent to a physical substrate such as analog non-volatile memories for in-memory computing, and it is suitable for Monte Carlo sampling, while requiring almost exclusively addition and comparison operations. We demonstrate NSMs on standard classification benchmarks (MNIST and CIFAR) and event-based classification benchmarks (N-MNIST and DVS Gestures). Our results show that NSMs perform comparably or better than conventional artificial neural networks with the same architecture.
1910.12316
https://arxiv.org/abs/1910.12316v1
https://arxiv.org/pdf/1910.12316v1.pdf
[]
[ "Weight Normalization", "Dropout", "Batch Normalization" ]
[]
c45IPEAU_1
https://paperswithcode.com/paper/beyond-node-embedding-a-direct-unsupervised
Beyond Node Embedding: A Direct Unsupervised Edge Representation Framework for Homogeneous Networks
Network representation learning has traditionally been used to find lower dimensional vector representations of the nodes in a network. However, there are very important edge driven mining tasks of interest to the classical network analysis community, which have mostly been unexplored in the network embedding space. For applications such as link prediction in homogeneous networks, vector representation (i.e., embedding) of an edge is derived heuristically just by using simple aggregations of the embeddings of the end vertices of the edge. Clearly, this method of deriving edge embedding is suboptimal and there is a need for a dedicated unsupervised approach for embedding edges by leveraging edge properties of the network. Towards this end, we propose a novel concept of converting a network to its weighted line graph which is ideally suited to find the embedding of edges of the original network. We further derive a novel algorithm to embed the line graph, by introducing the concept of collective homophily. To the best of our knowledge, this is the first direct unsupervised approach for edge embedding in homogeneous information networks, without relying on the node embeddings. We validate the edge embeddings on three downstream edge mining tasks. Our proposed optimization framework for edge embedding also generates a set of node embeddings, which are not just the aggregation of edges. Further experimental analysis shows the connection of our framework to the concept of node centrality.
1912.05140
https://arxiv.org/abs/1912.05140v1
https://arxiv.org/pdf/1912.05140v1.pdf
[ "Link Prediction", "Network Embedding", "Representation Learning" ]
[ "LINE" ]
[]
tfFDdXqlOa
https://paperswithcode.com/paper/machine-intelligence-for-outcome-predictions
Machine Intelligence for Outcome Predictions of Trauma Patients During Emergency Department Care
Trauma mortality results from a multitude of non-linear dependent risk factors including patient demographics, injury characteristics, medical care provided, and characteristics of medical facilities; yet traditional approaches have attempted to capture these relationships using rigid regression models. We hypothesized that a transfer learning based machine learning algorithm could deeply understand a trauma patient's condition and accurately identify individuals at high risk for mortality without relying on restrictive regression model criteria. Anonymous patient visit data were obtained from years 2007-2014 of the National Trauma Data Bank. Patients with incomplete vitals, unknown outcome, or missing demographics data were excluded. All patient visits occurred in U.S. hospitals, and of the 2,007,485 encounters that were retrospectively examined, 8,198 resulted in mortality (0.4%). The machine intelligence model was evaluated on its sensitivity, specificity, positive and negative predictive value, and Matthews Correlation Coefficient. Our model achieved similar performance in age-specific comparison models and generalized well when applied to all ages simultaneously. While testing for confounding factors, we discovered that excluding fall-related injuries boosted performance for adult trauma patients; however, it reduced performance for children. The machine intelligence model described here demonstrates similar performance to contemporary machine intelligence models without requiring restrictive regression model criteria or extensive medical expertise.
2009.03873
https://arxiv.org/abs/2009.03873v2
https://arxiv.org/pdf/2009.03873v2.pdf
[ "Transfer Learning" ]
[]
[]
ArNOEmUuFw
https://paperswithcode.com/paper/what-question-answering-can-learn-from-trivia
What Question Answering can Learn from Trivia Nerds
In addition to the traditional task of getting machines to answer questions, a major research question in question answering is to create interesting, challenging questions that can help systems learn how to answer questions and also reveal which systems are the best at answering questions. We argue that creating a question answering dataset -- and the ubiquitous leaderboard that goes with it -- closely resembles running a trivia tournament: you write questions, have agents (either humans or machines) answer the questions, and declare a winner. However, the research community has ignored the hard-learned lessons from decades of the trivia community creating vibrant, fair, and effective question answering competitions. After detailing problems with existing QA datasets, we outline the key lessons -- removing ambiguity, discriminating skill, and adjudicating disputes -- that can transfer to QA research and how they might be implemented for the QA community.
1910.14464
https://arxiv.org/abs/1910.14464v3
https://arxiv.org/pdf/1910.14464v3.pdf
[ "Question Answering" ]
[]
[]
JuTS5gdNAe
https://paperswithcode.com/paper/pretrain-to-finetune-adversarial-training-via
Pretrain-to-Finetune Adversarial Training via Sample-wise Randomized Smoothing
Developing certified models that can provably defend against adversarial perturbations is important in machine learning security. Recently, randomized smoothing, combined with other techniques (Cohen et al., 2019; Salman et al., 2019), has been shown to be an effective method to certify models under $l_2$ perturbations. Existing work for certifying $l_2$ perturbations added the same level of Gaussian noise to each sample. The noise level determines the trade-off between the test accuracy and the average certified robust radius. We propose to further improve the defense via sample-wise randomized smoothing, which assigns different noise levels to different samples. Specifically, we propose a pretrain-to-finetune framework that first pretrains a model and then adjusts the noise levels for higher performance based on the model's outputs. For certification, we carefully allocate specific robust regions for each test sample. We perform extensive experiments on the CIFAR-10 and MNIST datasets and the experimental results demonstrate that our method can achieve a better accuracy-robustness trade-off in the transductive setting.
null
https://openreview.net/forum?id=Te1aZ2myPIu
https://openreview.net/pdf?id=Te1aZ2myPIu
[]
[]
[]
xLRonpmdTo
https://paperswithcode.com/paper/covid-19base-a-knowledgebase-to-explore
COVID-19Base: A knowledgebase to explore biomedical entities related to COVID-19
We present COVID-19Base, a knowledgebase highlighting the biomedical entities related to COVID-19 disease based on literature mining. To develop COVID-19Base, we mine the information from publicly available scientific literature and related public resources. Seven topic-specific dictionaries, covering human genes, human miRNAs, human lncRNAs, diseases, the Protein Data Bank, drugs, and drug side effects, are integrated to mine all scientific evidence related to COVID-19. We have employed an automated literature mining and labeling system through a novel approach to measure the effectiveness of drugs against diseases based on natural language processing, sentiment analysis, and deep learning. To the best of our knowledge, this is the first knowledgebase dedicated to COVID-19, which integrates such a large variety of related biomedical entities through literature mining. Proper investigation of the mined biomedical entities, along with the identified interactions among them, reported in COVID-19Base, would help the research community to discover possible ways for the therapeutic treatment of COVID-19.
2005.05954
https://arxiv.org/abs/2005.05954v1
https://arxiv.org/pdf/2005.05954v1.pdf
[ "Sentiment Analysis" ]
[]
[]
Xm0XngH9Hr
https://paperswithcode.com/paper/novel-radiomic-feature-for-survival
Novel Radiomic Feature for Survival Prediction of Lung Cancer Patients using Low-Dose CBCT Images
Prediction of survivability in a patient for tumor progression is useful to estimate the effectiveness of a treatment protocol. In our work, we present a model to take into account the heterogeneous nature of a tumor to predict survival. The tumor heterogeneity is measured in terms of its mass by combining information regarding the radiodensity obtained in images with the gross tumor volume (GTV). We propose a novel feature called Tumor Mass within a GTV (TMG), which improves the prediction of survivability, compared to existing models which use GTV. Weekly variation in TMG of a patient is computed from the image data and also estimated from a cell survivability model. The parameters obtained from the cell survivability model are indicative of changes in TMG over the treatment period. We use these parameters along with other patient metadata to perform survival analysis and regression. Cox's Proportional Hazard survival regression was performed using these data. Significant improvement in the average concordance index from 0.47 to 0.64 was observed when TMG was used in the model instead of GTV. The experiments show that there is a difference in the treatment response between responsive and non-responsive patients and that the proposed method can be used to predict patient survivability.
2003.03537
https://arxiv.org/abs/2003.03537v1
https://arxiv.org/pdf/2003.03537v1.pdf
[ "Survival Analysis" ]
[]
[]
mYA4KSYsQg
https://paperswithcode.com/paper/neural-networks-versus-logistic-regression
Neural networks versus Logistic regression for 30 days all-cause readmission prediction
Heart failure (HF) is one of the leading causes of hospital admissions in the US. Readmission within 30 days after a HF hospitalization is both a recognized indicator for disease progression and a source of considerable financial burden to the healthcare system. Consequently, the identification of patients at risk for readmission is a key step in improving disease management and patient outcome. In this work, we used a large administrative claims dataset to (1) explore the systematic application of neural network-based models versus logistic regression for predicting 30 days all-cause readmission after discharge from a HF admission, and (2) examine the additive value of patients' hospitalization timelines on prediction performance. Based on data from 272,778 (49% female) patients with a mean (SD) age of 73 years (14) and 343,328 HF admissions (67% of total admissions), we trained and tested our predictive readmission models following a stratified 5-fold cross-validation scheme. Among the deep learning approaches, a recurrent neural network (RNN) combined with conditional random fields (CRF) model (RNNCRF) achieved the best performance in readmission prediction with 0.642 AUC (95% CI, 0.640-0.645). Other models, such as those based on RNN, convolutional neural networks and CRF alone had lower performance, with a non-timeline based model (MLP) performing worst. A competitive model based on logistic regression with LASSO achieved a performance of 0.643 AUC (95% CI, 0.640-0.646). We conclude that data from patient timelines improve 30 day readmission prediction for neural network-based models, that a logistic regression with LASSO has equal performance to the best neural network model and that the use of administrative data results in competitive performance compared to published approaches based on richer clinical datasets.
1812.09549
http://arxiv.org/abs/1812.09549v1
http://arxiv.org/pdf/1812.09549v1.pdf
[ "Readmission Prediction" ]
[ "Logistic Regression" ]
[]
PGs2njtXWk
https://paperswithcode.com/paper/successive-point-of-interest-recommendation
Successive Point-of-Interest Recommendation with Local Differential Privacy
A point-of-interest (POI) recommendation system plays an important role in location-based services (LBS) because it can help people explore new locations and enable advertisers to launch ads to target users. Existing POI recommendation methods need users' raw check-in data, which can lead to location privacy breaches. Even worse, several privacy-preserving recommendation systems cannot utilize the transition patterns in human movement. To address these problems, we propose the Successive Point-of-Interest REcommendation with Local differential privacy (SPIREL) framework. SPIREL employs two types of sources from users' check-in history: a transition pattern between two POIs and visiting counts of POIs. We propose a novel objective function for learning the user-POI and POI-POI relationships simultaneously. We further propose two privacy-preserving mechanisms to train our recommendation system. Experiments using two public datasets demonstrate that SPIREL achieves better POI recommendation quality while preserving stronger privacy for check-in history.
1908.09485
https://arxiv.org/abs/1908.09485v1
https://arxiv.org/pdf/1908.09485v1.pdf
[ "Recommendation Systems" ]
[]
[]
yXIDM6swFu
https://paperswithcode.com/paper/metaphor-detection-using-ensembles-of
Metaphor Detection using Ensembles of Bidirectional Recurrent Neural Networks
In this paper we present our results from the Second Shared Task on Metaphor Detection, hosted by the Second Workshop on Figurative Language Processing. We use an ensemble of RNN models with bidirectional LSTMs and bidirectional attention mechanisms. Some of the models were trained on all parts of speech. Each of the other models was trained on one of four categories for parts of speech: "nouns", "verbs", "adverbs/adjectives", or "other". The models were combined into voting pools and the voting pools were combined using the logical "OR" operator.
null
https://www.aclweb.org/anthology/2020.figlang-1.33/
https://www.aclweb.org/anthology/2020.figlang-1.33
[]
[]
[]
tdpjCd9qAp
https://paperswithcode.com/paper/fast-saddle-point-algorithm-for-generalized
Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with the Ordered l1-Norm
In this paper we propose a primal-dual proximal extragradient algorithm to solve the generalized Dantzig selector (GDS) estimation problem, based on a new convex-concave saddle-point (SP) reformulation. Our new formulation makes it possible to adopt recent developments in saddle-point optimization, to achieve the optimal $O(1/k)$ rate of convergence. Compared to the optimal non-SP algorithms, ours do not require specification of sensitive parameters that affect algorithm performance or solution quality. We also provide a new analysis showing a possibility of local acceleration to achieve the rate of $O(1/k^2)$ in special cases even without strong convexity or strong smoothness. As an application, we propose a GDS equipped with the ordered $\ell_1$-norm, showing its false discovery rate control properties in variable selection. Algorithm performance is compared between ours and other alternatives, including the linearized ADMM, Nesterov's smoothing, Nemirovski's mirror-prox, and the accelerated hybrid proximal extragradient techniques.
1511.05864
http://arxiv.org/abs/1511.05864v3
http://arxiv.org/pdf/1511.05864v3.pdf
[]
[ "ADMM" ]
[]
Su3wrT-_MQ
https://paperswithcode.com/paper/lightweight-residual-network-for-the
Lightweight Residual Network for The Classification of Thyroid Nodules
Ultrasound is a useful technique for diagnosing thyroid nodules. Automatically discriminating between benign and malignant nodules in ultrasound images can provide diagnostic recommendations or improve diagnostic accuracy in the absence of specialists. The main issue is how to collect suitable features for this particular task. We suggest a technique for extracting features from ultrasound images based on the Residual U-Net. We attempt to introduce significant semantic characteristics to the classification. Our model achieved 95% classification accuracy.
1911.08303
https://arxiv.org/abs/1911.08303v1
https://arxiv.org/pdf/1911.08303v1.pdf
[]
[]
[]
qg97JPkseR
https://paperswithcode.com/paper/a-comprehensive-analysis-of-information
A Comprehensive Analysis of Information Leakage in Deep Transfer Learning
Transfer learning is widely used for transferring knowledge from a source domain to a target domain where labeled data is scarce. Recently, deep transfer learning has achieved remarkable progress in various applications. However, in many real-world scenarios the source and target datasets belong to two different organizations, which poses potential privacy issues in deep transfer learning. In this study, to thoroughly analyze the potential privacy leakage in deep transfer learning, we first divide previous methods into three categories. Based on that, we demonstrate specific threats that lead to unintentional privacy leakage in each category. Additionally, we provide some solutions to prevent these threats. To the best of our knowledge, our study is the first to provide a thorough analysis of the information leakage issues in deep transfer learning methods and provide potential solutions to the issue. Extensive experiments on two public datasets and an industry dataset are conducted to show the privacy leakage under different deep transfer learning settings and the effectiveness of the defense solutions.
2009.01989
https://arxiv.org/abs/2009.01989v1
https://arxiv.org/pdf/2009.01989v1.pdf
[ "Transfer Learning" ]
[]
[]
zVtmuFJQ5e
https://paperswithcode.com/paper/finding-statistically-significant-communities
Finding statistically significant communities in networks
Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is a great need for multi-purpose techniques, able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable of detecting clusters in networks while accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure of partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method has comparable performance to the best existing algorithms on artificial benchmark graphs. Several applications on real networks are shown as well. OSLOM is implemented in a freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks.
1012.2363
https://arxiv.org/abs/1012.2363v2
https://arxiv.org/pdf/1012.2363v2.pdf
[]
[]
[]
2z9Up03Vp6
https://paperswithcode.com/paper/when-compressive-learning-fails-blame-the
When compressive learning fails: blame the decoder or the sketch?
In compressive learning, a mixture model (a set of centroids or a Gaussian mixture) is learned from a sketch vector, that serves as a highly compressed representation of the dataset. This requires solving a non-convex optimization problem, hence in practice approximate heuristics (such as CLOMPR) are used. In this work we explore, by numerical simulations, properties of this non-convex optimization landscape and those heuristics.
2009.08273
https://arxiv.org/abs/2009.08273v1
https://arxiv.org/pdf/2009.08273v1.pdf
[]
[]
[]
SuljoLi9wP
https://paperswithcode.com/paper/back-to-rgb-3d-tracking-of-hands-and-hand
Back to RGB: 3D tracking of hands and hand-object interactions based on short-baseline stereo
We present a novel solution to the problem of 3D tracking of the articulated motion of human hand(s), possibly in interaction with other objects. The vast majority of contemporary relevant work capitalizes on depth information provided by RGBD cameras. In this work, we show that accurate and efficient 3D hand tracking is possible, even for the case of RGB stereo. A straightforward approach for solving the problem based on such input would be to first recover depth and then apply a state of the art depth-based 3D hand tracking method. Unfortunately, this does not work well in practice because the stereo-based, dense 3D reconstruction of hands is far less accurate than the one obtained by RGBD cameras. Our approach bypasses 3D reconstruction and follows a completely different route: 3D hand tracking is formulated as an optimization problem whose solution is the hand configuration that maximizes the color consistency between the two views of the hand. We demonstrate the applicability of our method for real time tracking of a single hand, of a hand manipulating an object and of two interacting hands. The method has been evaluated quantitatively on standard datasets and in comparison to relevant, state of the art RGBD-based approaches. The obtained results demonstrate that the proposed stereo-based method performs equally well to its RGBD-based competitors, and in some cases, it even outperforms them.
1705.05301
http://arxiv.org/abs/1705.05301v1
http://arxiv.org/pdf/1705.05301v1.pdf
[ "3D Reconstruction" ]
[]
[]
4av5kmMul7
https://paperswithcode.com/paper/clipper-a-low-latency-online-prediction
Clipper: A Low-Latency Online Prediction Serving System
Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy query load. However, most machine learning frameworks and systems only address model training and not deployment. In this paper, we introduce Clipper, a general-purpose low-latency prediction serving system. Interposing between end-user applications and a wide range of machine learning frameworks, Clipper introduces a modular architecture to simplify model deployment across frameworks and applications. Furthermore, by introducing caching, batching, and adaptive model selection techniques, Clipper reduces prediction latency and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. We evaluate Clipper on four common machine learning benchmark datasets and demonstrate its ability to meet the latency, accuracy, and throughput demands of online serving applications. Finally, we compare Clipper to the TensorFlow Serving system and demonstrate that we are able to achieve comparable throughput and latency while enabling model composition and online learning to improve accuracy and render more robust predictions.
1612.03079
http://arxiv.org/abs/1612.03079v2
http://arxiv.org/pdf/1612.03079v2.pdf
[ "Model Selection" ]
[]
[]
zusAlGQ5mD
https://paperswithcode.com/paper/n-ode-transformer-a-depth-adaptive-variant-of
N-ODE Transformer: A Depth-Adaptive Variant of the Transformer Using Neural Ordinary Differential Equations
We use neural ordinary differential equations to formulate a variant of the Transformer that is depth-adaptive in the sense that an input-dependent number of time steps is taken by the ordinary differential equation solver. Our goal in proposing the N-ODE Transformer is to investigate whether its depth-adaptivity may aid in overcoming some specific known theoretical limitations of the Transformer in handling nonlocal effects. Specifically, we consider the simple problem of determining the parity of a binary sequence, for which the standard Transformer has known limitations that can only be overcome by using a sufficiently large number of layers or attention heads. We find, however, that the depth-adaptivity of the N-ODE Transformer does not provide a remedy for the inherently nonlocal nature of the parity problem, and provide explanations for why this is so. Next, we pursue regularization of the N-ODE Transformer by penalizing the arclength of the ODE trajectories, but find that this fails to improve the accuracy or efficiency of the N-ODE Transformer on the challenging parity problem. We suggest future avenues of research for modifications and extensions of the N-ODE Transformer that may lead to improved accuracy and efficiency for sequence modelling tasks such as neural machine translation.
2010.11358
https://arxiv.org/abs/2010.11358v1
https://arxiv.org/pdf/2010.11358v1.pdf
[ "Machine Translation" ]
[ "Residual Connection", "Adam", "Dense Connections", "Softmax", "Multi-Head Attention", "Scaled Dot-Product Attention", "Transformer" ]
[]
vlE3zqEemS
https://paperswithcode.com/paper/rendu-base-image-avec-contraintes-sur-les
Rendu basé image avec contraintes sur les gradients (Image-Based Rendering with Gradient Constraints)
Multi-view image-based rendering consists in generating a novel view of a scene from a set of source views. In general, this works by first doing a coarse 3D reconstruction of the scene, and then using this reconstruction to establish correspondences between source and target views, followed by blending the warped views to get the final image. Unfortunately, discontinuities in the blending weights, due to scene geometry or camera placement, result in artifacts in the target view. In this paper, we show how to avoid these artifacts by imposing additional constraints on the image gradients of the novel view. We propose a variational framework in which an energy functional is derived and optimized by iteratively solving a linear system. We demonstrate this method on several structured and unstructured multi-view datasets, and show that it numerically outperforms state-of-the-art methods, and eliminates artifacts that result from visibility discontinuities
1812.11339
http://arxiv.org/abs/1812.11339v1
http://arxiv.org/pdf/1812.11339v1.pdf
[ "3D Reconstruction" ]
[]
[]
bTpVIHN0CW
https://paperswithcode.com/paper/collaborative-planning-for-mixed-autonomy
Collaborative Planning for Mixed-Autonomy Lane Merging
Driving is a social activity: drivers often indicate their intent to change lanes via motion cues. We consider mixed-autonomy traffic where a Human-driven Vehicle (HV) and an Autonomous Vehicle (AV) drive together. We propose a planning framework where the degree to which the AV considers the other agent's reward is controlled by a selfishness factor. We test our approach on a simulated two-lane highway where the AV and HV merge into each other's lanes. In a user study with 21 subjects and 6 different selfishness factors, we found that our planning approach was sound and that both agents had less merging times when a factor that balances the rewards for the two agents was chosen. Our results on double lane merging suggest it to be a non-zero-sum game and encourage further investigation on collaborative decision making algorithms for mixed-autonomy traffic.
1808.02550
http://arxiv.org/abs/1808.02550v1
http://arxiv.org/pdf/1808.02550v1.pdf
[ "Decision Making" ]
[]
[]
1pH1X-MG14
https://paperswithcode.com/paper/using-latinflexi-for-an-entropy-based
Using LatInfLexi for an Entropy-Based Assessment of Predictability in Latin Inflection
This paper presents LatInfLexi, a large inflected lexicon of Latin providing information on all the inflected wordforms of 3,348 verbs and 1,038 nouns. After a description of the structure of the resource and some data on its size, the procedure followed to obtain the lexicon from the database of the Lemlat 3.0 morphological analyzer is detailed, as well as the choices made regarding overabundant and defective cells. The way in which the data of LatInfLexi can be exploited in order to perform a quantitative assessment of predictability in Latin verb inflection is then illustrated: results obtained by computing the conditional entropy of guessing the content of a paradigm cell assuming knowledge of one wordform or multiple wordforms are presented in turn, highlighting the descriptive and theoretical relevance of the analysis. Lastly, the paper envisages the advantages of an inclusion of LatInfLexi into the LiLa knowledge base, both for the presented resource and for the knowledge base itself.
null
https://www.aclweb.org/anthology/2020.lt4hala-1.6/
https://www.aclweb.org/anthology/2020.lt4hala-1.6
[]
[]
[]
XMPzceZSse
https://paperswithcode.com/paper/a-simple-efficient-density-estimator-that
A simple efficient density estimator that enables fast systematic search
This paper introduces a simple and efficient density estimator that enables fast systematic search. To show its advantage over the commonly used kernel density estimator, we apply it to outlying aspects mining. Outlying aspects mining discovers feature subsets (or subspaces) that describe how a query stands out from a given dataset. The task demands a systematic search of subspaces. We identify that existing outlying aspects miners are restricted to datasets with small data size and dimensions because they employ a kernel density estimator, which is computationally expensive, for subspace assessments. We show that a recent outlying aspects miner can run orders of magnitude faster by simply replacing its density estimator with the proposed density estimator, enabling it to deal with large datasets with thousands of dimensions that would otherwise be impossible.
1707.00783
http://arxiv.org/abs/1707.00783v2
http://arxiv.org/pdf/1707.00783v2.pdf
[ "Small Data Image Classification" ]
[]
[]
pobKe792ce
https://paperswithcode.com/paper/on-the-reduction-of-variance-and
On the Reduction of Variance and Overestimation of Deep Q-Learning
The breakthrough of deep Q-Learning on different types of environments revolutionized the algorithmic design of Reinforcement Learning, introducing more stable and robust algorithms; to that end, many extensions to the deep Q-Learning algorithm have been proposed to reduce the variance of the target values and the overestimation phenomenon. In this paper, we examine a new methodology to address these issues: we propose applying Dropout techniques to the deep Q-Learning algorithm as a way to reduce variance and overestimation. We further present experiments on several benchmark environments that demonstrate a significant improvement in performance stability and a reduction in variance and overestimation.
1910.05983
https://arxiv.org/abs/1910.05983v1
https://arxiv.org/pdf/1910.05983v1.pdf
[ "Q-Learning" ]
[ "Q-Learning", "Dropout" ]
[]
v6oO-IrNiT
https://paperswithcode.com/paper/dual-subtitles-as-parallel-corpora
Dual Subtitles as Parallel Corpora
In this paper, we leverage the existence of dual subtitles as a source of parallel data. Dual subtitles present viewers with two languages simultaneously, and are generally aligned in the segment level, which removes the need to automatically perform this alignment. This is desirable as extracted parallel data does not contain alignment errors present in previous work that aligns different subtitle files for the same movie. We present a simple heuristic to detect and extract dual subtitles and show that more than 20 million sentence pairs can be extracted for the Mandarin-English language pair. We also show that extracting data from this source can be a viable solution for improving Machine Translation systems in the domain of subtitles.
null
https://www.aclweb.org/anthology/L14-1137/
http://www.lrec-conf.org/proceedings/lrec2014/pdf/1199_Paper.pdf
[ "Machine Translation", "Word Sense Disambiguation" ]
[]
[]
MbhS2NW-ic
https://paperswithcode.com/paper/robust-physical-world-attacks-on-deep-1
Robust Physical-World Attacks on Deep Learning Visual Classification
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.
null
http://openaccess.thecvf.com/content_cvpr_2018/html/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Eykholt_Robust_Physical-World_Attacks_CVPR_2018_paper.pdf
[]
[]
[]
5zo1q4D1X_
https://paperswithcode.com/paper/a-convolutional-approach-to-vertebrae
A Convolutional Approach to Vertebrae Detection and Labelling in Whole Spine MRI
We propose a novel convolutional method for the detection and identification of vertebrae in whole spine MRIs. This involves using a learnt vector field to group detected vertebrae corners together into individual vertebral bodies and convolutional image-to-image translation followed by beam search to label vertebral levels in a self-consistent manner. The method can be applied without modification to lumbar, cervical and thoracic-only scans across a range of different MR sequences. The resulting system achieves 98.1% detection rate and 96.5% identification rate on a challenging clinical dataset of whole spine scans and matches or exceeds the performance of previous systems on lumbar-only scans. Finally, we demonstrate the clinical applicability of this method, using it for automated scoliosis detection in both lumbar and whole spine MR scans.
2007.02606
https://arxiv.org/abs/2007.02606v3
https://arxiv.org/pdf/2007.02606v3.pdf
[ "Image-to-Image Translation" ]
[]
[]
Z_RDNRXXhj
https://paperswithcode.com/paper/if-dropout-limits-trainable-depth-does
If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks
Recent work in signal propagation theory has shown that dropout limits the depth to which information can propagate through a neural network. In this paper, we investigate the effect of initialisation on training speed and generalisation for ReLU networks within this depth limit. We ask the following research question: given that critical initialisation is crucial for training at large depth, if dropout limits the depth at which networks are trainable, does initialising critically still matter? We conduct a large-scale controlled experiment, and perform a statistical analysis of over $12000$ trained networks. We find that (1) trainable networks show no statistically significant difference in performance over a wide range of non-critical initialisations; (2) for initialisations that show a statistically significant difference, the net effect on performance is small; (3) only extreme initialisations (very small or very large) perform worse than criticality. These findings also apply to standard ReLU networks of moderate depth as a special case of zero dropout. Our results therefore suggest that, in the shallow-to-moderate depth setting, critical initialisation provides zero performance gains when compared to off-critical initialisations and that searching for off-critical initialisations that might improve training speed or generalisation, is likely to be a fruitless endeavour.
1910.05725
https://arxiv.org/abs/1910.05725v2
https://arxiv.org/pdf/1910.05725v2.pdf
[]
[ "ReLU", "Dropout" ]
[]
g_DfPAB0o0
https://paperswithcode.com/paper/inferring-analogous-attributes
Inferring Analogous Attributes
The appearance of an attribute can vary considerably from class to class (e.g., a "fluffy" dog vs. a "fluffy" towel), making standard class-independent attribute models break down. Yet, training object-specific models for each attribute can be impractical, and defeats the purpose of using attributes to bridge category boundaries. We propose a novel form of transfer learning that addresses this dilemma. We develop a tensor factorization approach which, given a sparse set of class-specific attribute classifiers, can infer new ones for object-attribute pairs unobserved during training. For example, even though the system has no labeled images of striped dogs, it can use its knowledge of other attributes and objects to tailor "stripedness" to the dog category. With two large-scale datasets, we demonstrate both the need for category-sensitive attributes as well as our method's successful transfer. Our inferred attribute classifiers perform similarly well to those trained with the luxury of labeled class-specific instances, and much better than those restricted to traditional modes of transfer.
null
http://openaccess.thecvf.com/content_cvpr_2014/html/Chen_Inferring_Analogous_Attributes_2014_CVPR_paper.html
http://openaccess.thecvf.com/content_cvpr_2014/papers/Chen_Inferring_Analogous_Attributes_2014_CVPR_paper.pdf
[ "Transfer Learning" ]
[]
[]
Kat6JqEnbE
https://paperswithcode.com/paper/rethinking-monocular-depth-estimation-with
Rethinking Monocular Depth Estimation with Adversarial Training
Monocular depth estimation is an extensively studied computer vision problem with a vast variety of applications. Deep learning-based methods have demonstrated promise for both supervised and unsupervised depth estimation from monocular images. Most existing approaches treat depth estimation as a regression problem with a local pixel-wise loss function. In this work, we innovate beyond existing approaches by using adversarial training to learn a context-aware, non-local loss function. Such an approach penalizes the joint configuration of predicted depth values at the patch-level instead of the pixel-level, which allows networks to incorporate more global information. In this framework, the generator learns a mapping between RGB images and its corresponding depth map, while the discriminator learns to distinguish depth map and RGB pairs from ground truth. This conditional GAN depth estimation framework is stabilized using spectral normalization to prevent mode collapse when learning from diverse datasets. We test this approach using a diverse set of generators that include U-Net and joint CNN-CRF. We benchmark this approach on the NYUv2, Make3D and KITTI datasets, and observe that adversarial training reduces relative error by several fold, achieving state-of-the-art performance.
1808.07528
https://arxiv.org/abs/1808.07528v3
https://arxiv.org/pdf/1808.07528v3.pdf
[ "Depth Estimation", "Monocular Depth Estimation" ]
[ "Concatenated Skip Connection", "ReLU", "Max Pooling", "U-Net", "Convolution", "Spectral Normalization", "GAN" ]
[]
mTnyIdP805
https://paperswithcode.com/paper/message-passing-for-probabilistic-models-on
Message passing for probabilistic models on networks with loops
In this paper, we extend a recently proposed framework for message passing on "loopy" networks to the solution of probabilistic models. We derive a self-consistent set of message passing equations that allow for fast computation of probability distributions in systems that contain short loops, potentially with high density, as well as expressions for the entropy and partition function of such systems, which are notoriously difficult quantities to compute. Using the Ising model as an example, we show that our solutions are asymptotically exact on certain classes of networks with short loops and offer a good approximation on more general networks, improving significantly on results derived from standard belief propagation. We also discuss potential applications of our method to a variety of other problems.
2009.12246
https://arxiv.org/abs/2009.12246v1
https://arxiv.org/pdf/2009.12246v1.pdf
[]
[]
[]
UfkN1GsJ34
https://paperswithcode.com/paper/extracting-weighted-language-lexicons-from
Extracting Weighted Language Lexicons from Wikipedia
Language models are used in applications as diverse as speech recognition, optical character recognition and information retrieval. They are used to predict word appearance, and to weight the importance of words in these applications. One basic element of language models is the list of words in a language. Another is the unigram frequency of each word. But this basic information is not available for most languages in the world. Since the multilingual Wikipedia project encourages the production of encyclopedic-like articles in many world languages, we can find there an ever-growing source of text from which to extract these two language modelling elements: word list and frequency. Here we present a simple technique for converting this Wikipedia text into lexicons of weighted unigrams for the more than 280 languages currently present in Wikipedia. The lexicons produced, and the source code for producing them on a Linux-based system, are made freely available on the Web.
null
https://www.aclweb.org/anthology/L16-1217/
https://www.aclweb.org/anthology/L16-1217
[ "Information Retrieval", "Language Modelling", "Optical Character Recognition", "Speech Recognition" ]
[]
[]
BCHmuG2jRn
https://paperswithcode.com/paper/communication-computation-efficient-secure
Communication-Computation Efficient Secure Aggregation for Federated Learning
Federated learning has been spotlighted as a way to train neural network models using data distributed over multiple clients without a need to share private data. Unfortunately, however, it has been shown that data privacy could not be fully guaranteed as adversaries may be able to extract certain information on local data from the model parameters transmitted during federated learning. A recent solution based on the secure aggregation primitive enables privacy-preserving federated learning, but at the expense of significant extra communication/computational resources. In this paper, we propose communication-computation efficient secure aggregation which reduces the amount of communication/computational resources at least by a factor of $\sqrt{n/\log n}$ relative to the existing secure solution without sacrificing data privacy, where $n$ is the number of clients. The key idea behind the suggested scheme is to design the topology of the secret-sharing nodes (denoted by the assignment graph $G$) as sparse random graphs instead of the complete graph corresponding to the existing solution. We first obtain a sufficient condition on $G$ to guarantee reliable and private federated learning. Afterwards, we suggest using the Erdős-Rényi graph as $G$, and provide theoretical guarantees on the reliability/privacy of the proposed scheme. Through extensive real-world experiments, we demonstrate that our scheme, using only 50% of the resources required in the conventional scheme, maintains virtually the same levels of reliability and data privacy in practical federated learning systems.
null
https://openreview.net/forum?id=0h9cYBqucS6
https://openreview.net/pdf?id=0h9cYBqucS6
[ "Federated Learning" ]
[]
[]
1i-fCSHvrA
https://paperswithcode.com/paper/dirac-delta-regression-conditional-density
Dirac Delta Regression: Conditional Density Estimation with Clinical Trials
Personalized medicine seeks to identify the causal effect of treatment for a particular patient as opposed to a clinical population at large. Most investigators estimate such personalized treatment effects by regressing the outcome of a randomized clinical trial (RCT) on patient covariates. The realized value of the outcome may however lie far from the conditional expectation. We therefore introduce a method called Dirac Delta Regression (DDR) that estimates the entire conditional density from RCT data in order to visualize the probabilities across all possible treatment outcomes. DDR transforms the outcome into a set of asymptotically Dirac delta distributions and then estimates the density using non-linear regression. The algorithm can identify significant patient-specific treatment effects even when no population level effect exists. Moreover, DDR outperforms state-of-the-art algorithms in conditional density estimation on average regardless of the need for causal inference.
1905.10330
https://arxiv.org/abs/1905.10330v1
https://arxiv.org/pdf/1905.10330v1.pdf
[ "Causal Inference", "Density Estimation" ]
[]
[]
LtAdUh4G51
https://paperswithcode.com/paper/facial-aging-and-rejuvenation-by-conditional
Facial Aging and Rejuvenation by Conditional Multi-Adversarial Autoencoder with Ordinal Regression
Facial aging and facial rejuvenation analyze a given face photograph to predict a future look or estimate a past look of the person. To achieve this, it is critical to preserve human identity and the corresponding aging progression and regression with high accuracy. However, existing methods cannot simultaneously handle these two objectives well. We propose a novel generative adversarial network based approach, named the Conditional Multi-Adversarial AutoEncoder with Ordinal Regression (CMAAE-OR). It utilizes an age estimation technique to control the aging accuracy and takes a high-level feature representation to preserve personalized identity. Specifically, the face is first mapped to a latent vector through a convolutional encoder. The latent vector is then projected onto the face manifold conditional on the age through a deconvolutional generator. The latent vector preserves personalized face features and the age controls facial aging and rejuvenation. A discriminator and an ordinal regression are imposed on the encoder and the generator in tandem, making the generated face images more photorealistic while simultaneously exhibiting desirable aging effects. In addition, a high-level feature representation is utilized to preserve the personalized identity of the generated face. Experiments on two benchmark datasets demonstrate appealing performance of the proposed method over the state-of-the-art.
1804.02740
http://arxiv.org/abs/1804.02740v1
http://arxiv.org/pdf/1804.02740v1.pdf
[ "Age Estimation" ]
[]
[]
0Vh_3AhRlR
https://paperswithcode.com/paper/random-forest-regression-for-manifold-valued
Random Forest regression for manifold-valued responses
An increasing array of biomedical and computer vision applications requires the predictive modeling of complex data, for example images and shapes. The main challenge when predicting such objects lies in the fact that they do not comply to the assumptions of Euclidean geometry. Rather, they occupy non-linear spaces, a.k.a. manifolds, where it is difficult to define concepts such as coordinates, vectors and expected values. In this work, we construct a non-parametric predictive methodology for manifold-valued objects, based on a distance modification of the Random Forest algorithm. Our method is versatile and can be applied both in cases where the response space is a well-defined manifold, but also when such knowledge is not available. Model fitting and prediction phases only require the definition of a suitable distance function for the observed responses. We validate our methodology using simulations and apply it on a series of illustrative image completion applications, showcasing superior predictive performance, compared to various established regression methods.
1701.08381
http://arxiv.org/abs/1701.08381v2
http://arxiv.org/pdf/1701.08381v2.pdf
[]
[]
[]
totaq2jqu-
https://paperswithcode.com/paper/simulating-user-learning-in-authoritative
Simulating user learning in authoritative technology adoption: An agent based model for council-led smart meter deployment planning in the UK
How do technology users effectively transit from having zero knowledge about a technology to making the best use of it after an authoritative technology adoption? This post-adoption user learning has received little research attention in technology management literature. In this paper we investigate user learning in authoritative technology adoption by developing an agent-based model using the case of council-led smart meter deployment in the UK City of Leeds. Energy consumers gain experience of using smart meters based on the learning curve in behavioural learning. With the agent-based model we carry out experiments to validate the model and test different energy interventions that local authorities can use to facilitate energy consumers' learning and maintain their continuous use of the technology. Our results show that the easier energy consumers become experienced, the more energy-efficient they are and the more energy saving they can achieve; encouraging energy consumers' contacts via various informational means can facilitate their learning; and developing and maintaining their positive attitude toward smart metering can enable them to use the technology continuously. Contributions and energy policy/intervention implications are discussed in this paper.
1607.05912
http://arxiv.org/abs/1607.05912v1
http://arxiv.org/pdf/1607.05912v1.pdf
[]
[]
[]
rCZGhcm3lS
https://paperswithcode.com/paper/cross-corpus-data-augmentation-for-acoustic
Cross-Corpus Data Augmentation for Acoustic Addressee Detection
Acoustic addressee detection (AD) is a modern paralinguistic and dialogue challenge that especially arises in voice assistants. In the present study, we distinguish addressees in two settings (a conversation between several people and a spoken dialogue system, and a conversation between several adults and a child) and introduce the first competitive baseline (unweighted average recall equals 0.891) for the Voice Assistant Conversation Corpus that models the first setting. We jointly solve both classification problems, using three models: a linear support vector machine dealing with acoustic functionals and two neural networks utilising raw waveforms alongside with acoustic low-level descriptors. We investigate how different corpora influence each other, applying the mixup approach to data augmentation. We also study the influence of various acoustic context lengths on AD. Two-second speech fragments turn out to be sufficient for reliable AD. Mixup is shown to be beneficial for merging acoustic data (extracted features but not raw waveforms) from different domains that allows us to reach a higher classification performance on human-machine AD and also for training a multipurpose neural network that is capable of solving both human-machine and adult-child AD problems.
null
https://www.aclweb.org/anthology/W19-5933/
https://www.aclweb.org/anthology/W19-5933
[ "Data Augmentation" ]
[ "Mixup" ]
[]
m6qxxoVQ0k
https://paperswithcode.com/paper/self-adversarial-learning-with-comparative-1
Self-Adversarial Learning with Comparative Discrimination for Text Generation
Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation. In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation.
2001.11691
https://arxiv.org/abs/2001.11691v2
https://arxiv.org/pdf/2001.11691v2.pdf
[ "Text Generation" ]
[]
[]
oYTu0_xuan
https://paperswithcode.com/paper/closed-loop-matters-dual-regression-networks
Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution
Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinite HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, the paired LR-HR data may be unavailable in real-world applications and the underlying degradation method is often unknown. For such a more general case, existing SR models often incur the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping that estimates the down-sampling kernel and reconstructs LR images, which forms a closed-loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.
2003.07018
https://arxiv.org/abs/2003.07018v4
https://arxiv.org/pdf/2003.07018v4.pdf
[ "Image Super-Resolution", "Super Resolution", "Super-Resolution" ]
[]
[]
9-MU2AHd7V
https://paperswithcode.com/paper/metamta-metalearning-method-leveraging
MetaMT,a MetaLearning Method Leveraging Multiple Domain Data for Low Resource Machine Translation
Manipulating training data leads to robust neural models for MT.
1912.05467
https://arxiv.org/abs/1912.05467v1
https://arxiv.org/pdf/1912.05467v1.pdf
[ "Machine Translation" ]
[]
[]
goWs6XkOXl
https://paperswithcode.com/paper/elicitation-complexity-of-statistical
Elicitation Complexity of Statistical Properties
A property, or statistical functional, is said to be elicitable if it minimizes expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limitations of point estimation and empirical risk minimization. While recent work asks which properties are elicitable, we instead advocate for a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of the property. We lay the foundation for a general theory of elicitation complexity, including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest. Building on this foundation, our main result gives tight complexity bounds for the broad class of Bayes risks. We apply these results to several properties of interest, including variance, entropy, norms, and several classes of financial risk measures. We conclude with discussion and open directions.
1506.07212
https://arxiv.org/abs/1506.07212v3
https://arxiv.org/pdf/1506.07212v3.pdf
[]
[]
[]
ukjBCw48S-
https://paperswithcode.com/paper/why-so-gloomy-a-bayesian-explanation-of-human
Why so gloomy? A Bayesian explanation of human pessimism bias in the multi-armed bandit task
How humans make repeated choices among options with imperfectly known reward outcomes is an important problem in psychology and neuroscience. This is often studied using multi-armed bandits, which is also frequently studied in machine learning. We present data from a human stationary bandit experiment, in which we vary the average abundance and variability of reward availability (mean and variance of reward rate distributions). Surprisingly, we find subjects significantly underestimate prior mean of reward rates -- based on their self-report, at the end of a game, on their reward expectation of non-chosen arms. Previously, human learning in the bandit task was found to be well captured by a Bayesian ideal learning model, the Dynamic Belief Model (DBM), albeit under an incorrect generative assumption of the temporal structure - humans assume reward rates can change over time even though they are actually fixed. We find that the "pessimism bias" in the bandit task is well captured by the prior mean of DBM when fitted to human choices; but it is poorly captured by the prior mean of the Fixed Belief Model (FBM), an alternative Bayesian model that (correctly) assumes reward rates to be constants. This pessimism bias is also incompletely captured by a simple reinforcement learning model (RL) commonly used in neuroscience and psychology, in terms of fitted initial Q-values. While it seems sub-optimal, and thus mysterious, that humans have an underestimated prior reward expectation, our simulations show that an underestimated prior mean helps to maximize long-term gain, if the observer assumes volatility when reward rates are stable and utilizes a softmax decision policy instead of the optimal one (obtainable by dynamic programming). This raises the intriguing possibility that the brain underestimates reward rates to compensate for the incorrect non-stationarity assumption in the generative model and a simplified decision policy.
null
http://papers.nips.cc/paper/7764-why-so-gloomy-a-bayesian-explanation-of-human-pessimism-bias-in-the-multi-armed-bandit-task
http://papers.nips.cc/paper/7764-why-so-gloomy-a-bayesian-explanation-of-human-pessimism-bias-in-the-multi-armed-bandit-task.pdf
[ "Multi-Armed Bandits" ]
[ "Softmax" ]
[]
u9ewEjzoTN
https://paperswithcode.com/paper/analyze-and-development-system-with-multiple
Analyze and Development System with Multiple Biometric Identification
Because of the rapid increase in technological development, identity theft and consumer fraud are growing, and the threat to personal data also increases every day. Methods developed earlier to protect personal information from theft were neither effective nor safe. Biometrics were introduced when a technology for more efficient security of personal information was needed. Old-fashioned traditional approaches such as the personal identification number (PIN), passwords, keys, and login IDs can be forgotten, stolen or lost. In a biometric authentication system, the user does not need to remember any passwords or carry any keys. Just as people recognize each other by physical appearance and behavioral characteristics, biometric systems use physical characteristics, such as fingerprints, facial recognition and voice recognition, to distinguish between the actual user and a scammer. In order to increase safety, biometric identification methods were developed in 2005 for the government and business sectors, but today they have reached almost all private sectors, such as banking, finance, home security and protection, healthcare, and business security. Since the biometric samples and templates of a system that uses a single biometric trait to identify the user can be replaced and duplicated, the idea of merging multiple biometric identification technologies, the so-called multimodal biometric recognition systems, has been introduced; such systems use two or more biometric characteristics of the individual to determine whether they are the real user or not.
2004.04911
https://arxiv.org/abs/2004.04911v1
https://arxiv.org/pdf/2004.04911v1.pdf
[]
[]
[]
YnjrQGDnfH
https://paperswithcode.com/paper/a-discriminative-framework-for-anomaly
A Discriminative Framework for Anomaly Detection in Large Videos
We address an anomaly detection setting in which training sequences are unavailable and anomalies are scored independently of temporal ordering. Current algorithms in anomaly detection are based on the classical density estimation approach of learning high-dimensional models and finding low-probability events. These algorithms are sensitive to the order in which anomalies appear and require either training data or early context assumptions that do not hold for longer, more complex videos. By defining anomalies as examples that can be distinguished from other examples in the same video, our definition inspires a shift in approaches from classical density estimation to simple discriminative learning. Our contributions include a novel framework for anomaly detection that is (1) independent of temporal ordering of anomalies, and (2) unsupervised, requiring no separate training sequences. We show that our algorithm can achieve state-of-the-art results even when we adjust the setting by removing training sequences from standard datasets.
1609.08938
http://arxiv.org/abs/1609.08938v1
http://arxiv.org/pdf/1609.08938v1.pdf
[ "Anomaly Detection", "Density Estimation" ]
[]
[]
e9xzlFs5zp
https://paperswithcode.com/paper/decamouflage-a-framework-to-detect-image
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks
As an essential processing step in computer vision applications, image resizing or scaling, more specifically downsampling, has to be applied before feeding a normally large image into a convolutional neural network (CNN) model because CNN models typically take small fixed-size images as inputs. However, image scaling functions could be adversarially abused to perform a newly revealed attack called image-scaling attack, which can affect a wide range of computer vision applications building upon image-scaling functions. This work presents an image-scaling attack detection framework, termed as Decamouflage. Decamouflage consists of three independent detection methods: (1) rescaling, (2) filtering/pooling, and (3) steganalysis. While each of these three methods is efficient standalone, they can work in an ensemble manner not only to improve the detection accuracy but also to harden potential adaptive attacks. Decamouflage has a pre-determined detection threshold that is generic. More precisely, as we have validated, the threshold determined from one dataset is also applicable to other different datasets. Extensive experiments show that Decamouflage achieves detection accuracy of 99.9\% and 99.8\% in the white-box (with the knowledge of attack algorithms) and the black-box (without the knowledge of attack algorithms) settings, respectively. To corroborate the efficiency of Decamouflage, we have also measured its run-time overhead on a personal PC with an i5 CPU and found that Decamouflage can detect image-scaling attacks in milliseconds. Overall, Decamouflage can accurately detect image scaling attacks in both white-box and black-box settings with acceptable run-time overhead.
2010.03735
https://arxiv.org/abs/2010.03735v1
https://arxiv.org/pdf/2010.03735v1.pdf
[]
[]
[]
ddRaWXa4ls
https://paperswithcode.com/paper/deepdownscale-a-deep-learning-strategy-for
DeepDownscale: a Deep Learning Strategy for High-Resolution Weather Forecast
Running high-resolution physical models is computationally expensive and essential for many disciplines. Agriculture, transportation, and energy are sectors that depend on high-resolution weather models, which typically consume many hours of large High Performance Computing (HPC) systems to deliver timely results. Many users cannot afford to run the desired resolution and are forced to use low resolution output. One simple solution is to interpolate results for visualization. It is also possible to combine an ensemble of low resolution models to obtain a better prediction. However, these approaches fail to capture the redundant information and patterns in the low-resolution input that could help improve the quality of prediction. In this paper, we propose and evaluate a strategy based on a deep neural network to learn a high-resolution representation from low-resolution predictions using weather forecast as a practical use case. We take a supervised learning approach, since obtaining labeled data can be done automatically. Our results show significant improvement when compared with standard practices and the strategy is still lightweight enough to run on modest computer systems.
1808.05264
http://arxiv.org/abs/1808.05264v1
http://arxiv.org/pdf/1808.05264v1.pdf
[]
[]
[]
s45bQ5bXHC
https://paperswithcode.com/paper/twowingos-a-two-wing-optimization-strategy
TwoWingOS: A Two-Wing Optimization Strategy for Evidential Claim Verification
Determining whether a given claim is supported by evidence is a fundamental NLP problem that is best modeled as Textual Entailment. However, given a large collection of text, finding evidence that could support or refute a given claim is a challenge in itself, amplified by the fact that different evidence might be needed to support or refute a claim. Nevertheless, most prior work decouples evidence identification from determining the truth value of the claim given the evidence. We propose to consider these two aspects jointly. We develop TwoWingOS (two-wing optimization strategy), a system that, while identifying appropriate evidence for a claim, also determines whether or not the claim is supported by the evidence. Given the claim, TwoWingOS attempts to identify a subset of the evidence candidates; given the predicted evidence, it then attempts to determine the truth value of the corresponding claim. We treat this challenge as coupled optimization problems, training a joint model for it. TwoWingOS offers two advantages: (i) Unlike pipeline systems, it facilitates flexible-size evidence set, and (ii) Joint training improves both the claim entailment and the evidence identification. Experiments on a benchmark dataset show state-of-the-art performance. Code: https://github.com/yinwenpeng/FEVER
1808.03465
http://arxiv.org/abs/1808.03465v2
http://arxiv.org/pdf/1808.03465v2.pdf
[ "Natural Language Inference" ]
[]
[]
GNcq_05twd
https://paperswithcode.com/paper/theory-iiib-generalization-in-deep-networks
Theory IIIb: Generalization in Deep Networks
A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of "overfitting", defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, "slow" convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the weight matrix at each layer of a deep network converges to a minimum norm solution up to a scale factor (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for ranking the generalization performance of different zero minimizers of the empirical loss.
1806.11379
http://arxiv.org/abs/1806.11379v1
http://arxiv.org/pdf/1806.11379v1.pdf
[]
[]
[]
eiMYDX-VZQ
https://paperswithcode.com/paper/improved-local-search-for-graph-edit-distance
Improved local search for graph edit distance
The graph edit distance (GED) measures the dissimilarity between two graphs as the minimal cost of a sequence of elementary operations transforming one graph into another. This measure is fundamental in many areas such as structural pattern recognition or classification. However, exactly computing GED is NP-hard. Among different classes of heuristic algorithms that were proposed to compute approximate solutions, local search based algorithms provide the tightest upper bounds for GED. In this paper, we present K-REFINE and RANDPOST. K-REFINE generalizes and improves an existing local search algorithm and performs particularly well on small graphs. RANDPOST is a general warm start framework that stochastically generates promising initial solutions to be used by any local search based GED algorithm. It is particularly efficient on large graphs. An extensive empirical evaluation demonstrates that both K-REFINE and RANDPOST perform excellently in practice.
1907.02929
https://arxiv.org/abs/1907.02929v2
https://arxiv.org/pdf/1907.02929v2.pdf
[]
[]
[]
proR34eljv
https://paperswithcode.com/paper/edge-based-blur-kernel-estimation-using
Edge-Based Blur Kernel Estimation Using Sparse Representation and Self-Similarity
Blind image deconvolution is the problem of recovering the latent image from the only observed blurry image when the blur kernel is unknown. In this paper, we propose an edge-based blur kernel estimation method for blind motion deconvolution. In our previous work, we incorporate both sparse representation and self-similarity of image patches as priors into our blind deconvolution model to regularize the recovery of the latent image. Since almost any natural image has properties of sparsity and multi-scale self-similarity, we construct a sparsity regularizer and a cross-scale non-local regularizer based on our patch priors. It has been observed that our regularizers often favor sharp images over blurry ones only for image patches of the salient edges and thus we define an edge mask to locate salient edges that we want to apply our regularizers. Experimental results on both simulated and real blurry images demonstrate that our method outperforms existing state-of-the-art blind deblurring methods even when handling very large blurs, thanks to the use of the edge mask.
1811.07161
http://arxiv.org/abs/1811.07161v1
http://arxiv.org/pdf/1811.07161v1.pdf
[ "Deblurring", "Image Deconvolution" ]
[]
[]
JN5XbPnm1A
https://paperswithcode.com/paper/sum-of-squares-lower-bounds-for-sparse-pca
Sum-of-Squares Lower Bounds for Sparse PCA
This paper establishes a statistical versus computational trade-off for solving a basic high-dimensional machine learning problem via a basic convex relaxation method. Specifically, we consider the {\em Sparse Principal Component Analysis} (Sparse PCA) problem, and the family of {\em Sum-of-Squares} (SoS, aka Lasserre/Parillo) convex relaxations. It was well known that in large dimension $p$, a planted $k$-sparse unit vector can be {\em in principle} detected using only $n \approx k\log p$ (Gaussian or Bernoulli) samples, but all {\em efficient} (polynomial time) algorithms known require $n \approx k^2$ samples. It was also known that this quadratic gap cannot be improved by the most basic {\em semi-definite} (SDP, aka spectral) relaxation, equivalent to degree-2 SoS algorithms. Here we prove that degree-4 SoS algorithms also cannot improve this quadratic gap. This average-case lower bound adds to the small collection of hardness results in machine learning for this powerful family of convex relaxation algorithms. Moreover, our design of moments (or "pseudo-expectations") for this lower bound is quite different than previous lower bounds. Establishing lower bounds for higher degree SoS algorithms remains a challenging problem.
1507.06370
http://arxiv.org/abs/1507.06370v2
http://arxiv.org/pdf/1507.06370v2.pdf
[]
[]
[]
qOhtF7WvRX
https://paperswithcode.com/paper/meteor-20-adopt-syntactic-level-paraphrase
Meteor++ 2.0: Adopt Syntactic Level Paraphrase Knowledge into Machine Translation Evaluation
This paper describes Meteor++ 2.0, our submission to the WMT19 Metric Shared Task. The well known Meteor metric improves machine translation evaluation by introducing paraphrase knowledge. However, it only focuses on the lexical level and utilizes consecutive n-grams paraphrases. In this work, we take into consideration syntactic level paraphrase knowledge, which sometimes may be skip-grams. We describe how such knowledge can be extracted from Paraphrase Database (PPDB) and integrated into Meteor-based metrics. Experiments on WMT15 and WMT17 evaluation datasets show that the newly proposed metric outperforms all previous versions of Meteor.
null
https://www.aclweb.org/anthology/W19-5357/
https://www.aclweb.org/anthology/W19-5357
[ "Machine Translation" ]
[]
[]
p_joItUqH4
https://paperswithcode.com/paper/exact-symbolic-inference-in-probabilistic
Exact Symbolic Inference in Probabilistic Programs via Sum-Product Representations
We present the Sum-Product Probabilistic Language (SPPL), a new system that automatically delivers exact solutions to a broad range of probabilistic inference queries. SPPL symbolically represents the full distribution on execution traces specified by a probabilistic program using a generalization of sum-product networks. SPPL handles continuous and discrete distributions, many-to-one numerical transformations, and a query language that includes general predicates on random variables. We formalize SPPL in terms of a novel translation strategy from probabilistic programs to a semantic domain of sum-product representations, present new algorithms for exactly conditioning on and computing probabilities of queries, and prove their soundness under the semantics. We present techniques for improving the scalability of translation and inference by automatically exploiting conditional independences and repeated structure in SPPL programs. We implement a prototype of SPPL with a modular architecture and evaluate it on a suite of common benchmarks, which establish that our system is up to 3500x faster than state-of-the-art systems for fairness verification; up to 1000x faster than state-of-the-art symbolic algebra techniques; and can compute exact probabilities of rare events in milliseconds.
2010.03485
https://arxiv.org/abs/2010.03485v1
https://arxiv.org/pdf/2010.03485v1.pdf
[ "fairness" ]
[]
[]
Md6DdlhwhP
https://paperswithcode.com/paper/multi-observation-elicitation
Multi-Observation Elicitation
We study loss functions that measure the accuracy of a prediction based on multiple data points simultaneously. To our knowledge, such loss functions have not been studied before in the area of property elicitation or in machine learning more broadly. As compared to traditional loss functions that take only a single data point, these multi-observation loss functions can in some cases drastically reduce the dimensionality of the hypothesis required. In elicitation, this corresponds to requiring many fewer reports; in empirical risk minimization, it corresponds to algorithms on a hypothesis space of much smaller dimension. We explore some examples of the tradeoff between dimensionality and number of observations, give some geometric characterizations and intuition for relating loss functions and the properties that they elicit, and discuss some implications for both elicitation and machine-learning contexts.
1706.01394
http://arxiv.org/abs/1706.01394v1
http://arxiv.org/pdf/1706.01394v1.pdf
[]
[]
[]