Columns:
paper_id: string (length 10 to 10)
paper_url: string (length 37 to 80)
title: string (length 4 to 518)
abstract: string (length 3 to 7.27k)
arxiv_id: string (length 9 to 16)
url_abs: string (length 18 to 601)
url_pdf: string (length 21 to 601)
aspect_tasks: sequence
aspect_methods: sequence
aspect_datasets: sequence
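A minimal sketch of how records with the columns above might be read, assuming (hypothetically) that they are stored as JSON Lines; the file name papers.jsonl and the helper read_records are illustrative and not taken from this dump:

import json
from typing import Any, Dict, Iterator

# Column names as listed in the schema above.
COLUMNS = [
    "paper_id", "paper_url", "title", "abstract", "arxiv_id",
    "url_abs", "url_pdf", "aspect_tasks", "aspect_methods", "aspect_datasets",
]

def read_records(path: str) -> Iterator[Dict[str, Any]]:
    """Yield one paper record per line, keeping only the known columns."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            # aspect_* fields are sequences (lists of strings), possibly empty.
            yield {col: row.get(col) for col in COLUMNS}

if __name__ == "__main__":
    for record in read_records("papers.jsonl"):  # hypothetical file name
        print(record["paper_id"], record["title"], record["aspect_tasks"])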
Wkiw3jj7vU
https://paperswithcode.com/paper/monte-carlo-tree-search-with-sampled
Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds
Monte Carlo Tree Search (MCTS), most famously used in game-play artificial intelligence (e.g., the game of Go), is a well-known strategy for constructing approximate solutions to sequential decision problems. Its primary innovation is the use of a heuristic, known as a default policy, to obtain Monte Carlo estimates of downstream values for states in a decision tree. This information is used to iteratively expand the tree towards regions of states and actions that an optimal policy might visit. However, to guarantee convergence to the optimal action, MCTS requires the entire tree to be expanded asymptotically. In this paper, we propose a new technique called Primal-Dual MCTS that utilizes sampled information relaxation upper bounds on potential actions, creating the possibility of "ignoring" parts of the tree that stem from highly suboptimal choices. This allows us to prove that despite converging to a partial decision tree in the limit, the recommended action from Primal-Dual MCTS is optimal. The new approach shows significant promise when used to optimize the behavior of a single driver navigating a graph while operating on a ride-sharing platform. Numerical experiments on a real dataset of 7,000 trips in New Jersey suggest that Primal-Dual MCTS improves upon standard MCTS by producing deeper decision trees and exhibits a reduced sensitivity to the size of the action space.
1704.05963
http://arxiv.org/abs/1704.05963v1
http://arxiv.org/pdf/1704.05963v1.pdf
[ "Game of Go" ]
[]
[]
hcBh31cgRL
https://paperswithcode.com/paper/manifold-for-machine-learning-assurance
Manifold for Machine Learning Assurance
The increasing use of machine-learning (ML) enabled systems in critical tasks fuels the quest for novel verification and validation techniques that are nonetheless grounded in accepted system assurance principles. In traditional system development, model-based techniques have been widely adopted, where the central premise is that abstract models of the required system provide a sound basis for judging its implementation. We posit an analogous approach for ML systems using an ML technique that extracts, from the high-dimensional training data implicitly describing the required system, a low-dimensional underlying structure: a manifold. The manifold is then harnessed for a range of quality assurance tasks such as test adequacy measurement, test input generation, and runtime monitoring of the target ML system. The approach is built on the variational autoencoder, an unsupervised method for learning a pair of mutually near-inverse functions between a given high-dimensional dataset and a low-dimensional representation. Preliminary experiments establish that the proposed manifold-based approach drives diversity in test data for test adequacy, yields fault-revealing yet realistic test cases for test generation, and provides an independent means to assess the trustability of the target system's output for runtime monitoring.
2002.03147
https://arxiv.org/abs/2002.03147v1
https://arxiv.org/pdf/2002.03147v1.pdf
[]
[]
[]
Pxib-x5chf
https://paperswithcode.com/paper/sequential-dirichlet-process-mixtures-of
Sequential Dirichlet Process Mixtures of Multivariate Skew t-distributions for Model-based Clustering of Flow Cytometry Data
Flow cytometry is a high-throughput technology used to quantify multiple surface and intracellular markers at the level of a single cell. This makes it possible to identify cell sub-types and to determine their relative proportions. Improvements of this technology allow millions of individual cells from a blood sample to be described using multiple markers. This results in high-dimensional datasets, whose manual analysis is highly time-consuming and poorly reproducible. While several methods have been developed to perform automatic recognition of cell populations, most of them treat and analyze each sample independently. However, in practice, individual samples are rarely independent (e.g. longitudinal studies). Here, we propose to use a Bayesian nonparametric approach with a Dirichlet process mixture (DPM) of multivariate skew $t$-distributions to perform model-based clustering of flow-cytometry data. DPM models directly estimate the number of cell populations from the data, avoiding model selection issues, and skew $t$-distributions provide robustness to outliers and to the non-elliptical shape of cell populations. To accommodate repeated measurements, we propose a sequential strategy relying on a parametric approximation of the posterior. We illustrate the good performance of our method on simulated data, on an experimental benchmark dataset, and on new longitudinal data from the DALIA-1 trial which evaluates a therapeutic vaccine against HIV. On the benchmark dataset, the sequential strategy outperforms all other methods evaluated, and similarly, leads to improved performance on the DALIA-1 data. We have made the method available for the community in the R package NPflow.
1702.04407
http://arxiv.org/abs/1702.04407v4
http://arxiv.org/pdf/1702.04407v4.pdf
[ "Model Selection" ]
[]
[]
Yf8AQq_34Q
https://paperswithcode.com/paper/a-high-quality-and-phonetic-balanced-speech
A high quality and phonetic balanced speech corpus for Vietnamese
This paper presents a high quality Vietnamese speech corpus that can be used for analyzing Vietnamese speech characteristics as well as for building speech synthesis models. The corpus consists of 5400 clean-speech utterances spoken by 12 speakers, including 6 males and 6 females. The corpus is designed with phonetic balance in mind so that it can be used for speech synthesis and, especially, speech adaptation approaches. Specifically, all speakers utter a common set of 250 phonetically balanced sentences. To increase the variety of speech contexts, each speaker also utters another 200 non-shared, phonetically balanced sentences. The speakers are selected to cover a wide range of ages and come from different regions of northern Vietnam. The audio is recorded in a soundproof studio room and sampled at 48 kHz, 16-bit PCM, mono channel.
1904.05569
http://arxiv.org/abs/1904.05569v1
http://arxiv.org/pdf/1904.05569v1.pdf
[ "Speech Synthesis" ]
[]
[]
6CQpJ_kzdV
https://paperswithcode.com/paper/state-of-the-art-economic-load-dispatch-of
State-of-the-Art Economic Load Dispatch of Power Systems Using Particle Swarm Optimization
The metaheuristic particle swarm optimization (PSO) algorithm has emerged as one of the most promising optimization techniques for solving highly constrained non-linear and non-convex optimization problems in different areas of electrical engineering. Economic operation of the power system is one of the most important areas of electrical engineering where PSO has been used efficiently in solving various issues of practical systems. In this paper, a comprehensive survey of research works in solving various aspects of economic load dispatch (ELD) problems of power system engineering using different types of PSO algorithms is presented. Five important areas of ELD problems have been identified, and the papers published in the general area of ELD using PSO have been classified into these five sections. These five areas are (i) single objective economic load dispatch, (ii) dynamic economic load dispatch, (iii) economic load dispatch with non-conventional sources, (iv) multi-objective environmental/economic dispatch, and (v) economic load dispatch of microgrids. At the end of each category, a table is provided which describes the main features of the papers in brief. Promising future works are given at the conclusion of the review.
1812.11610
http://arxiv.org/abs/1812.11610v1
http://arxiv.org/pdf/1812.11610v1.pdf
[]
[]
[]
2atVNIPWW3
https://paperswithcode.com/paper/sentiment-analysis-on-speaker-specific-speech
Sentiment Analysis on Speaker Specific Speech Data
Sentiment analysis has evolved over the past few decades; most of the work has revolved around textual sentiment analysis with text mining techniques. Audio sentiment analysis, however, is still in a nascent stage in the research community. In this proposed research, we perform sentiment analysis on speaker-discriminated speech transcripts to detect the emotions of the individual speakers involved in the conversation. We analyzed different techniques for speaker discrimination and sentiment analysis to find efficient algorithms for this task.
1802.06209
http://arxiv.org/abs/1802.06209v1
http://arxiv.org/pdf/1802.06209v1.pdf
[ "Sentiment Analysis" ]
[]
[]
WPYrTBD6rF
https://paperswithcode.com/paper/value-of-information-lattice-exploiting
Value of Information Lattice: Exploiting Probabilistic Independence for Effective Feature Subset Acquisition
We address the cost-sensitive feature acquisition problem, where misclassifying an instance is costly but the expected misclassification cost can be reduced by acquiring the values of the missing features. Because acquiring the features is costly as well, the objective is to acquire the right set of features so that the sum of the feature acquisition cost and misclassification cost is minimized. We describe the Value of Information Lattice (VOILA), an optimal and efficient feature subset acquisition framework. Unlike the common practice, which is to acquire features greedily, VOILA can reason with subsets of features. VOILA efficiently searches the space of possible feature subsets by discovering and exploiting conditional independence properties between the features and it reuses probabilistic inference computations to further speed up the process. Through empirical evaluation on five medical datasets, we show that the greedy strategy is often reluctant to acquire features, as it cannot forecast the benefit of acquiring multiple features in combination.
1401.3881
http://arxiv.org/abs/1401.3881v1
http://arxiv.org/pdf/1401.3881v1.pdf
[]
[]
[]
d5LB6Eik0F
https://paperswithcode.com/paper/segmentation-of-instances-by-hashing
Segmentation of Instances by Hashing
We propose a novel approach to the Simultaneous Detection and Segmentation problem. Using hierarchical structures, we apply an efficient and accurate procedure that exploits the hierarchy's feature information via Locality Sensitive Hashing. We build on recent work that utilizes convolutional neural networks to detect bounding boxes in an image, and then, after hashing, select the top similar hierarchical region that best fits each bounding box; we call this approach CZ Segmentation. We then refine our final segmentation results by automatic hierarchy pruning. CZ Segmentation introduces a train-free alternative to Hypercolumns. We conduct extensive experiments on the PASCAL VOC 2012 segmentation dataset, showing that CZ gives competitive state-of-the-art object segmentations.
1702.08160
http://arxiv.org/abs/1702.08160v9
http://arxiv.org/pdf/1702.08160v9.pdf
[]
[ "Max Pooling", "SVM", "Convolution", "R-CNN" ]
[]
rX15R_qfL4
https://paperswithcode.com/paper/topological-machine-learning-with-persistence
Topological Machine Learning with Persistence Indicator Functions
Techniques from computational topology, in particular persistent homology, are becoming increasingly relevant for data analysis. Their stable metrics permit the use of many distance-based data analysis methods, such as multidimensional scaling, while providing a firm theoretical ground. Many modern machine learning algorithms, however, are based on kernels. This paper presents persistence indicator functions (PIFs), which summarize persistence diagrams, i.e., feature descriptors in topological data analysis. PIFs can be calculated and compared in linear time and have many beneficial properties, such as the availability of a kernel-based similarity measure. We demonstrate their usage in common data analysis scenarios, such as confidence set estimation and classification of complex structured data.
1907.13496
https://arxiv.org/abs/1907.13496v1
https://arxiv.org/pdf/1907.13496v1.pdf
[ "Topological Data Analysis" ]
[]
[]
tS8xlxViZG
https://paperswithcode.com/paper/foundations-of-comparison-based-hierarchical
Foundations of Comparison-Based Hierarchical Clustering
We address the classical problem of hierarchical clustering, but in a framework where one does not have access to a representation of the objects or their pairwise similarities. Instead, we assume that only a set of comparisons between objects is available, that is, statements of the form "objects $i$ and $j$ are more similar than objects $k$ and $l$." Such a scenario is commonly encountered in crowdsourcing applications. The focus of this work is to develop comparison-based hierarchical clustering algorithms that do not rely on the principles of ordinal embedding. We show that single and complete linkage are inherently comparison-based and we develop variants of average linkage. We provide statistical guarantees for the different methods under a planted hierarchical partition model. We also empirically demonstrate the performance of the proposed approaches on several datasets.
1811.00928
https://arxiv.org/abs/1811.00928v2
https://arxiv.org/pdf/1811.00928v2.pdf
[]
[]
[]
2VjnHiMQHj
https://paperswithcode.com/paper/temporal-network-representation-learning
Dynamic Node Embeddings from Edge Streams
Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Such temporal networks (or edge streams) consist of a sequence of timestamped edges and are seemingly ubiquitous. Despite the importance of accurately modeling the temporal information, most embedding methods ignore it entirely or approximate the temporal network using a sequence of static snapshot graphs. In this work, we propose using the notion of temporal walks for learning dynamic embeddings from temporal networks. Temporal walks capture the temporally valid interactions (e.g., flow of information, spread of disease) in the dynamic network in a lossless fashion. Based on the notion of temporal walks, we describe a general class of embeddings called continuous-time dynamic network embeddings (CTDNEs) that completely avoid the issues and problems that arise when approximating the temporal network as a sequence of static snapshot graphs. Unlike previous work, CTDNEs learn dynamic node embeddings directly from the temporal network at the finest temporal granularity and thus use only temporally valid information. As such CTDNEs naturally support online learning of the node embeddings in a streaming real-time fashion. Finally, the experiments demonstrate the effectiveness of this class of embedding methods that leverage temporal walks as it achieves an average gain in AUC of 11.9% across all methods and graphs.
1904.06449
https://arxiv.org/abs/1904.06449v2
https://arxiv.org/pdf/1904.06449v2.pdf
[ "Representation Learning" ]
[]
[]
uGapeBjxer
https://paperswithcode.com/paper/an-oral-history-annotation-tool-for-inter
An Oral History Annotation Tool for INTER-VIEWs
We present a web-based tool for retrieving and annotating audio fragments of, e.g., interviews. Our collection contains 250 interviews with veterans of Dutch conflicts and military missions. The audio files of the interviews were disclosed using ASR technology focused on keyword retrieval. The resulting transcripts were stored in a MySQL database together with metadata, summary texts, and keywords, and carefully indexed. Retrieved fragments can be made audible and annotated. Annotations can be kept personal or be shared with other users. The tool and formats comply with CLARIN standards. A demo version of the tool is available at http://wwwlands2.let.kun.nl/spex/annotationtooldemo.
null
https://www.aclweb.org/anthology/L12-1151/
http://www.lrec-conf.org/proceedings/lrec2012/pdf/320_Paper.pdf
[ "Speech Recognition" ]
[]
[]
qBXgCt9Y8p
https://paperswithcode.com/paper/uniform-concentration-and-symmetrization-for
Uniform concentration and symmetrization for weak interactions
The method to derive uniform bounds with Gaussian and Rademacher complexities is extended to the case where the sample average is replaced by a nonlinear statistic. Tight bounds are obtained for U-statistics, smoothened L-statistics and error functionals of l2-regularized algorithms.
1902.01911
https://arxiv.org/abs/1902.01911v4
https://arxiv.org/pdf/1902.01911v4.pdf
[]
[]
[]
4ewSRQrbVL
https://paperswithcode.com/paper/l0-regularization-based-neural-network-design
L0 Regularization Based Neural Network Design and Compression
We consider the complexity of Deep Neural Networks (DNNs) and their associated massive over-parameterization. Such over-parameterization may entail susceptibility to adversarial attacks, loss of interpretability, and adverse Size, Weight and Power - Cost (SWaP-C) considerations. We ask whether there are methodical ways (regularization) to reduce complexity and how we can interpret the trade-off between the desired metric and the complexity of a DNN. Reducing complexity is directly applicable to scaling AI applications to real-world problems (especially for off-the-cloud applications). We demonstrate the presence of a knee in the trade-off curve and show how to evaluate it. We apply a form of L0 regularization to MNIST data and to signal modulation classification. We show that such regularization also captures saliency in the input space.
1905.13652
https://arxiv.org/abs/1905.13652v1
https://arxiv.org/pdf/1905.13652v1.pdf
[]
[]
[]
QlflTQdwNg
https://paperswithcode.com/paper/experiments-with-pos-tagging-code-mixed
Experiments with POS Tagging Code-mixed Indian Social Media Text
This paper presents Centre for Development of Advanced Computing Mumbai's (CDACM) submission to the NLP Tools Contest on Part-Of-Speech (POS) Tagging For Code-mixed Indian Social Media Text (POSCMISMT) 2015 (collocated with ICON 2015). We submitted results for Hindi (hi), Bengali (bn), and Telugu (te) mixed with English (en). In this paper, we describe the POS tagging approaches we exploited for this task. Machine learning was used to POS tag the mixed-language text. For POS tagging, we tried distributed representations of words in vector space (word2vec) for feature extraction, together with log-linear models. We report our work on all three languages, hi, bn, and te, mixed with en.
1610.09799
http://arxiv.org/abs/1610.09799v1
http://arxiv.org/pdf/1610.09799v1.pdf
[ "Part-Of-Speech Tagging" ]
[]
[]
Uc43RCeqGb
https://paperswithcode.com/paper/multi-agent-reinforcement-learning-as-a
Multi-Agent Reinforcement Learning as a Computational Tool for Language Evolution Research: Historical Context and Future Challenges
Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL). Current contributions are however still relatively disconnected from the earlier theoretical and computational literature aiming at understanding how language might have emerged from a prelinguistic substance. The goal of this paper is to position recent MARL contributions within the historical context of language evolution research, as well as to extract from this theoretical and computational background a few challenges for future research.
2002.08878
https://arxiv.org/abs/2002.08878v1
https://arxiv.org/pdf/2002.08878v1.pdf
[ "Multi-agent Reinforcement Learning" ]
[]
[]
g-zFl052Pg
https://paperswithcode.com/paper/re-scale-boosting-for-regression-and
Re-scale boosting for regression and classification
Boosting is a learning scheme that combines weak prediction rules to produce a strong composite estimator, with the underlying intuition that one can obtain accurate prediction rules by combining "rough" ones. Although boosting is proved to be consistent and overfitting-resistant, its numerical convergence rate is relatively slow. The aim of this paper is to develop a new boosting strategy, called the re-scale boosting (RBoosting), to accelerate the numerical convergence rate and, consequently, improve the learning performance of boosting. Our studies show that RBoosting possesses the almost optimal numerical convergence rate in the sense that, up to a logarithmic factor, it can reach the minimax nonlinear approximation rate. We then use RBoosting to tackle both the classification and regression problems, and deduce a tight generalization error estimate. The theoretical and experimental results show that RBoosting outperforms boosting in terms of generalization.
1505.01371
http://arxiv.org/abs/1505.01371v1
http://arxiv.org/pdf/1505.01371v1.pdf
[]
[]
[]
AmYRsUDclu
https://paperswithcode.com/paper/descriptor-ensemble-an-unsupervised-approach
Descriptor Ensemble: An Unsupervised Approach to Descriptor Fusion in the Homography Space
With the aim to improve the performance of feature matching, we present an unsupervised approach to fuse various local descriptors in the space of homographies. Inspired by the observation that the homographies of correct feature correspondences vary smoothly along the spatial domain, our approach stands on the unsupervised nature of feature matching, and can select a good descriptor for matching each feature point. Specifically, the homography space serves as the common domain, in which a correspondence obtained by any descriptor is considered as a point, for integrating various heterogeneous descriptors. Both geometric coherence and spatial continuity among correspondences are considered via computing their geodesic distances in the space. In this way, mutual verification across different descriptors is allowed, and correct correspondences will be highlighted with a high degree of consistency (i.e., short geodesic distances here). It follows that one-class SVM can be applied to identifying these correct correspondences, and boosts the performance of feature matching. The proposed approach is comprehensively compared with the state-of-the-art approaches, and evaluated on four benchmarks of image matching. The promising results manifest its effectiveness.
1412.4196
http://arxiv.org/abs/1412.4196v1
http://arxiv.org/pdf/1412.4196v1.pdf
[]
[]
[]
B1VPE1keU0
https://paperswithcode.com/paper/towards-solving-the-multiple-extension
Towards Solving the Multiple Extension Problem: Combining Defaults and Probabilities
The multiple extension problem arises frequently in diagnostic and default inference. That is, we can often use any of a number of sets of defaults or possible hypotheses to explain observations or make predictions. In default inference, some extensions seem to be simply wrong and we use qualitative techniques to weed out the unwanted ones. In the area of diagnosis, however, the multiple explanations may all seem reasonable, however improbable. Choosing among them is a matter of quantitative preference. Quantitative preference works well in diagnosis when knowledge is modelled causally. Here we suggest an approach that combines probabilities and defaults in a single unified framework that retains the semantics of diagnosis as the construction of explanations from a fixed set of possible hypotheses. We can then compute probabilities incrementally as we construct explanations. We describe a branch and bound algorithm that maintains a set of all partial explanations while exploring the most promising one first. A most probable explanation is found first if explanations are partially ordered.
1304.2745
http://arxiv.org/abs/1304.2745v1
http://arxiv.org/pdf/1304.2745v1.pdf
[]
[]
[]
9HRBtNfCFe
https://paperswithcode.com/paper/visual-explanation-by-interpretation
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks
Interpretation and explanation of deep models is critical towards wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation and explanation in which, given a pretrained model, we automatically identify internal features relevant for the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of methods for visual explanation. Experiments on the MNIST, ILSVRC12, Fashion144k and an8Flower datasets show that our method produces detailed explanations with good coverage of relevant features of the classes of interest.
1712.06302
http://arxiv.org/abs/1712.06302v3
http://arxiv.org/pdf/1712.06302v3.pdf
[]
[]
[]
Awk3Hf6s8B
https://paperswithcode.com/paper/eeg-based-communication-with-a-predictive
EEG-based Communication with a Predictive Text Algorithm
Several changes occur in the brain in response to voluntary and involuntary activities performed by a person. The ability to retrieve data from the brain within a time space provides a basis for in-depth analyses that offer insight on what changes occur in the brain during its decision-making processes. In this work, we present the technical description and software implementation of an electroencephalographic (EEG) based communication system. We read EEG data in real-time with which we compute the likelihood that a voluntary eye blink has been made by a person and use the decision to trigger buttons on a user interface in order to produce text. Relevant texts are suggested using a modification of the T9 algorithm. Our results indicate that EEG-based technology can be effectively applied in facilitating speech for people with severe speech and muscular disabilities, providing a foundation for future work in the area.
1812.05945
https://arxiv.org/abs/1812.05945v4
https://arxiv.org/pdf/1812.05945v4.pdf
[ "Decision Making", "EEG" ]
[]
[]
3NgioPbntc
https://paperswithcode.com/paper/an-enhanced-computational-feature-selection
An enhanced computational feature selection method for medical synonym identification via bilingualism and multi-corpus training
Medical synonym identification has been an important part of medical natural language processing (NLP). However, in the field of Chinese medical synonym identification, there are problems like low precision and low recall rate. To solve the problem, in this paper, we propose a method for identifying Chinese medical synonyms. We first selected 13 features including Chinese and English features. Then we studied the synonym identification results of each feature alone and different combinations of the features. Through the comparison among identification results, we present an optimal combination of features for Chinese medical synonym identification. Experiments show that our selected features have achieved 97.37% precision rate, 96.00% recall rate and 97.33% F1 score.
1812.01879
http://arxiv.org/abs/1812.01879v1
http://arxiv.org/pdf/1812.01879v1.pdf
[ "Feature Selection" ]
[]
[]
__5TnHb413
https://paperswithcode.com/paper/personabank-a-corpus-of-personal-narratives
PersonaBank: A Corpus of Personal Narratives and Their Story Intention Graphs
We present a new corpus, PersonaBank, consisting of 108 personal stories from weblogs that have been annotated with their Story Intention Graphs, a deep representation of the fabula of a story. We describe the topics of the stories and the basis of the Story Intention Graph representation, as well as the process of annotating the stories to produce the Story Intention Graphs and the challenges of adapting the tool to this new personal narrative domain. We also discuss how the corpus can be used in applications that retell the story using different styles of tellings and co-tellings, or as a content planner.
1708.09082
http://arxiv.org/abs/1708.09082v1
http://arxiv.org/pdf/1708.09082v1.pdf
[]
[]
[]
OBokwRJ1bL
https://paperswithcode.com/paper/zero-shot-crowd-behavior-recognition
Zero-Shot Crowd Behavior Recognition
Understanding crowd behavior in video is challenging for computer vision. There have been increasing attempts on modeling crowded scenes by introducing ever larger property ontologies (attributes) and annotating ever larger training datasets. However, in contrast to still images, manually annotating video attributes needs to consider spatiotemporal evolution which is inherently much harder and more costly. Critically, the most interesting crowd behaviors captured in surveillance videos (e.g., street fighting, flash mobs) are either rare, thus have few examples for model training, or unseen previously. Existing crowd analysis techniques are not readily scalable to recognize novel (unseen) crowd behaviors. To address this problem, we investigate and develop methods for recognizing visual crowd behavioral attributes without any training samples, i.e., zero-shot learning crowd behavior recognition. To that end, we relax the common assumption that each individual crowd video instance is only associated with a single crowd attribute. Instead, our model learns to jointly recognize multiple crowd behavioral attributes in each video instance by exploring multiattribute cooccurrence as contextual knowledge for optimizing individual crowd attribute recognition. Joint multilabel attribute prediction in zero-shot learning is inherently nontrivial because cooccurrence statistics does not exist for unseen attributes. To solve this problem, we learn to predict cross-attribute cooccurrence from both online text corpus and multilabel annotation of videos with known attributes. Our experiments show that this approach to modeling multiattribute context not only improves zero-shot crowd behavior recognition on the WWW crowd video dataset, but also generalizes to novel behavior (violence) detection cross-domain in the Violence Flow video dataset.
1908.05877
https://arxiv.org/abs/1908.05877v1
https://arxiv.org/pdf/1908.05877v1.pdf
[ "Zero-Shot Learning" ]
[]
[]
dPpGqGkLu_
https://paperswithcode.com/paper/joint-matrix-tensor-factorization-for
Joint Matrix-Tensor Factorization for Knowledge Base Inference
While several matrix factorization (MF) and tensor factorization (TF) models have been proposed for knowledge base (KB) inference, they have rarely been compared across various datasets. Is there a single model that performs well across datasets? If not, what characteristics of a dataset determine the performance of MF and TF models? Is there a joint TF+MF model that performs robustly on all datasets? We perform an extensive evaluation to compare popular KB inference models across popular datasets in the literature. In addition to answering the questions above, we remove a limitation in the standard evaluation protocol for MF models, propose an extension to MF models so that they can better handle out-of-vocabulary (OOV) entity pairs, and develop a novel combination of TF and MF models. We also analyze and explain the results based on models and dataset characteristics. Our best model is robust, and obtains strong results across all datasets.
1706.00637
http://arxiv.org/abs/1706.00637v1
http://arxiv.org/pdf/1706.00637v1.pdf
[]
[]
[]
dWp5m_vnrW
https://paperswithcode.com/paper/predicting-human-generated-bitstreams-using
Predicting human-generated bitstreams using classical and quantum models
A school of thought contends that human decision making exhibits quantum-like logic. While it is not known whether the brain may indeed be driven by actual quantum mechanisms, some researchers suggest that the decision logic is phenomenologically non-classical. This paper develops and implements an empirical framework to explore this view. We emulate binary decision-making using low width, low depth, parameterized quantum circuits. Here, entanglement serves as a resource for pattern analysis in the context of a simple bit-prediction game. We evaluate a hybrid quantum-assisted machine learning strategy where quantum processing is used to detect correlations in the bitstreams while parameter updates and class inference are performed by classical post-processing of measurement results. Simulation results indicate that a family of two-qubit variational circuits is sufficient to achieve the same bit-prediction accuracy as the best traditional classical solution such as neural nets or logistic autoregression. Thus, short of establishing a provable "quantum advantage" in this simple scenario, we give evidence that the classical predictability analysis of a human-generated bitstream can be achieved by small quantum models.
2004.04671
https://arxiv.org/abs/2004.04671v1
https://arxiv.org/pdf/2004.04671v1.pdf
[ "Decision Making" ]
[]
[]
oPJetzTPiO
https://paperswithcode.com/paper/review-machine-learning-techniques-for
Review. Machine learning techniques for traffic sign detection
An automatic road sign detection system localizes road signs within images captured by an on-board camera of a vehicle and supports the driver in properly operating the vehicle. Most existing algorithms include a preprocessing step, a feature extraction step, and a detection step. This paper arranges the methods applied to road sign detection into two groups: general machine learning and neural networks. In this review, the issues related to automatic road sign detection are addressed, the popular existing methods developed to tackle the road sign detection problem are reviewed, and a comparison of the features of these methods is presented.
1712.04391
http://arxiv.org/abs/1712.04391v2
http://arxiv.org/pdf/1712.04391v2.pdf
[ "Traffic Sign Detection" ]
[]
[]
pX8naWpi4A
https://paperswithcode.com/paper/joint-person-re-identification-and-camera
Joint Person Re-identification and Camera Network Topology Inference in Multiple Cameras
Person re-identification is the task of recognizing or identifying a person across multiple views in multi-camera networks. Although there has been much progress in person re-identification, person re-identification in large-scale multi-camera networks still remains a challenging task because of the large spatio-temporal uncertainty and high complexity due to a large number of cameras and people. To handle these difficulties, additional information such as camera network topology should be provided, which is also difficult to automatically estimate, unfortunately. In this study, we propose a unified framework which jointly solves both person re-identification and camera network topology inference problems with minimal prior knowledge about the environments. The proposed framework takes general multi-camera network environments into account and can be applied to online person re-identification in large-scale multi-camera networks. In addition, to effectively show the superiority of the proposed framework, we provide a new person re-identification dataset with full annotations, named SLP, captured in the multi-camera network consisting of nine non-overlapping cameras. Experimental results using our person re-identification and public datasets show that the proposed methods are promising for both person re-identification and camera topology inference tasks.
1710.00983
http://arxiv.org/abs/1710.00983v1
http://arxiv.org/pdf/1710.00983v1.pdf
[ "Person Re-Identification" ]
[]
[]
IVah8gdQ4h
https://paperswithcode.com/paper/willump-a-statistically-aware-end-to-end
Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
Systems for ML inference are widely deployed today, but they typically optimize ML inference workloads using techniques designed for conventional data serving workloads and miss critical opportunities to leverage the statistical nature of ML. In this paper, we present Willump, an optimizer for ML inference that introduces two statistically-motivated optimizations targeting ML applications whose performance bottleneck is feature computation. First, Willump automatically cascades feature computation for classification queries: Willump classifies most data inputs using only high-value, low-cost features selected through empirical observations of ML model performance, improving query performance by up to 5x without statistically significant accuracy loss. Second, Willump accurately approximates ML top-K queries, discarding low-scoring inputs with an automatically constructed approximate model and then ranking the remainder with a more powerful model, improving query performance by up to 10x with minimal accuracy loss. Willump automatically tunes these optimizations' parameters to maximize query performance while meeting an accuracy target. Moreover, Willump complements these statistical optimizations with compiler optimizations to automatically generate fast inference code for ML applications. We show that Willump improves the end-to-end performance of real-world ML inference pipelines curated from major data science competitions by up to 16x without statistically significant loss of accuracy.
1906.01974
https://arxiv.org/abs/1906.01974v3
https://arxiv.org/pdf/1906.01974v3.pdf
[]
[]
[]
IT9uFTBKeN
https://paperswithcode.com/paper/a-causal-and-or-graph-model-for-visibility
A Causal And-Or Graph Model for Visibility Fluent Reasoning in Tracking Interacting Objects
Tracking humans that are interacting with other subjects or the environment remains unsolved in visual tracking, because the visibility of the humans of interest in videos is unknown and might vary over time. In particular, it is still difficult for state-of-the-art human trackers to recover complete human trajectories in crowded scenes with frequent human interactions. In this work, we consider the visibility status of a subject as a fluent variable, whose change is mostly attributed to the subject's interaction with the surroundings, e.g., crossing behind another object, entering a building, or getting into a vehicle. We introduce a Causal And-Or Graph (C-AOG) to represent the causal-effect relations between an object's visibility fluent and its activities, and develop a probabilistic graph model to jointly reason about the visibility fluent change (e.g., from visible to invisible) and track humans in videos. We formulate this joint task as an iterative search for a feasible causal graph structure that enables fast search algorithms, e.g., the dynamic programming method. We apply the proposed method to challenging video sequences to evaluate its capabilities of estimating visibility fluent changes of subjects and tracking subjects of interest over time. Results with comparisons demonstrate that our method outperforms alternative trackers and can recover complete trajectories of humans in complicated scenarios with frequent human interactions.
1709.05437
http://arxiv.org/abs/1709.05437v2
http://arxiv.org/pdf/1709.05437v2.pdf
[ "Visual Tracking" ]
[]
[]
3PHYATZeA4
https://paperswithcode.com/paper/comparison-of-14-different-families-of
Comparison of 14 different families of classification algorithms on 115 binary datasets
We tested 14 very different classification algorithms (random forest, gradient boosting machines, SVM - linear, polynomial, and RBF - 1-hidden-layer neural nets, extreme learning machines, k-nearest neighbors and a bagging of knn, naive Bayes, learning vector quantization, elastic net logistic regression, sparse linear discriminant analysis, and a boosting of linear classifiers) on 115 real life binary datasets. We followed the Demsar analysis and found that the three best classifiers (random forest, gbm and RBF SVM) are not significantly different from each other. We also discuss that a change of less than 0.0112 in the error rate should be considered an irrelevant change, and used a Bayesian ANOVA analysis to conclude that with high probability the differences between these three classifiers are not of practical consequence. We also verified the execution time of "standard implementations" of these algorithms and concluded that RBF SVM is the fastest (significantly so) both in training time and in training plus testing time.
1606.00930
http://arxiv.org/abs/1606.00930v1
http://arxiv.org/pdf/1606.00930v1.pdf
[ "Quantization" ]
[ "SVM" ]
[]
OY7q8X_ObW
https://paperswithcode.com/paper/partially-linear-additive-gaussian-graphical
Partially Linear Additive Gaussian Graphical Models
We propose a partially linear additive Gaussian graphical model (PLA-GGM) for the estimation of associations between random variables distorted by observed confounders. Model parameters are estimated using an $L_1$-regularized maximal pseudo-profile likelihood estimator (MaPPLE) for which we prove $\sqrt{n}$-sparsistency. Importantly, our approach avoids parametric constraints on the effects of confounders on the estimated graphical model structure. Empirically, the PLA-GGM is applied to both synthetic and real-world datasets, demonstrating superior performance compared to competing methods.
1906.03362
https://arxiv.org/abs/1906.03362v1
https://arxiv.org/pdf/1906.03362v1.pdf
[]
[]
[]
QcSUzjWmlO
https://paperswithcode.com/paper/direct-shape-regression-networks-for-end-to
Direct Shape Regression Networks for End-to-End Face Alignment
Face alignment has been extensively studied in computer vision community due to its fundamental role in facial analysis, but it remains an unsolved problem. The major challenges lie in the highly nonlinear relationship between face images and associated facial shapes, which is coupled by underlying correlation of landmarks. Existing methods mainly rely on cascaded regression, suffering from intrinsic shortcomings, e.g., strong dependency on initialization and failure to exploit landmark correlations. In this paper, we propose the direct shape regression network (DSRN) for end-to-end face alignment by jointly handling the aforementioned challenges in a unified framework. Specifically, by deploying doubly convolutional layer and by using the Fourier feature pooling layer proposed in this paper, DSRN efficiently constructs strong representations to disentangle highly nonlinear relationships between images and shapes; by incorporating a linear layer of low-rank learning, DSRN effectively encodes correlations of landmarks to improve performance. DSRN leverages the strengths of kernels for nonlinear feature extraction and neural networks for structured prediction, and provides the first end-to-end learning architecture for direct face alignment. Its effectiveness and generality are validated by extensive experiments on five benchmark datasets, including AFLW, 300W, CelebA, MAFL, and 300VW. All empirical results demonstrate that DSRN consistently produces high performance and in most cases surpasses state-of-the-art.
null
http://openaccess.thecvf.com/content_cvpr_2018/html/Miao_Direct_Shape_Regression_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Miao_Direct_Shape_Regression_CVPR_2018_paper.pdf
[ "Face Alignment", "Structured Prediction" ]
[ "Linear Layer" ]
[]
yvT9gVprKd
https://paperswithcode.com/paper/can-machine-learning-identify-interesting
Can machine learning identify interesting mathematics? An exploration using empirically observed laws
We explore the possibility of using machine learning to identify interesting mathematical structures by using certain quantities that serve as fingerprints. In particular, we extract features from integer sequences using two empirical laws: Benford's law and Taylor's law and experiment with various classifiers to identify whether a sequence is, for example, nice, important, multiplicative, easy to compute or related to primes or palindromes.
1805.07431
http://arxiv.org/abs/1805.07431v3
http://arxiv.org/pdf/1805.07431v3.pdf
[]
[]
[]
FTdSWaIbjW
https://paperswithcode.com/paper/trident-efficient-4pc-framework-for-privacy
Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning
Machine learning has started to be deployed in fields such as healthcare and finance, which propelled the need for and growth of privacy-preserving machine learning (PPML). We propose an actively secure four-party protocol (4PC), and a framework for PPML, showcasing its applications on four of the most widely-known machine learning algorithms -- Linear Regression, Logistic Regression, Neural Networks, and Convolutional Neural Networks. Our 4PC protocol tolerating at most one malicious corruption is practically efficient as compared to the existing works. We use the protocol to build an efficient mixed-world framework (Trident) to switch between the Arithmetic, Boolean, and Garbled worlds. Our framework operates in the offline-online paradigm over rings and is instantiated in an outsourced setting for machine learning. Also, we propose conversions especially relevant to privacy-preserving machine learning. The highlights of our framework include using a minimal number of expensive circuits overall as compared to ABY3. This can be seen in our technique for truncation, which does not affect the online cost of multiplication and removes the need for any circuits in the offline phase. Our B2A conversion has an improvement of $\mathbf{7} \times$ in rounds and $\mathbf{18} \times$ in the communication complexity. In addition to these, all of the special conversions for machine learning, e.g. Secure Comparison, achieve constant round complexity. The practicality of our framework is argued through improvements in the benchmarking of the aforementioned algorithms when compared with ABY3. All the protocols are implemented over a 64-bit ring in both LAN and WAN settings. Our improvements go up to $\mathbf{187} \times$ for the training phase and $\mathbf{158} \times$ for the prediction phase when observed over LAN and WAN.
1912.02631
https://arxiv.org/abs/1912.02631v1
https://arxiv.org/pdf/1912.02631v1.pdf
[]
[ "Linear Regression", "Logistic Regression" ]
[]
MfA867sOzX
https://paperswithcode.com/paper/estimator-vectors-oov-word-embeddings-based
Estimator Vectors: OOV Word Embeddings based on Subword and Context Clue Estimates
Semantic representations of words have been successfully extracted from unlabeled corpuses using neural network models like word2vec. These representations are generally high quality and are computationally inexpensive to train, making them popular. However, these approaches generally fail to approximate out of vocabulary (OOV) words, a task humans can do quite easily, using word roots and context clues. This paper proposes a neural network model that learns high quality word representations, subword representations, and context clue representations jointly. Learning all three types of representations together enhances the learning of each, leading to enriched word vectors, along with strong estimates for OOV words, via the combination of the corresponding context clue and subword embeddings. Our model, called Estimator Vectors (EV), learns strong word embeddings and is competitive with state of the art methods for OOV estimation.
1910.10491
https://arxiv.org/abs/1910.10491v1
https://arxiv.org/pdf/1910.10491v1.pdf
[ "Word Embeddings" ]
[]
[]
2iycAkw35K
https://paperswithcode.com/paper/customized-nonlinear-bandits-for-online
Customized Nonlinear Bandits for Online Response Selection in Neural Conversation Models
Dialog response selection is an important step towards natural response generation in conversational agents. Existing work on neural conversational models mainly focuses on offline supervised learning using a large set of context-response pairs. In this paper, we focus on online learning of response selection in retrieval-based dialog systems. We propose a contextual multi-armed bandit model with a nonlinear reward function that uses distributed representation of text for online response selection. A bidirectional LSTM is used to produce the distributed representations of dialog context and responses, which serve as the input to a contextual bandit. In learning the bandit, we propose a customized Thompson sampling method that is applied to a polynomial feature space in approximating the reward. Experimental results on the Ubuntu Dialogue Corpus demonstrate significant performance gains of the proposed method over conventional linear contextual bandits. Moreover, we report encouraging response selection performance of the proposed neural bandit model using the Recall@k metric for a small set of online training samples.
1711.08493
http://arxiv.org/abs/1711.08493v1
http://arxiv.org/pdf/1711.08493v1.pdf
[ "Multi-Armed Bandits" ]
[ "Sigmoid Activation", "Tanh Activation", "LSTM" ]
[]
RGnlMdLApk
https://paperswithcode.com/paper/lightweight-and-unobtrusive-privacy
Lightweight and Unobtrusive Data Obfuscation at IoT Edge for Remote Inference
Executing deep neural networks for inference on the server-class or cloud backend based on data generated at the edge of Internet of Things is desirable due primarily to the limited compute power of edge devices and the need to protect the confidentiality of the inference neural networks. However, such a remote inference scheme incurs concerns regarding the privacy of the inference data transmitted by the edge devices to the curious backend. This paper presents a lightweight and unobtrusive approach to obfuscate the inference data at the edge devices. It is lightweight in that the edge device only needs to execute a small-scale neural network; it is unobtrusive in that the edge device does not need to indicate whether obfuscation is applied. Extensive evaluation by three case studies of free spoken digit recognition, handwritten digit recognition, and American sign language recognition shows that our approach effectively protects the confidentiality of the raw forms of the inference data while effectively preserving the backend's inference accuracy.
1912.09859
https://arxiv.org/abs/1912.09859v3
https://arxiv.org/pdf/1912.09859v3.pdf
[ "Handwritten Digit Recognition", "Sign Language Recognition" ]
[]
[]
vnD0rmezSX
https://paperswithcode.com/paper/image-based-vehicle-analysis-using-deep
Image-based Vehicle Analysis using Deep Neural Network: A Systematic Study
We address the vehicle detection and classification problems using Deep Neural Network (DNN) approaches. Here we answer questions that are specific to our application, including how to utilize a DNN for vehicle detection, what features are useful for vehicle classification, and how to extend a model trained on a limited-size dataset to cases of extreme lighting conditions. In answering these questions, we propose an approach that outperforms state-of-the-art methods and achieves promising results on images with extreme lighting conditions.
1601.01145
http://arxiv.org/abs/1601.01145v2
http://arxiv.org/pdf/1601.01145v2.pdf
[]
[]
[]
QlLRGybxBN
https://paperswithcode.com/paper/dataset2vec-learning-dataset-meta-features
Dataset2Vec: Learning Dataset Meta-Features
Meta-learning is a machine learning approach that utilizes prior learning experiences to expedite the learning process on unseen tasks. For example, after having chosen hyperparameters for dozens of different learning tasks, one would like to learn how to choose them for the next task at hand. As a data-driven approach, meta-learning requires meta-features that represent the primary learning tasks or datasets. Traditionally, a fixed set of dataset statistics is engineered by domain experts to represent such a learning task or dataset. More recently, autoencoders have been employed to learn meta-features. Both approaches are heavily limited: the set of engineered dataset meta-features is limited in expressivity, while the autoencoder based meta-feature extractors are limited to datasets sharing the same schema. In this paper we propose a meta-feature extractor called Dataset2Vec that combines the versatility of engineered dataset meta-features with the expressivity of meta-features learned by deep neural networks. Primary learning tasks or datasets are represented as hierarchical sets, i.e. as a set of predictor/target pairs, and then a DeepSet architecture is employed to regress meta-features on them. As most meta-learning tasks have only a limited number of meta-instances and thus learning such a meta-feature extractor from a limited data foundation would be difficult, we propose a novel auxiliary meta-learning task with abundant data called dataset similarity learning that aims to predict if two batches stem from the same dataset or different ones. In an experiment on a large-scale hyperparameter optimization task for 97 UCI datasets with varying schemas as a meta-learning task, we show that the meta-features of Dataset2Vec outperform the expert engineered meta-features and thus demonstrate the usefulness of learned meta-features for datasets with varying schemas for the first time.
1905.11063
https://arxiv.org/abs/1905.11063v3
https://arxiv.org/pdf/1905.11063v3.pdf
[ "Auxiliary Learning", "Few-Shot Learning", "Hyperparameter Optimization", "Meta-Learning" ]
[]
[]
WvzfAV9tm7
https://paperswithcode.com/paper/variational-bayes-approximations-for
Variational Bayes Approximations for Clustering via Mixtures of Normal Inverse Gaussian Distributions
Parameter estimation for model-based clustering using a finite mixture of normal inverse Gaussian (NIG) distributions is achieved through variational Bayes approximations. Univariate NIG mixtures and multivariate NIG mixtures are considered. The use of variational Bayes approximations here is a substantial departure from the traditional EM approach and alleviates some of the associated computational complexities and uncertainties. Our variational algorithm is applied to simulated and real data. The paper concludes with discussion and suggestions for future work.
1309.1901
http://arxiv.org/abs/1309.1901v1
http://arxiv.org/pdf/1309.1901v1.pdf
[]
[]
[]
UzCZg8NED6
https://paperswithcode.com/paper/a-hybrid-coa-dea-method-for-solving-multi
A hybrid COA-DEA method for solving multi-objective problems
The Cuckoo optimization algorithm (COA) was developed for solving single-objective problems and cannot be used for solving multi-objective problems. Therefore, a multi-objective cuckoo optimization algorithm based on data envelopment analysis (DEA) is developed in this paper; it can obtain efficient Pareto frontiers. The algorithm is based on the CCR model of DEA and its output-oriented approach. The selection criterion for the next iteration of the proposed hybrid method is higher efficiency, so the profit function of the COA is replaced by the efficiency value obtained from DEA. This algorithm is compared with other methods using several test problems. The results show that using the COA and DEA approach for solving multi-objective problems increases the speed and accuracy of the generated solutions.
1509.00595
http://arxiv.org/abs/1509.00595v1
http://arxiv.org/pdf/1509.00595v1.pdf
[]
[]
[]
EWszwxXZ62
https://paperswithcode.com/paper/computing-equilibria-in-binary-networked
Computing Equilibria in Binary Networked Public Goods Games
Public goods games study the incentives of individuals to contribute to a public good and their behaviors in equilibria. In this paper, we examine a specific type of public goods game where players are networked and each has binary actions, and focus on the algorithmic aspects of such games. First, we show that checking the existence of a pure-strategy Nash equilibrium is NP-Complete. We then identify tractable instances based on restrictions of either utility functions or of the underlying graphical structure. In certain cases, we also show that we can efficiently compute a socially optimal Nash equilibrium. Finally, we propose a heuristic approach for computing approximate equilibria in general binary networked public goods games, and experimentally demonstrate its effectiveness.
1911.05788
https://arxiv.org/abs/1911.05788v1
https://arxiv.org/pdf/1911.05788v1.pdf
[]
[]
[]
VqzeBEI8U9
https://paperswithcode.com/paper/on-understanding-and-machine-understanding
On Understanding and Machine Understanding
In the present paper, we try to propose a self-similar network theory for basic understanding. By extending natural languages to a kind of so-called ideally sufficient language, we can take a few steps toward investigating language searching and language understanding in AI. Image understanding, and the familiarity of the brain with its surrounding environment, are also discussed. Group effects are discussed by addressing the essence of the power of influences and by constructing the influence network of a society. We also give a discussion of inspirations.
1103.5034
http://arxiv.org/abs/1103.5034v2
http://arxiv.org/pdf/1103.5034v2.pdf
[]
[]
[]
4XO4oO8G7j
https://paperswithcode.com/paper/on-the-metrics-and-adaptation-methods-for
On the Metrics and Adaptation Methods for Domain Divergences of sEMG-based Gesture Recognition
We propose a new metric to measure domain divergence and a new domain adaptation method for time-series classification. The metric belongs to the class of probability-distribution-based metrics, is transductive, and does not assume the presence of source data samples. The two-stage method utilizes an improved autoregressive, RNN-based architecture with a deep/non-linear transformation. We assess our metric and the performance of our model in the context of sEMG/EMG-based gesture recognition under inter-session and inter-subject domain shifts.
1912.08914
https://arxiv.org/abs/1912.08914v1
https://arxiv.org/pdf/1912.08914v1.pdf
[ "Domain Adaptation", "Gesture Recognition", "Time Series", "Time Series Classification" ]
[]
[]
yPMaaBrtyi
https://paperswithcode.com/paper/seeding-the-initial-population-of-multi
Seeding the Initial Population of Multi-Objective Evolutionary Algorithms: A Computational Study
Most experimental studies initialize the population of evolutionary algorithms with random genotypes. In practice, however, optimizers are typically seeded with good candidate solutions either previously known or created according to some problem-specific method. This "seeding" has been studied extensively for single-objective problems. For multi-objective problems, however, very little literature is available on the approaches to seeding and their individual benefits and disadvantages. In this article, we are trying to narrow this gap via a comprehensive computational study on common real-valued test functions. We investigate the effect of two seeding techniques for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8 objectives. We observe that some functions (e.g., DTLZ4 and the LZ family) benefit significantly from seeding, while others (e.g., WFG) profit less. The advantage of seeding also depends on the examined algorithm.
1412.0307
http://arxiv.org/abs/1412.0307v1
http://arxiv.org/pdf/1412.0307v1.pdf
[]
[]
[]
HV26JzLjPG
https://paperswithcode.com/paper/190501996
Neural Machine Translation with Recurrent Highway Networks
Recurrent Neural Networks have lately gained a lot of popularity in language modelling tasks, especially in neural machine translation (NMT). Very recent NMT models are based on the Encoder-Decoder framework, where a deep LSTM-based encoder projects the source sentence to a fixed-dimensional vector and another deep LSTM then decodes the target sentence from that vector. However, there has been very little work on exploring architectures that have more than one layer in space (i.e., in each time step). This paper examines the effectiveness of simple Recurrent Highway Networks (RHN) in NMT tasks. The model uses a Recurrent Highway Network in the encoder and decoder, with attention. We also explore a reconstructor model to improve adequacy. We demonstrate the effectiveness of all three approaches on the IWSLT English-Vietnamese dataset. We see that RHN performs on par with LSTM-based models, and even better in some cases, and that deep RHN models are easier to train than deep LSTM-based models because of the highway connections. The paper also investigates the effect of increasing recurrent depth in each time step.
1905.01996
http://arxiv.org/abs/1905.01996v1
http://arxiv.org/pdf/1905.01996v1.pdf
[ "Language Modelling", "Machine Translation" ]
[ "Sigmoid Activation", "Tanh Activation", "LSTM" ]
[]
WWM9tm4mha
https://paperswithcode.com/paper/active-learning-by-greedy-split-and-label
Active Learning by Greedy Split and Label Exploration
Annotating large unlabeled datasets can be a major bottleneck for machine learning applications. We introduce a scheme for inferring labels of unlabeled data at a fraction of the cost of labeling the entire dataset. We refer to the scheme as greedy split and label exploration (GSAL). GSAL greedily queries an oracle (or human labeler) and partitions a dataset to find data subsets that have mostly the same label. GSAL can then infer labels by majority vote of the known labels in each subset. GSAL makes the decision to split or label from a subset by maximizing a lower bound on the expected number of correctly labeled examples. GSAL improves upon existing hierarchical labeling schemes by using supervised models to partition the data, therefore avoiding reliance on unsupervised clustering methods that may not accurately group data by label. We design GSAL with strategies to avoid bias that could be introduced through this adaptive partitioning. We evaluate GSAL on labeling of three datasets and find that it outperforms existing strategies for adaptive labeling.
1906.07046
https://arxiv.org/abs/1906.07046v1
https://arxiv.org/pdf/1906.07046v1.pdf
[ "Active Learning" ]
[]
[]
AikB9i6dyD
https://paperswithcode.com/paper/an-empirical-study-of-pretrained
An empirical study of pretrained representations for few-shot classification
Recent algorithms with state-of-the-art few-shot classification results start their procedure by computing data features output by a large pretrained model. In this paper we systematically investigate which models provide the best representations for a few-shot image classification task when pretrained on the Imagenet dataset. We test their representations when used as the starting point for different few-shot classification algorithms. We observe that models trained on a supervised classification task have higher performance than models trained in an unsupervised manner even when transferred to out-of-distribution datasets. Models trained with adversarial robustness transfer better, while having slightly lower accuracy than supervised models.
1910.01319
https://arxiv.org/abs/1910.01319v1
https://arxiv.org/pdf/1910.01319v1.pdf
[ "Few-Shot Image Classification", "Image Classification" ]
[]
[]
nOzi_y9MBU
https://paperswithcode.com/paper/data-science-with-vadalog-bridging-machine
Data Science with Vadalog: Bridging Machine Learning and Reasoning
Following the recent successful examples of large technology companies, many modern enterprises seek to build knowledge graphs to provide a unified view of corporate knowledge and to draw deep insights using machine learning and logical reasoning. There is currently a perceived disconnect between the traditional approaches for data science, typically based on machine learning and statistical modelling, and systems for reasoning with domain knowledge. In this paper we present a state-of-the-art Knowledge Graph Management System, Vadalog, which delivers highly expressive and efficient logical reasoning and provides seamless integration with modern data science toolkits, such as the Jupyter platform. We demonstrate how to use Vadalog to perform traditional data wrangling tasks, as well as complex logical and probabilistic reasoning. We argue that this is a significant step forward towards combining machine learning and reasoning in data science.
1807.08712
http://arxiv.org/abs/1807.08712v1
http://arxiv.org/pdf/1807.08712v1.pdf
[ "Knowledge Graphs" ]
[]
[]
gjXb3j9kph
https://paperswithcode.com/paper/ntnu-1scienceie-at-semeval-2017-task-10
NTNU-1@ScienceIE at SemEval-2017 Task 10: Identifying and Labelling Keyphrases with Conditional Random Fields
We present NTNU's systems for Task A (prediction of keyphrases) and Task B (labelling as Material, Process or Task) at SemEval 2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications (Augenstein et al., 2017). Our approach relies on supervised machine learning using Conditional Random Fields. Our system yields a micro F-score of 0.34 for Tasks A and B combined on the test data. For Task C (relation extraction), we relied on an independently developed system described in (Barik and Marsi, 2017). For the full Scenario 1 (including relations), our approach reaches a micro F-score of 0.33 (5th place). Here we describe our systems, report results and discuss errors.
null
https://www.aclweb.org/anthology/S17-2162/
https://www.aclweb.org/anthology/S17-2162
[ "Dependency Parsing", "Named Entity Recognition", "Relation Extraction" ]
[]
[]
cmrPass8ap
https://paperswithcode.com/paper/random-bits-regression-a-strong-general
Random Bits Regression: a Strong General Predictor for Big Data
To improve the accuracy and speed of regressions and classifications, we present a data-based prediction method, Random Bits Regression (RBR). This method first generates a large number of random binary intermediate/derived features based on the original input matrix, and then performs regularized linear/logistic regression on those intermediate/derived features to predict the outcome. Benchmark analyses on a simulated dataset, UCI machine learning repository datasets and a GWAS dataset showed that RBR outperforms other popular methods in accuracy and robustness. RBR (available at https://sourceforge.net/projects/rbr/) is very fast and requires reasonable memory; it therefore provides a strong, robust and fast predictor in the big data era.
1501.02990
http://arxiv.org/abs/1501.02990v1
http://arxiv.org/pdf/1501.02990v1.pdf
[]
[]
[]
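As a rough illustration of the Random Bits Regression idea in the record above, the sketch below generates random binary intermediate features by thresholding random linear projections and fits a regularized linear model on them; the projection/threshold scheme, the number of bits, and the use of ridge regression are illustrative assumptions rather than the authors' exact procedure.

```python
# Minimal sketch of a "random bits" feature expansion followed by regularized regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d, n_bits = 500, 10, 2000

X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# Each random bit is an indicator of a random linear projection exceeding a random threshold.
W = rng.normal(size=(d, n_bits))
b = rng.normal(size=n_bits)

def random_bits(X):
    return (X @ W + b > 0).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(random_bits(X_tr), y_tr)
print("test R^2:", model.score(random_bits(X_te), y_te))
```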
0emcR8yOBb
https://paperswithcode.com/paper/characterizing-diseases-and-disorders-in-gay
Characterizing Diseases and disorders in Gay Users' tweets
A lack of information exists about the health issues of lesbian, gay, bisexual, transgender, and queer (LGBTQ) people who are often excluded from national demographic assessments, health studies, and clinical trials. As a result, medical experts and researchers lack a holistic understanding of the health disparities facing these populations. Fortunately, publicly available social media data such as Twitter data can be utilized to support the decisions of public health policy makers and managers with respect to LGBTQ people. This research employs a computational approach to collect tweets from gay users on health-related topics and model these topics. To determine the nature of health-related information shared by men who have sex with men on Twitter, we collected thousands of tweets from 177 active users. We sampled these tweets using a framework that can be applied to other LGBTQ sub-populations in future research. We found 11 diseases in 7 categories based on ICD 10 that are in line with the published studies and official reports.
1803.09134
http://arxiv.org/abs/1803.09134v1
http://arxiv.org/pdf/1803.09134v1.pdf
[]
[]
[]
PzpGQoSlDd
https://paperswithcode.com/paper/adversarial-feature-learning-in-brain
Adversarial Feature Learning in Brain Interfacing: An Experimental Study on Eliminating Drowsiness Effects
Across- and within-recording variabilities in electroencephalographic (EEG) activity is a major limitation in EEG-based brain-computer interfaces (BCIs). Specifically, gradual changes in fatigue and vigilance levels during long EEG recording durations and BCI system usage bring along significant fluctuations in BCI performances even when these systems are calibrated daily. We address this in an experimental offline study from EEG-based BCI speller usage data acquired for one hour duration. As the main part of our methodological approach, we propose the concept of adversarial invariant feature learning for BCIs as a regularization approach on recently expanding EEG deep learning architectures, to learn nuisance-invariant discriminative features. We empirically demonstrate the feasibility of adversarial feature learning on eliminating drowsiness effects from event related EEG activity features, by using temporal recording block ordering as the source of drowsiness variability.
1907.09540
https://arxiv.org/abs/1907.09540v1
https://arxiv.org/pdf/1907.09540v1.pdf
[ "EEG" ]
[]
[]
nFvfzg0pHw
https://paperswithcode.com/paper/dynamic-adaptation-on-non-stationary-visual
Dynamic Adaptation on Non-Stationary Visual Domains
Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static data sets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied on future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.
1808.00736
http://arxiv.org/abs/1808.00736v1
http://arxiv.org/pdf/1808.00736v1.pdf
[ "Domain Adaptation", "Semantic Segmentation" ]
[]
[]
jRUwv7mU86
https://paperswithcode.com/paper/valuation-based-systems-for-discrete
Valuation-Based Systems for Discrete Optimization
This paper describes valuation-based systems for representing and solving discrete optimization problems. In valuation-based systems, we represent information in an optimization problem using variables, sample spaces of variables, a set of values, and functions that map sample spaces of sets of variables to the set of values. The functions, called valuations, represent the factors of an objective function. Solving the optimization problem involves using two operations called combination and marginalization. Combination tells us how to combine the factors of the joint objective function. Marginalization is either maximization or minimization. Solving an optimization problem can be simply described as finding the marginal of the joint objective function for the empty set. We state some simple axioms that combination and marginalization need to satisfy to enable us to solve an optimization problem using local computation. For optimization problems, the solution method of valuation-based systems reduces to non-serial dynamic programming. Thus our solution method for VBS can be regarded as an abstract description of dynamic programming. And our axioms can be viewed as conditions that permit the use of dynamic programming.
1304.1121
http://arxiv.org/abs/1304.1121v1
http://arxiv.org/pdf/1304.1121v1.pdf
[]
[]
[]
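To make the combination/marginalization terminology in the valuation-based systems record above concrete, the toy sketch below combines additive cost factors by broadcast addition and marginalizes by minimization, one variable at a time, which is the non-serial dynamic programming view the abstract mentions; the specific factors are invented for illustration.

```python
# Toy example: minimize f(x, y) = f1(x) + f2(x, y) over binary x, y.
import numpy as np

f1 = np.array([1.0, 3.0])                  # f1[x]
f2 = np.array([[0.0, 2.0], [5.0, 1.0]])    # f2[x, y]

# Combination: broadcast-add the factors into a joint cost table.
joint = f1[:, None] + f2

# Marginalization: eliminate y (min over axis 1), then x.
after_y = joint.min(axis=1)
optimum = after_y.min()

best_x = int(after_y.argmin())
best_y = int(joint[best_x].argmin())
print("minimum value:", optimum, "at x =", best_x, ", y =", best_y)
```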
6oWTfpb7ue
https://paperswithcode.com/paper/pruning-convolutional-neural-networks-for-1
Pruning Convolutional Neural Networks for Image Instance Retrieval
In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN). The objective is to heavily prune convolutional edges while maintaining retrieval performance. To this end, we introduce both data-independent and data-dependent heuristics to prune convolutional edges, and evaluate their performance across various compression rates with different deep descriptors over several benchmark datasets. Further, we present an end-to-end framework to fine-tune the pruned network, with a triplet loss function specially designed for the retrieval task. We show that the combination of heuristic pruning and fine-tuning offers 5x compression rate without considerable loss in retrieval performance.
1707.05455
http://arxiv.org/abs/1707.05455v1
http://arxiv.org/pdf/1707.05455v1.pdf
[ "Image Instance Retrieval" ]
[]
[]
bKLBCIS63f
https://paperswithcode.com/paper/fast-fourier-forecasting-resource-utilisation
Fast-Fourier-Forecasting Resource Utilisation in Distributed Systems
Distributed computing systems often consist of hundreds of nodes, executing tasks with different resource requirements. Efficient resource provisioning and task scheduling in such systems are non-trivial and require close monitoring and accurate forecasting of the state of the system, specifically resource utilisation at its constituent machines. Two challenges present themselves towards these objectives. First, collecting monitoring data entails substantial communication overhead. This overhead can be prohibitively high, especially in networks where bandwidth is limited. Second, forecasting models to predict resource utilisation should be accurate and need to exhibit high inference speed. Mission critical scheduling and resource allocation algorithms use these predictions and rely on their immediate availability. To address the first challenge, we present a communication-efficient data collection mechanism. Resource utilisation data is collected at the individual machines in the system and transmitted to a central controller in batches. Each batch is processed by an adaptive data-reduction algorithm based on Fourier transforms and truncation in the frequency domain. We show that the proposed mechanism leads to a significant reduction in communication overhead while incurring only minimal error and adhering to accuracy guarantees. To address the second challenge, we propose a deep learning architecture using complex Gated Recurrent Units to forecast resource utilisation. This architecture is directly integrated with the above data collection mechanism to improve inference speed of our forecasting model. Using two real-world datasets, we demonstrate the effectiveness of our approach, both in terms of forecasting accuracy and inference speed. Our approach resolves challenges encountered in resource provisioning frameworks and can be applied to other forecasting problems.
2001.04281
https://arxiv.org/abs/2001.04281v3
https://arxiv.org/pdf/2001.04281v3.pdf
[ "Distributed Computing" ]
[]
[]
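A hedged sketch of the frequency-domain data reduction described in the Fast-Fourier-Forecasting record above: a batch of utilisation measurements is Fourier-transformed, high-frequency coefficients are truncated before transmission, and the receiver reconstructs an approximate signal. The fixed number of retained coefficients here is an assumption; the paper selects it adaptively to meet accuracy guarantees.

```python
# Compress a batch of utilisation measurements by truncating its spectrum.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
cpu_util = 0.5 + 0.2 * np.sin(2 * np.pi * 3 * t) + 0.05 * rng.normal(size=t.size)

def compress(batch, keep=16):
    spectrum = np.fft.rfft(batch)
    return spectrum[:keep], batch.size      # transmit only the low frequencies

def decompress(truncated, n):
    spectrum = np.zeros(n // 2 + 1, dtype=complex)
    spectrum[:truncated.size] = truncated
    return np.fft.irfft(spectrum, n=n)

coeffs, n = compress(cpu_util)
recon = decompress(coeffs, n)
print("compression ratio:", n / (2 * coeffs.size))   # real + imaginary parts sent
print("max abs error:", np.abs(cpu_util - recon).max())
```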
E8tbyC5L1r
https://paperswithcode.com/paper/learning-sparse-optimal-rule-fit-by-safe
Learning sparse optimal rule fit by safe screening
In this paper, we consider linear prediction models in the form of a sparse linear combination of rules, where a rule is an indicator function defined over a hyperrectangle in the input space. Since the number of all possible rules generated from the training dataset becomes extremely large, it has been difficult to consider all of them when fitting a sparse model. In this paper, we propose Safe Optimal Rule Fit (SORF) as an approach to resolve this problem, which is formulated as a convex optimization problem with sparse regularization. The proposed SORF method utilizes the fact that the set of all possible rules can be represented as a tree. By extending a recently popularized convex optimization technique called safe screening, we develop a novel method for pruning the tree such that pruned nodes are guaranteed to be irrelevant to the prediction model. This approach allows us to efficiently learn a prediction model constructed from an exponentially large number of all possible rules. We demonstrate the usefulness of the proposed method by numerical experiments using several benchmark datasets.
1810.01683
http://arxiv.org/abs/1810.01683v1
http://arxiv.org/pdf/1810.01683v1.pdf
[]
[]
[]
lC6H_m5dUL
https://paperswithcode.com/paper/spotting-the-difference-context-retrieval-and
Spotting the Difference: Context Retrieval and Analysis for Improved Forgery Detection and Localization
As image tampering becomes ever more sophisticated and commonplace, the need for image forensics algorithms that can accurately and quickly detect forgeries grows. In this paper, we revisit the ideas of image querying and retrieval to provide clues to better localize forgeries. We propose a method to perform large-scale image forensics on the order of one million images using the help of an image search algorithm and database to gather contextual clues as to where tampering may have taken place. In this vein, we introduce five new strongly invariant image comparison methods and test their effectiveness under heavy noise, rotation, and color space changes. Lastly, we show the effectiveness of these methods compared to passive image forensics using Nimble [https://www.nist.gov/itl/iad/mig/nimble-challenge], a new, state-of-the-art dataset from the National Institute of Standards and Technology (NIST).
1705.00604
http://arxiv.org/abs/1705.00604v1
http://arxiv.org/pdf/1705.00604v1.pdf
[ "Image Forensics", "Image Retrieval" ]
[]
[]
cJKSLJc85L
https://paperswithcode.com/paper/computationally-efficient-target
Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classifications tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
1611.03130
http://arxiv.org/abs/1611.03130v1
http://arxiv.org/pdf/1611.03130v1.pdf
[ "Scene Labeling" ]
[]
[]
Uc4HCw9CDj
https://paperswithcode.com/paper/low-complexity-lstm-assisted-bit-flipping
Low-Complexity LSTM-Assisted Bit-Flipping Algorithm for Successive Cancellation List Polar Decoder
Polar codes have attracted much attention in the past decade due to their capacity-achieving performance. Higher decoding capacity is required for 5G and beyond-5G (B5G) systems. Although cyclic redundancy check (CRC)-assisted successive cancellation list bit-flipping (CA-SCLF) decoders have been developed to obtain better performance, the error-bit correction (bit-flipping) problem remains imperfectly solved and hard to design for. In this work, we leverage expert knowledge in communication systems and adopt deep learning (DL) techniques to obtain a better solution. A low-complexity long short-term memory network (LSTM)-assisted CA-SCLF decoder is proposed to further improve the performance of the conventional CA-SCLF decoder while avoiding complexity and memory overhead. Our test results show that we can improve the BLER performance by 0.11 dB compared to prior work and reduce the complexity and memory overhead of the network by over 30%.
1912.05158
https://arxiv.org/abs/1912.05158v1
https://arxiv.org/pdf/1912.05158v1.pdf
[ "Test results" ]
[ "Memory Network" ]
[]
IeqnxnmR83
https://paperswithcode.com/paper/unsupervised-domain-adaptation-for-automatic
Unsupervised Domain Adaptation for Automatic Estimation of Cardiothoracic Ratio
The cardiothoracic ratio (CTR), a clinical metric of heart size in chest X-rays (CXRs), is a key indicator of cardiomegaly. Manual measurement of CTR is time-consuming and can be affected by human subjectivity, making it desirable to design computer-aided systems that assist clinicians in the diagnosis process. Automatic CTR estimation through chest organ segmentation, however, requires large amounts of pixel-level annotated data, which is often unavailable. To alleviate this problem, we propose an unsupervised domain adaptation framework based on adversarial networks. The framework learns domain invariant feature representations from openly available data sources to produce accurate chest organ segmentation for unlabeled datasets. Specifically, we propose a model that enforces our intuition that prediction masks should be domain independent. Hence, we introduce a discriminator that distinguishes segmentation predictions from ground truth masks. We evaluate our system's prediction based on the assessment of radiologists and demonstrate the clinical practicability for the diagnosis of cardiomegaly. We finally illustrate on the JSRT dataset that the semi-supervised performance of our model is also very promising.
1807.03434
http://arxiv.org/abs/1807.03434v1
http://arxiv.org/pdf/1807.03434v1.pdf
[ "Domain Adaptation", "Unsupervised Domain Adaptation" ]
[]
[]
eYgLZ3zQXi
https://paperswithcode.com/paper/artificial-intelligence-and-machine-learning
Artificial Intelligence and Machine Learning to Predict and Improve Efficiency in Manufacturing Industry
The overall equipment effectiveness (OEE) is a widely used performance measurement metric. Its calculation allows managers to identify the main losses that reduce machine effectiveness and then take the necessary decisions to improve the situation. However, this calculation is done a posteriori, which is often too late. In the present research, we implemented different machine learning algorithms, namely Support Vector Machine, optimized Support Vector Machine (using a Genetic Algorithm), Random Forest, XGBoost and Deep Learning, to predict the OEE value. The data used to train our models was provided by an automotive cable production industry. The results show that Deep Learning and Random Forest are more accurate and perform better for the prediction of the overall equipment effectiveness in our case study.
1901.02256
http://arxiv.org/abs/1901.02256v2
http://arxiv.org/pdf/1901.02256v2.pdf
[]
[]
[]
Xh5AmCXJyX
https://paperswithcode.com/paper/monotonic-cardinality-estimation-of
Monotonic Cardinality Estimation of Similarity Selection: A Deep Learning Approach
Due to the outstanding capability of capturing underlying data distributions, deep learning techniques have been recently utilized for a series of traditional database problems. In this paper, we investigate the possibilities of utilizing deep learning for cardinality estimation of similarity selection. Answering this problem accurately and efficiently is essential to many data management applications, especially for query optimization. Moreover, in some applications the estimated cardinality is supposed to be consistent and interpretable. Hence a monotonic estimation w.r.t. the query threshold is preferred. We propose a novel and generic method that can be applied to any data type and distance function. Our method consists of a feature extraction model and a regression model. The feature extraction model transforms original data and threshold to a Hamming space, in which a deep learning-based regression model is utilized to exploit the incremental property of cardinality w.r.t. the threshold for both accuracy and monotonicity. We develop a training strategy tailored to our model as well as techniques for fast estimation. We also discuss how to handle updates. We demonstrate the accuracy and the efficiency of our method through experiments, and show how it improves the performance of a query optimizer.
2002.06442
https://arxiv.org/abs/2002.06442v3
https://arxiv.org/pdf/2002.06442v3.pdf
[]
[]
[]
a2F9F5U-U2
https://paperswithcode.com/paper/the-image-torque-operator-for-contour
The Image Torque Operator for Contour Processing
Contours are salient features for image description, but the detection and localization of boundary contours is still considered a challenging problem. This paper introduces a new tool for edge processing implementing the Gestalt idea of edge grouping. This tool is a mid-level image operator, called the Torque operator, that is designed to help detect closed contours in images. The torque operator takes as input the raw image and creates an image map by computing, from the image gradients within regions of multiple sizes, a measure of how well the edges are aligned to form closed convex contours. Fundamental properties of the torque are explored and illustrated through examples. It is then applied in pure bottom-up processing to a variety of applications, including edge detection, visual attention and segmentation, and is experimentally demonstrated to be a useful tool that can improve existing techniques. Finally, its extension as a more general grouping mechanism and its application in object recognition are discussed.
1601.04669
http://arxiv.org/abs/1601.04669v1
http://arxiv.org/pdf/1601.04669v1.pdf
[ "Edge Detection", "Object Recognition" ]
[]
[]
xrpM1fO24k
https://paperswithcode.com/paper/weak-supervision-enhanced-generative-network
Weak Supervision Enhanced Generative Network for Question Generation
Automatic question generation for an answer within a given passage is useful for many applications, such as question answering systems and dialogue systems. Current neural methods mostly take two steps: they extract several important sentences based on the candidate answer through manual rules or supervised neural networks, and then use an encoder-decoder framework to generate questions about these sentences. These approaches neglect the semantic relations between the answer and the context of the whole passage, which are sometimes necessary for answering the question. To address this problem, we propose the Weak Supervision Enhanced Generative Network (WeGen), which automatically discovers relevant features of the passage given the answer span in a weakly supervised manner to improve the quality of generated questions. More specifically, we devise a discriminator, the Relation Guider, to capture the relations between the whole passage and the associated answer, and the Multi-Interaction mechanism is then deployed to transfer this knowledge dynamically to our question generation system. Experiments show the effectiveness of our method in both automatic and human evaluations.
1907.00607
https://arxiv.org/abs/1907.00607v1
https://arxiv.org/pdf/1907.00607v1.pdf
[ "Question Answering", "Question Generation" ]
[]
[]
ckL3e9vJn6
https://paperswithcode.com/paper/video-question-generation-via-cross-modal
Video Question Generation via Cross-Modal Self-Attention Networks Learning
We introduce a novel task, Video Question Generation (Video QG). A Video QG model automatically generates questions given a video clip and its corresponding dialogues. Video QG requires a range of skills -- sentence comprehension, temporal relation, the interplay between vision and language, and the ability to ask meaningful questions. To address this, we propose a novel semantic-rich cross-modal self-attention (SRCMSA) network to aggregate the multi-modal and diverse features. To be more precise, we enhance the video frame semantics by integrating object-level information, and we jointly consider cross-modal attention for the video question generation task. Excitingly, our proposed model remarkably improves the baseline from 7.58 to 14.48 in BLEU-4 score on the TVQA dataset. Most of all, we arguably pave a novel path toward understanding challenging video input, and we provide a detailed analysis in terms of diversity, which opens avenues for future investigation.
1907.03049
https://arxiv.org/abs/1907.03049v3
https://arxiv.org/pdf/1907.03049v3.pdf
[ "Question Answering", "Question Generation", "Video Question Answering" ]
[]
[]
A72Hs7VugK
https://paperswithcode.com/paper/figure-captioning-with-reasoning-and-sequence
Figure Captioning with Reasoning and Sequence-Level Training
Figures, such as bar charts, pie charts, and line plots, are widely used to convey important information in a concise format. They are usually human-friendly but difficult for computers to process automatically. In this work, we investigate the problem of figure captioning where the goal is to automatically generate a natural language description of the figure. While natural image captioning has been studied extensively, figure captioning has received relatively little attention and remains a challenging problem. First, we introduce a new dataset for figure captioning, FigCAP, based on FigureQA. Second, we propose two novel attention mechanisms. To achieve accurate generation of labels in figures, we propose Label Maps Attention. To model the relations between figure labels, we propose Relation Maps Attention. Third, we use sequence-level training with reinforcement learning in order to directly optimize evaluation metrics, which alleviates the exposure bias issue and further improves the models in generating long captions. Extensive experiments show that the proposed method outperforms the baselines, thus demonstrating a significant potential for the automatic captioning of vast repositories of figures.
1906.02850
https://arxiv.org/abs/1906.02850v1
https://arxiv.org/pdf/1906.02850v1.pdf
[ "Image Captioning" ]
[ "LINE" ]
[]
ps_8zQV2Kw
https://paperswithcode.com/paper/exploiting-temporal-coherence-for-multi-modal
Exploiting Temporal Coherence for Multi-modal Video Categorization
Multimodal ML models can process data in multiple modalities (e.g., video, images, audio, text) and are useful for video content analysis in a variety of problems (e.g., object detection, scene understanding). In this paper, we focus on the problem of video categorization by using a multimodal approach. We have developed a novel temporal coherence-based regularization approach, which applies to different types of models (e.g., RNN, NetVLAD, Transformer). We demonstrate through experiments how our proposed multimodal video categorization models with temporal coherence outperform strong state-of-the-art baseline models.
2002.03844
https://arxiv.org/abs/2002.03844v2
https://arxiv.org/pdf/2002.03844v2.pdf
[ "Object Detection", "Scene Understanding" ]
[]
[]
ZczUCMniHi
https://paperswithcode.com/paper/sipa-a-simple-framework-for-efficient
SIPA: A Simple Framework for Efficient Networks
With the success of deep learning in various fields and the advent of numerous Internet of Things (IoT) devices, it is essential to make models light enough for low-power devices. In keeping with this trend, the MicroNet Challenge, a challenge to build efficient models from the view of both storage and computation, was hosted at NeurIPS 2019. To develop efficient models through this challenge, we propose a framework, coined SIPA, consisting of four stages: Searching, Improving, Pruning, and Accelerating. With the proposed framework, our team, OSI AI, compressed parameter storage by 334x and math operations by 357x compared to WideResNet-28-10 and took 4th place in the CIFAR-100 track at MicroNet Challenge 2019 with the top 10% highly efficient computation. Our source code is available from https://github.com/Lee-Gihun/MicroNet_OSI-AI.
2004.14476
https://arxiv.org/abs/2004.14476v1
https://arxiv.org/pdf/2004.14476v1.pdf
[]
[]
[]
wy0tLSPjLj
https://paperswithcode.com/paper/i-flow-high-dimensional-integration-and
i-flow: High-dimensional Integration and Sampling with Normalizing Flows
In many fields of science, high-dimensional integration is required. Numerical methods have been developed to evaluate these complex integrals. We introduce the code i-flow, a python package that performs high-dimensional numerical integration utilizing normalizing flows. Normalizing flows are machine-learned, bijective mappings between two distributions. i-flow can also be used to sample random points according to complicated distributions in high dimensions. We compare i-flow to other algorithms for high-dimensional numerical integration and show that i-flow outperforms them for high dimensional correlated integrals. The i-flow code is publicly available on gitlab at https://gitlab.com/i-flow/i-flow.
2001.05486
https://arxiv.org/abs/2001.05486v2
https://arxiv.org/pdf/2001.05486v2.pdf
[]
[ "Normalizing Flows" ]
[]
2BXddAo3hq
https://paperswithcode.com/paper/adaptive-pricing-in-insurance-generalized
Adaptive Pricing in Insurance: Generalized Linear Models and Gaussian Process Regression Approaches
We study the application of dynamic pricing to insurance. We view this as an online revenue management problem where the insurance company looks to set prices to optimize the long-run revenue from selling a new insurance product. We develop two pricing models: an adaptive Generalized Linear Model (GLM) and an adaptive Gaussian Process (GP) regression model. Both balance between exploration, where we choose prices in order to learn the distribution of demands & claims for the insurance product, and exploitation, where we myopically choose the best price from the information gathered so far. The performance of the pricing policies is measured in terms of regret: the expected revenue loss caused by not using the optimal price. As is commonplace in insurance, we model demand and claims by GLMs. In our adaptive GLM design, we use the maximum quasi-likelihood estimation (MQLE) to estimate the unknown parameters. We show that, if prices are chosen with suitably decreasing variability, the MQLE parameters eventually exist and converge to the correct values, which in turn implies that the sequence of chosen prices will also converge to the optimal price. In the adaptive GP regression model, we sample demand and claims from Gaussian Processes and then choose selling prices by the upper confidence bound rule. We also analyze these GLM and GP pricing algorithms with delayed claims. Although similar results exist in other domains, this is among the first works to consider dynamic pricing problems in the field of insurance. We also believe this is the first work to consider Gaussian Process regression in the context of insurance pricing. These initial findings suggest that online machine learning algorithms could be a fruitful area of future investigation and application in insurance.
1907.05381
https://arxiv.org/abs/1907.05381v1
https://arxiv.org/pdf/1907.05381v1.pdf
[ "Gaussian Processes" ]
[ "Gaussian Process" ]
[]
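The following is a minimal sketch, not the paper's implementation, of the adaptive Gaussian Process pricing loop described in the record above: fit a GP to observed (price, revenue) pairs and pick the next price with an upper-confidence-bound rule. The demand model, the kernel choice, and the exploration weight beta are illustrative assumptions.

```python
# GP regression over prices with UCB-based price selection (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_expected_revenue(price):
    demand = np.exp(-0.5 * price)          # hypothetical demand curve
    return price * demand

candidate_prices = np.linspace(0.5, 5.0, 50).reshape(-1, 1)
X, y = [], []

# Warm start with a few random prices.
for p in rng.choice(candidate_prices.ravel(), size=3, replace=False):
    X.append([p]); y.append(true_expected_revenue(p) + 0.05 * rng.normal())

for t in range(20):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(np.array(X), np.array(y))
    mean, std = gp.predict(candidate_prices, return_std=True)
    beta = 2.0                              # exploration weight (assumed)
    p_next = float(candidate_prices[np.argmax(mean + beta * std), 0])
    X.append([p_next]); y.append(true_expected_revenue(p_next) + 0.05 * rng.normal())

print("price with best observed revenue:", X[int(np.argmax(y))][0])
```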
M9gSmGy3iD
https://paperswithcode.com/paper/modeling-severe-traffic-accidents-with
Modeling Severe Traffic Accidents With Spatial And Temporal Features
We present an approach to estimate the severity of traffic-related accidents in aggregated (area-level) and disaggregated (point-level) data. Exploring spatial features, we measure the complexity of road networks using several area-level variables. Also using temporal and other situational features from open data for New York City, we use Gradient Boosting models for inference and for measuring feature importance, along with Gaussian Processes to model spatial dependencies in the data. The results show a significant importance of complexity in the aggregated model, as well as of other features, in prediction, which may be helpful in framing policies and targeting interventions for preventing severe traffic-related accidents and injuries.
1906.10317
https://arxiv.org/abs/1906.10317v1
https://arxiv.org/pdf/1906.10317v1.pdf
[ "Feature Importance", "Gaussian Processes" ]
[]
[]
b11RDigFxq
https://paperswithcode.com/paper/building-a-learner-corpus
Building a learner corpus
The paper describes a corpus of texts produced by non-native speakers of Czech. We discuss its annotation scheme, consisting of three interlinked levels to cope with a wide range of error types present in the input. Each level corrects different types of errors; links between the levels allow capturing errors in word order and complex discontinuous expressions. Errors are not only corrected, but also classified. The annotation scheme is tested on a doubly-annotated sample of approx. 10,000 words with fair inter-annotator agreement results. We also explore options of application of automated linguistic annotation tools (taggers, spell checkers and grammar checkers) on the learner text to support or even substitute manual annotation.
null
https://www.aclweb.org/anthology/L12-1591/
http://www.lrec-conf.org/proceedings/lrec2012/pdf/992_Paper.pdf
[ "Language Acquisition" ]
[]
[]
UDRtO-SsTz
https://paperswithcode.com/paper/learning-to-capture-light-fields-through-a
Learning to Capture Light Fields through a Coded Aperture Camera
We propose a learning-based framework for acquiring a light field through a coded aperture camera. Acquiring a light field is a challenging task due to the amount of data. To make the acquisition process efficient, coded aperture cameras were successfully adopted; using these cameras, a light field is computationally reconstructed from several images that are acquired with different aperture patterns. However, it is still difficult to reconstruct a high-quality light field from only a few acquired images. To tackle this limitation, we formulated the entire pipeline of light field acquisition from the perspective of an auto-encoder. This auto-encoder was implemented as a stack of fully convolutional layers and was trained end-to-end by using a collection of training samples. We experimentally show that our method can successfully learn good image-acquisition and reconstruction strategies. With our method, light fields consisting of 5 x 5 or 8 x 8 images can be successfully reconstructed only from a few acquired images. Moreover, our method achieved superior performance over several state-of-the-art methods. We also applied our method to a real prototype camera to show that it is capable of capturing a real 3-D scene.
null
http://openaccess.thecvf.com/content_ECCV_2018/html/Yasutaka_Inagaki_Learning_to_Capture_ECCV_2018_paper.html
http://openaccess.thecvf.com/content_ECCV_2018/papers/Yasutaka_Inagaki_Learning_to_Capture_ECCV_2018_paper.pdf
[]
[]
[]
8rcWON3uTp
https://paperswithcode.com/paper/finite-ltl-synthesis-with-environment
Finite LTL Synthesis with Environment Assumptions and Quality Measures
In this paper, we investigate the problem of synthesizing strategies for linear temporal logic (LTL) specifications that are interpreted over finite traces -- a problem that is central to the automated construction of controllers, robot programs, and business processes. We study a natural variant of the finite LTL synthesis problem in which strategy guarantees are predicated on specified environment behavior. We further explore a quantitative extension of LTL that supports specification of quality measures, utilizing it to synthesize high-quality strategies. We propose new notions of optimality and associated algorithms that yield strategies that best satisfy specified quality measures. Our algorithms utilize an automata-game approach, positioning them well for future implementation via existing state-of-the-art techniques.
1808.10831
http://arxiv.org/abs/1808.10831v1
http://arxiv.org/pdf/1808.10831v1.pdf
[ "Temporal Logic" ]
[]
[]
yN5jm9s30Q
https://paperswithcode.com/paper/antiplag-plagiarism-detection-on-electronic
AntiPlag: Plagiarism Detection on Electronic Submissions of Text Based Assignments
Plagiarism is one of the growing issues in academia and is always a concern in Universities and other academic institutions. The situation is becoming even worse with the availability of ample resources on the web. This paper focuses on creating an effective and fast tool for plagiarism detection for text based electronic assignments. Our plagiarism detection tool named AntiPlag is developed using the tri-gram sequence matching technique. Three sets of text based assignments were tested by AntiPlag and the results were compared against an existing commercial plagiarism detection tool. AntiPlag showed better results in terms of false positives compared to the commercial tool due to the pre-processing steps performed in AntiPlag. In addition, to improve the detection latency, AntiPlag applies a data clustering technique making it four times faster than the commercial tool considered. AntiPlag could be used to isolate plagiarized text based assignments from non-plagiarised assignments easily. Therefore, we present AntiPlag, a fast and effective tool for plagiarism detection on text based electronic assignments.
1403.1310
http://arxiv.org/abs/1403.1310v1
http://arxiv.org/pdf/1403.1310v1.pdf
[]
[]
[]
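The AntiPlag record above names tri-gram sequence matching as its core technique; the sketch below shows one plausible form of that idea, comparing word-level tri-gram sets with Jaccard similarity. Word-level tri-grams and the Jaccard measure are assumptions here, and the real tool adds pre-processing and clustering steps on top.

```python
# Compare two submissions by the overlap of their word tri-gram sets.
def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity(doc_a, doc_b):
    a, b = trigrams(doc_a), trigrams(doc_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)   # Jaccard overlap of tri-gram sets

submission_1 = "the quick brown fox jumps over the lazy dog near the river"
submission_2 = "a quick brown fox jumps over the lazy dog by the river bank"
print("tri-gram similarity:", round(similarity(submission_1, submission_2), 3))
```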
-eBzi6rlxH
https://paperswithcode.com/paper/simplenlg-de-adapting-simplenlg-4-to-german
SimpleNLG-DE: Adapting SimpleNLG 4 to German
SimpleNLG is a popular open source surface realiser for the English language. For German, however, the availability of open source and non-domain-specific realisers is sparse, partly due to the complexity of the German language. In this paper, we present SimpleNLG-DE, an adaptation of SimpleNLG to German. We discuss which parts of the German language have been implemented and how we evaluated our implementation using the TIGER Corpus and newly created datasets.
null
https://www.aclweb.org/anthology/W19-8651/
https://www.aclweb.org/anthology/W19-8651
[]
[]
[]
ya8Q4De5Lr
https://paperswithcode.com/paper/cognitive-discriminative-mappings-for-rapid
Cognitive Discriminative Mappings for Rapid Learning
Humans can learn concepts or recognize items from just a handful of examples, while machines require many more samples to perform the same task. In this paper, we build a computational model to investigate the possibility of this kind of rapid learning. The proposed method aims to improve the learning task of input from sensory memory by leveraging the information retrieved from long-term memory. We present a simple and intuitive technique called cognitive discriminative mappings (CDM) to explore the cognitive problem. First, CDM separates and clusters the data instances retrieved from long-term memory into distinct classes with a discrimination method in working memory when a sensory input triggers the algorithm. CDM then maps each sensory data instance to be as close as possible to the median point of the data group with the same class. The experimental results demonstrate that the CDM approach is effective for learning the discriminative features of supervised classifications with few training sensory input instances.
1611.02512
http://arxiv.org/abs/1611.02512v1
http://arxiv.org/pdf/1611.02512v1.pdf
[]
[]
[]
tG-4zSAcAv
https://paperswithcode.com/paper/the-capacity-constrained-facility-location
The Capacity Constrained Facility Location problem
We initiate the study of the capacity constrained facility location problem from a mechanism design perspective. The capacity constrained setting leads to a new strategic environment where a facility serves a subset of the population, which is endogenously determined by the ex-post Nash equilibrium of an induced subgame and is not directly controlled by the mechanism designer. Our focus is on mechanisms that are ex-post dominant-strategy incentive compatible (DIC) at the reporting stage. We provide a complete characterization of DIC mechanisms via the family of Generalized Median Mechanisms (GMMs). In general, the social welfare optimal mechanism is not DIC. Adopting the worst-case approximation measure, we attain tight lower bounds on the approximation ratio of any DIC mechanism. The well-known median mechanism is shown to be optimal among the family of DIC mechanisms for certain capacity ranges. Surprisingly, the framework we introduce provides a new characterization for the family of GMMs, and is responsive to gaps in the current social choice literature highlighted by Border and Jordan (1983) and Barbarà, Massó and Serizawa (1998).
1806.00960
http://arxiv.org/abs/1806.00960v2
http://arxiv.org/pdf/1806.00960v2.pdf
[]
[]
[]
GBx1gzz3mc
https://paperswithcode.com/paper/density-matching-for-bilingual-word-embedding
Density Matching for Bilingual Word Embedding
Recent approaches to cross-lingual word embedding have generally been based on linear transformations between the sets of embedding vectors in the two languages. In this paper, we propose an approach that instead expresses the two monolingual embedding spaces as probability densities defined by a Gaussian mixture model, and matches the two densities using a method called normalizing flow. The method requires no explicit supervision, and can be learned with only a seed dictionary of words that have identical strings. We argue that this formulation has several intuitively attractive properties, particularly with respect to improving robustness and generalization to mappings between difficult language pairs or word pairs. On a benchmark data set of bilingual lexicon induction and cross-lingual word similarity, our approach can achieve competitive or superior performance compared to state-of-the-art published results, with particularly strong results being found on etymologically distant and/or morphologically rich languages.
1904.02343
http://arxiv.org/abs/1904.02343v3
http://arxiv.org/pdf/1904.02343v3.pdf
[ "Word Embeddings" ]
[]
[]
IgYMhKN6sj
https://paperswithcode.com/paper/time-for-a-change-a-tutorial-for-comparing
Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis
The machine learning community adopted the use of null hypothesis significance testing (NHST) in order to ensure the statistical validity of results. Many scientific fields however realized the shortcomings of frequentist reasoning and in the most radical cases even banned its use in publications. We should do the same: just as we have embraced the Bayesian paradigm in the development of new machine learning methods, so we should also use it in the analysis of our own results. We argue for abandonment of NHST by exposing its fallacies and, more importantly, offer better - more sound and useful - alternatives for it.
1606.04316
http://arxiv.org/abs/1606.04316v3
http://arxiv.org/pdf/1606.04316v3.pdf
[]
[]
[]
je44vQbZ8O
https://paperswithcode.com/paper/bringing-order-to-neural-word-embeddings-with
Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)
Word order is clearly a vital part of human language, but it has been used comparatively lightly in distributional vector models. This paper presents a new method for incorporating word order information into word vector embedding models by combining the benefits of permutation-based order encoding with the more recent method of skip-gram with negative sampling. The new method introduced here is called Embeddings Augmented by Random Permutations (EARP). It operates by applying permutations to the coordinates of context vector representations during the process of training. Results show an 8% improvement in accuracy on the challenging Bigger Analogy Test Set, and smaller but consistent improvements on other analogy reference sets. These findings demonstrate the importance of order-based information in analogical retrieval tasks, and the utility of random permutations as a means to augment neural embeddings.
null
https://www.aclweb.org/anthology/K18-1045/
https://www.aclweb.org/anthology/K18-1045
[ "Word Embeddings" ]
[]
[]
YN7d0pUW3f
https://paperswithcode.com/paper/demystifying-resnet
Demystifying ResNet
The Residual Network (ResNet), proposed in He et al. (2015), utilized shortcut connections to significantly reduce the difficulty of training, which resulted in great performance boosts in terms of both training and generalization error. It was empirically observed in He et al. (2015) that stacking more layers of residual blocks with shortcut 2 results in smaller training error, while it is not true for shortcut of length 1 or 3. We provide a theoretical explanation for the uniqueness of shortcut 2. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The shortcut 1, however, is essentially equivalent to no shortcuts, which has a condition number exploding to infinity as the number of layers grows. We further argue that as the number of layers tends to infinity, it suffices to only look at the loss function at the zero initial point. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with shortcut 2 achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.
1611.01186
http://arxiv.org/abs/1611.01186v2
http://arxiv.org/pdf/1611.01186v2.pdf
[]
[]
[]
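For readers unfamiliar with the "shortcut 2" structure discussed in the Demystifying ResNet record above, the sketch below shows a residual block whose identity skip connection spans exactly two weight layers. The use of fully connected layers and the chosen width are assumptions made only to keep the example small; the paper's analysis concerns the general form.

```python
# A residual block with a depth-2 shortcut: the identity is added after two weight layers.
import torch
import torch.nn as nn

class ResidualBlockDepth2(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.fc2(self.act(self.fc1(x)))

block = ResidualBlockDepth2(width=8)
out = block(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 8])
```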
YVyWfLFUbM
https://paperswithcode.com/paper/image-restoration-a-general-wavelet-frame
Image Restoration: A General Wavelet Frame Based Model and Its Asymptotic Analysis
Image restoration is one of the most important areas in imaging science. Mathematical tools have been widely used in image restoration, where the wavelet frame based approach is one of the successful examples. In this paper, we introduce a generic wavelet frame based image restoration model, called the "general model", which includes most of the existing wavelet frame based models as special cases. Moreover, the general model also includes examples that are new to the literature. Motivated by our earlier studies [1-3], we provide an asymptotic analysis of the general model as image resolution goes to infinity, which establishes a connection between the general model in the discrete setting and a new variational model in the continuum setting. The variational model also includes some of the existing variational models as special cases, such as the total generalized variation model proposed by [4]. In the end, we introduce an algorithm for solving the general model and present one numerical simulation as an example.
1602.05332
http://arxiv.org/abs/1602.05332v1
http://arxiv.org/pdf/1602.05332v1.pdf
[ "Image Restoration" ]
[]
[]
8H93Lrs8u2
https://paperswithcode.com/paper/feature-weighting-and-boosting-for-few-shot
Feature Weighting and Boosting for Few-Shot Segmentation
This paper is about few-shot segmentation of foreground objects in images. We train a CNN on small subsets of training images, each mimicking the few-shot setting. In each subset, one image serves as the query and the other(s) as support image(s) with ground-truth segmentation. The CNN first extracts feature maps from the query and support images. Then, a class feature vector is computed as an average of the support's feature maps over the known foreground. Finally, the target object is segmented in the query image by using a cosine similarity between the class feature vector and the query's feature map. We make two contributions by: (1) Improving discriminativeness of features so their activations are high on the foreground and low elsewhere; and (2) Boosting inference with an ensemble of experts guided with the gradient of loss incurred when segmenting the support images in testing. Our evaluations on the PASCAL-$5^i$ and COCO-$20^i$ datasets demonstrate that we significantly outperform existing approaches.
1909.13140
https://arxiv.org/abs/1909.13140v1
https://arxiv.org/pdf/1909.13140v1.pdf
[]
[]
[]
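The few-shot segmentation record above describes computing a class feature vector as a masked average of the support features and segmenting the query by cosine similarity; the sketch below reproduces just that step with random tensors. The shapes and the 0.5 threshold are assumptions, not the authors' settings.

```python
# Masked-average class vector and cosine-similarity segmentation of a query feature map.
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 16, 8, 8
support_feat = rng.normal(size=(C, H, W))
support_mask = (rng.random((H, W)) > 0.5).astype(float)   # ground-truth foreground
query_feat = rng.normal(size=(C, H, W))

# Class feature vector: average of support features over the known foreground.
class_vec = (support_feat * support_mask).sum(axis=(1, 2)) / support_mask.sum()

# Cosine similarity between the class vector and every query location.
q = query_feat.reshape(C, -1)
sim = (class_vec @ q) / (np.linalg.norm(class_vec) * np.linalg.norm(q, axis=0) + 1e-8)
pred_mask = (sim.reshape(H, W) > 0.5).astype(int)          # threshold is an assumption
print(pred_mask.shape, pred_mask.sum(), "foreground pixels")
```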
kFCpaHK3sT
https://paperswithcode.com/paper/the-conditional-entropy-bottleneck-1
The Conditional Entropy Bottleneck
Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.
2002.05379
https://arxiv.org/abs/2002.05379v1
https://arxiv.org/pdf/2002.05379v1.pdf
[]
[]
[]
8l76wjDm2i
https://paperswithcode.com/paper/region-and-location-based-indexing-and
Region and Location Based Indexing and Retrieval of MR-T2 Brain Tumor Images
In this paper, region based and location based retrieval systems have been implemented for retrieval of MR-T2 axial 2-D brain images. This is done by extracting and characterizing the tumor portion of 2-D brain slices by use of a suitable threshold computed over the entire image. Indexing and retrieval are then performed by computing texture features based on the gray-tone spatial-dependence matrix of segmented regions. A hash structure is used to index all images. A combined index is adopted to point to all similar images in terms of the texture features. At query time, only those images that are in the same hash bucket as the queried image are compared for similarity, thus reducing the search space and time.
1312.2061
http://arxiv.org/abs/1312.2061v1
http://arxiv.org/pdf/1312.2061v1.pdf
[]
[]
[]
_FO6iMlhtR
https://paperswithcode.com/paper/asynchronous-convolutional-networks-for
Asynchronous Convolutional Networks for Object Detection in Neuromorphic Cameras
Event-based cameras, also known as neuromorphic cameras, are bioinspired sensors able to perceive changes in the scene at high frequency with low power consumption. Since these sensors have become available only very recently, a limited amount of work addresses object detection on these devices. In this paper we propose two neural network architectures for object detection: YOLE, which integrates the events into surfaces and uses a frame-based model to process them, and fcYOLE, an asynchronous event-based fully convolutional network which uses a novel and general formalization of the convolutional and max pooling layers to exploit the sparsity of camera events. We evaluate the algorithms with different extensions of publicly available datasets and on a novel synthetic dataset.
1805.07931
https://arxiv.org/abs/1805.07931v3
https://arxiv.org/pdf/1805.07931v3.pdf
[ "Object Detection" ]
[ "Max Pooling" ]
[]
_JNZqflWgX
https://paperswithcode.com/paper/behavenet-nonlinear-embedding-and-bayesian
BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
A fundamental goal of systems neuroscience is to understand the relationship between neural activity and behavior. Behavior has traditionally been characterized by low-dimensional, task-related variables such as movement speed or response times. More recently, there has been a growing interest in automated analysis of high-dimensional video data collected during experiments. Here we introduce a probabilistic framework for the analysis of behavioral video and neural activity. This framework provides tools for compression, segmentation, generation, and decoding of behavioral videos. Compression is performed using a convolutional autoencoder (CAE), which yields a low-dimensional continuous representation of behavior. We then use an autoregressive hidden Markov model (ARHMM) to segment the CAE representation into discrete "behavioral syllables." The resulting generative model can be used to simulate behavioral video data. Finally, based on this generative model, we develop a novel Bayesian decoding approach that takes in neural activity and outputs probabilistic estimates of the full-resolution behavioral video. We demonstrate this framework on two different experimental paradigms using distinct behavioral and neural recording technologies.
null
http://papers.nips.cc/paper/9701-behavenet-nonlinear-embedding-and-bayesian-neural-decoding-of-behavioral-videos
http://papers.nips.cc/paper/9701-behavenet-nonlinear-embedding-and-bayesian-neural-decoding-of-behavioral-videos.pdf
[]
[ "AutoEncoder" ]
[]
4r684YfCem
https://paperswithcode.com/paper/replica-exchange-for-non-convex-optimization
Replica Exchange for Non-Convex Optimization
Gradient descent (GD) is known to converge quickly for convex objective functions, but it can be trapped at local minima. On the other hand, Langevin dynamics (LD) can explore the state space and find global minima, but in order to give accurate estimates, LD needs to run with small discretization stepsize and weak stochastic force, which in general slow down its convergence. This paper shows that these two algorithms can "collaborate" through a simple exchange mechanism, in which they swap their current positions if LD yields a lower objective function. This idea can be seen as the singular limit of the replica exchange technique from the sampling literature. We show that this new algorithm converges to the global minimum linearly with high probability, assuming the objective function is strongly convex in a neighborhood of the unique global minimum. By replacing gradients with stochastic gradients, and adding a proper threshold to the exchange mechanism, our algorithm can also be used in online settings. We further verify our theoretical results through some numerical experiments, and observe superior performance of the proposed algorithm over running GD or LD alone.
2001.08356
https://arxiv.org/abs/2001.08356v2
https://arxiv.org/pdf/2001.08356v2.pdf
[]
[]
[]
AJkvXTfnlz
https://paperswithcode.com/paper/finding-a-maximum-clique-using-ant-colony
Finding a Maximum Clique using Ant Colony Optimization and Particle Swarm Optimization in Social Networks
Interaction between users in online social networks plays a key role in social network analysis. One important type of social group is a fully connected relation between some users, known as a clique structure. Therefore, finding a maximum clique is essential for some analyses. In this paper, we propose a new method using the ant colony optimization algorithm and the particle swarm optimization algorithm. In the proposed method, the pheromone update process is improved by particle swarm optimization in order to attain better results. Simulation results on popular standard social network benchmarks, in comparison with the standard ant colony optimization algorithm, show a relative enhancement of the proposed algorithm.
1311.7213
http://arxiv.org/abs/1311.7213v1
http://arxiv.org/pdf/1311.7213v1.pdf
[]
[]
[]
0XCiHLTN8B
https://paperswithcode.com/paper/retinal-fluid-segmentation-and-detection-in
Retinal Fluid Segmentation and Detection in Optical Coherence Tomography Images using Fully Convolutional Neural Network
As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. Therefore it is commonly used in the diagnosis of retinal diseases associated with edema in and under the retinal layers. In this paper, a new framework is proposed for the task of fluid segmentation and detection in retinal OCT images. Based on the raw images and layers segmented by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. The leave-one-out cross validation experiments on the RETOUCH database show that our method performs well in both segmentation (mean Dice: 0.7317) and detection (mean AUC: 0.985) tasks.
1710.04778
http://arxiv.org/abs/1710.04778v1
http://arxiv.org/pdf/1710.04778v1.pdf
[]
[]
[]
veN-eEG88h
https://paperswithcode.com/paper/regularized-diffusion-adaptation-via
Regularized Diffusion Adaptation via Conjugate Smoothing
The purpose of this work is to develop and study a distributed strategy for Pareto optimization of an aggregate cost consisting of regularized risks. Each risk is modeled as the expectation of some loss function with unknown probability distribution while the regularizers are assumed deterministic, but are not required to be differentiable or even continuous. The individual, regularized, cost functions are distributed across a strongly-connected network of agents and the Pareto optimal solution is sought by appealing to a multi-agent diffusion strategy. To this end, the regularizers are smoothed by means of infimal convolution and it is shown that the Pareto solution of the approximate, smooth problem can be made arbitrarily close to the solution of the original, non-smooth problem. Performance bounds are established under conditions that are weaker than assumed before in the literature, and hence applicable to a broader class of adaptation and learning problems.
1909.09417
https://arxiv.org/abs/1909.09417v1
https://arxiv.org/pdf/1909.09417v1.pdf
[]
[ "Convolution" ]
[]
PqknNwdn-C
https://paperswithcode.com/paper/generalized-ambiguity-decomposition-for
Generalized Ambiguity Decomposition for Understanding Ensemble Diversity
Diversity or complementarity of experts in ensemble pattern recognition and information processing systems is widely-observed by researchers to be crucial for achieving performance improvement upon fusion. Understanding this link between ensemble diversity and fusion performance is thus an important research question. However, prior works have theoretically characterized ensemble diversity and have linked it with ensemble performance in very restricted settings. We present a generalized ambiguity decomposition (GAD) theorem as a broad framework for answering these questions. The GAD theorem applies to a generic convex ensemble of experts for any arbitrary twice-differentiable loss function. It shows that the ensemble performance approximately decomposes into a difference of the average expert performance and the diversity of the ensemble. It thus provides a theoretical explanation for the empirically-observed benefit of fusing outputs from diverse classifiers and regressors. It also provides a loss function-dependent, ensemble-dependent, and data-dependent definition of diversity. We present extensions of this decomposition to common regression and classification loss functions, and report a simulation-based analysis of the diversity term and the accuracy of the decomposition. We finally present experiments on standard pattern recognition data sets which indicate the accuracy of the decomposition for real-world classification and regression problems.
1312.7463
http://arxiv.org/abs/1312.7463v1
http://arxiv.org/pdf/1312.7463v1.pdf
[]
[]
[]
5ABen0H6XI
https://paperswithcode.com/paper/learning-and-evaluating-sparse-interpretable
Learning and Evaluating Sparse Interpretable Sentence Embeddings
Previous research on word embeddings has shown that sparse representations, which can be either learned on top of existing dense embeddings or obtained through model constraints during training time, have the benefit of increased interpretability properties: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data. In this paper, we transfer this idea to sentence embeddings and explore several approaches to obtain a sparse representation. We further introduce a novel, quantitative and automated evaluation metric for sentence embedding interpretability, based on topic coherence methods. We observe an increase in interpretability compared to dense models, on a dataset of movie dialogs and on the scene descriptions from the MS COCO dataset.
1809.08621
http://arxiv.org/abs/1809.08621v2
http://arxiv.org/pdf/1809.08621v2.pdf
[ "Sentence Embedding", "Sentence Embeddings", "Word Embeddings" ]
[]
[]
tKSnY-Nipi
https://paperswithcode.com/paper/context-aware-nonnegative-matrix
Context Aware Nonnegative Matrix Factorization Clustering
In this article we propose a method to refine the clustering results obtained with the nonnegative matrix factorization (NMF) technique, imposing consistency constraints on the final labeling of the data. The research community has focused its efforts on the initialization and on the optimization part of this method, without paying attention to the final cluster assignments. We propose a game theoretic framework in which each object to be clustered is represented as a player, which has to choose its cluster membership. The information obtained with NMF is used to initialize the strategy space of the players and a weighted graph is used to model the interactions among the players. These interactions allow the players to choose a cluster which is coherent with the clusters chosen by similar players, a property which is not guaranteed by NMF, since it produces a soft clustering of the data. The results on common benchmarks show that our model is able to improve the performance of many NMF formulations.
1609.04628
http://arxiv.org/abs/1609.04628v1
http://arxiv.org/pdf/1609.04628v1.pdf
[]
[]
[]
8-BqavfJt0
https://paperswithcode.com/paper/simulated-autonomous-driving-in-a-realistic
Simulated Autonomous Driving in a Realistic Driving Environment using Deep Reinforcement Learning and a Deterministic Finite State Machine
In the field of Autonomous Driving, the system controlling the vehicle can be seen as an agent acting in a complex environment and thus naturally fits into the modern framework of Reinforcement Learning. However, learning to drive can be a challenging task and current results are often restricted to simplified driving environments. To advance the field, we present a method to adaptively restrict the action space of the agent according to its current driving situation and show that it can be used to swiftly learn to drive in a realistic environment based on the Deep Q-Network algorithm.
1811.07868
http://arxiv.org/abs/1811.07868v2
http://arxiv.org/pdf/1811.07868v2.pdf
[ "Autonomous Driving" ]
[]
[]
CiGZbyoYFZ
https://paperswithcode.com/paper/is-the-red-square-big-malevic-modeling
Is the Red Square Big? MALeViC: Modeling Adjectives Leveraging Visual Contexts
This work aims at modeling how the meaning of gradable adjectives of size (`big', `small') can be learned from visually-grounded contexts. Inspired by cognitive and linguistic evidence showing that the use of these expressions relies on setting a threshold that is dependent on a specific context, we investigate the ability of multi-modal models in assessing whether an object is `big' or `small' in a given visual scene. In contrast with the standard computational approach that simplistically treats gradable adjectives as `fixed' attributes, we pose the problem as relational: to be successful, a model has to consider the full visual context. By means of four main tasks, we show that state-of-the-art models (but not a relatively strong baseline) can learn the function subtending the meaning of size adjectives, though their performance is found to decrease while moving from simple to more complex tasks. Crucially, models fail in developing abstract representations of gradable adjectives that can be used compositionally.
1908.10285
https://arxiv.org/abs/1908.10285v1
https://arxiv.org/pdf/1908.10285v1.pdf
[]
[]
[]