uid (int64, 4–318k) | paper_url (string, 39–81 chars) | arxiv_id (string, 9–16 chars, ⌀) | title (string, 6–365 chars) | abstract (string, 0–7.27k chars) | url_abs (string, 17–601 chars) | url_pdf (string, 21–819 chars) | proceeding (string, 7–1.03k chars, ⌀) | authors (sequence) | tasks (sequence) | date (float64 epoch ms, 422B–1,672B, ⌀) | methods (list) | __index_level_0__ (int64, 1–197k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
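The schema above maps directly onto the usual dataset tooling. Below is a minimal loading sketch, assuming the dump is hosted as a Hugging Face dataset; the repository id `user/pwc-papers` is a placeholder, not the dataset's real location. It also decodes the `date` column, which stores Unix epoch milliseconds.

```python
# Hypothetical loading sketch; "user/pwc-papers" is a placeholder repo id.
from datetime import datetime, timezone

from datasets import load_dataset  # Hugging Face `datasets` library

ds = load_dataset("user/pwc-papers", split="train")  # placeholder path
row = ds[0]
print(row["title"], "|", row["arxiv_id"])

# `date` is float64 epoch milliseconds and may be null (⌀);
# e.g. 1,488,153,600,000 ms -> 2017-02-27.
if row["date"] is not None:
    dt = datetime.fromtimestamp(row["date"] / 1000, tz=timezone.utc)
    print(dt.date())
```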
21,039 | https://paperswithcode.com/paper/soft-label-memorization-generalization-for | 1702.08563 | Soft Label Memorization-Generalization for Natural Language Inference | Often when multiple labels are obtained for a training example it is assumed that there is an element of noise that must be accounted for. It has been shown that this disagreement can be considered signal instead of noise. In this work we investigate using soft labels for training data to improve generalization in machine learning models. However, using soft labels for training Deep Neural Networks (DNNs) is not practical due to the costs involved in obtaining multiple labels for large data sets. We propose soft label memorization-generalization (SLMG), a fine-tuning approach to using soft labels for training DNNs. We assume that differences in labels provided by human annotators represent ambiguity about the true label instead of noise. Experiments with SLMG demonstrate improved generalization performance on the Natural Language Inference (NLI) task. Our experiments show that by injecting a small percentage of soft label training data (0.03% of training set size) we can improve generalization performance over several baselines. | http://arxiv.org/abs/1702.08563v3 | http://arxiv.org/pdf/1702.08563v3.pdf | null | [
"John P. Lalor",
"Hao Wu",
"Hong Yu"
] | [
"Natural Language Inference"
] | 1,488,153,600,000 | [] | 78,096 |
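The abstract above hinges on soft labels built from annotator disagreement. As an illustration only (this is not the paper's SLMG procedure), one common way to form such a soft label is to normalize annotator votes into a probability distribution:

```python
# Illustrative sketch: annotator votes -> soft label distribution.
from collections import Counter

def soft_label(annotations, classes):
    counts = Counter(annotations)          # votes per class (0 if absent)
    total = sum(counts.values())
    return [counts[c] / total for c in classes]

# five annotators disagree on a hypothetical NLI example
print(soft_label(
    ["entailment", "entailment", "neutral", "entailment", "contradiction"],
    classes=["entailment", "neutral", "contradiction"],
))  # -> [0.6, 0.2, 0.2]
```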
45,691 | https://paperswithcode.com/paper/learning-semantic-script-knowledge-with-event | 1312.5198 | Learning Semantic Script Knowledge with Event Embeddings | Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods. | http://arxiv.org/abs/1312.5198v4 | http://arxiv.org/pdf/1312.5198v4.pdf | null | [
"Ashutosh Modi",
"Ivan Titov"
] | [
"Common Sense Reasoning"
] | 1,387,324,800,000 | [] | 161,855 |
193,091 | https://paperswithcode.com/paper/generic-semi-supervised-adversarial-subject | 2012.03682 | Generic Semi-Supervised Adversarial Subject Translation for Sensor-Based Human Activity Recognition | The performance of Human Activity Recognition (HAR) models, particularly deep neural networks, is highly contingent upon the availability of the massive amount of annotated training data which should be sufficiently labeled. However, data acquisition and manual annotation in the HAR domain are prohibitively expensive due to skilled human resource requirements in both steps. Hence, domain adaptation techniques have been proposed to adapt the knowledge from the existing source of data. More recently, adversarial transfer learning methods have shown very promising results in image classification, yet limited for sensor-based HAR problems, which are still prone to the unfavorable effects of the imbalanced distribution of samples. This paper presents a novel generic and robust approach for semi-supervised domain adaptation in HAR, which capitalizes on the advantages of the adversarial framework to tackle the shortcomings, by leveraging knowledge from annotated samples exclusively from the source subject and unlabeled ones of the target subject. Extensive subject translation experiments are conducted on three large, middle, and small-size datasets with different levels of imbalance to assess the robustness and effectiveness of the proposed model to the scale as well as imbalance in the data. The results demonstrate the effectiveness of our proposed algorithms over state-of-the-art methods, which led to up to 13%, 4%, and 13% improvement of our high-level activities recognition metrics for Opportunity, LISSI, and PAMAP2 datasets, respectively. The LISSI dataset is the most challenging one owing to its less populated and imbalanced distribution. Compared to the SA-GAN adversarial domain adaptation method, the proposed approach enhances the final classification performance with an average of 7.5% for the three datasets, which emphasizes the effectiveness of micro-mini-batch training. | https://arxiv.org/abs/2012.03682v1 | https://arxiv.org/pdf/2012.03682v1.pdf | null | [
"Elnaz Soleimani",
"Ghazaleh Khodabandelou",
"Abdelghani Chibani",
"Yacine Amirat"
] | [
"Activity Recognition",
"Domain Adaptation",
"Human Activity Recognition",
"Image Classification",
"Transfer Learning"
] | 1,605,052,800,000 | [] | 49,729 |
255,201 | https://paperswithcode.com/paper/empathy-driven-arabic-conversational-chatbot | null | Empathy-driven Arabic Conversational Chatbot | Conversational models have witnessed a significant research interest in the last few years with the advancements in sequence generation models. A challenging aspect in developing human-like conversational models is enabling the sense of empathy in bots, making them infer emotions from the person they are interacting with. By learning to develop empathy, chatbot models are able to provide human-like, empathetic responses, thus making the human-machine interaction close to human-human interaction. Recent advances in English use complex encoder-decoder language models that require large amounts of empathetic conversational data. However, research has not produced empathetic bots for Arabic. Furthermore, there is a lack of Arabic conversational data labeled with empathy. To address these challenges, we create an Arabic conversational dataset that comprises empathetic responses. However, the dataset is not large enough to develop very complex encoder-decoder models. To address the limitation of data scale, we propose a special encoder-decoder composed of a Long Short-Term Memory (LSTM) Sequence-to-Sequence (Seq2Seq) with Attention. The experiments showed success of our proposed empathy-driven Arabic chatbot in generating empathetic responses with a perplexity of 38.6, an empathy score of 3.7, and a fluency score of 3.92. | https://aclanthology.org/2020.wanlp-1.6 | https://aclanthology.org/2020.wanlp-1.6.pdf | COLING (WANLP) 2020 12 | [
"Tarek Naous",
"Christian Hokayem",
"Hazem Hajj"
] | [
"Chatbot"
] | 1,606,780,800,000 | [] | 24,873 |
147,683 | https://paperswithcode.com/paper/enhanced-universal-dependency-parsing-with | 2006.01414 | Enhanced Universal Dependency Parsing with Second-Order Inference and Mixture of Training Data | This paper presents the system used in our submission to the \textit{IWPT 2020 Shared Task}. Our system is a graph-based parser with second-order inference. For the low-resource Tamil corpus, we specially mixed the training data of Tamil with other languages and significantly improved the performance of Tamil. Due to our misunderstanding of the submission requirements, we submitted graphs that are not connected, which makes our system only rank \textbf{6th} out of 10 teams. However, after we fixed this problem, our system is 0.6 ELAS higher than the team that ranked \textbf{1st} in the official results. | https://arxiv.org/abs/2006.01414v3 | https://arxiv.org/pdf/2006.01414v3.pdf | WS 2020 7 | [
"Xinyu Wang",
"Yong Jiang",
"Kewei Tu"
] | [
"Dependency Parsing"
] | 1,591,056,000,000 | [] | 130,601 |
255,606 | https://paperswithcode.com/paper/a-risk-communication-event-detection-model | null | A Risk Communication Event Detection Model via Contrastive Learning | This paper presents a time-topic cohesive model describing the communication patterns on the coronavirus pandemic from three Asian countries. The strength of our model is two-fold. First, it detects contextualized events based on topical and temporal information via contrastive learning. Second, it can be applied to multiple languages, enabling a comparison of risk communication across cultures. We present a case study and discuss future implications of the proposed model. | https://aclanthology.org/2020.nlp4if-1.5 | https://aclanthology.org/2020.nlp4if-1.5.pdf | NLP4IF (COLING) 2020 12 | [
"Mingi Shin",
"Sungwon Han",
"Sungkyu Park",
"Meeyoung Cha"
] | [
"Contrastive Learning",
"Event Detection"
] | 1,606,780,800,000 | [] | 66,286 |
38,375 | https://paperswithcode.com/paper/within-brain-classification-for-brain-tumor | 1510.01344 | Within-Brain Classification for Brain Tumor Segmentation | Purpose: In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem of interactive brain tumor segmentation as a machine learning problem. Methods: This method has an advantage over typical machine learning methods for this task where generalization is made across brains. The problem with these methods is that they need to deal with intensity bias correction and other MRI-specific noise. In this paper, we avoid these issues by approaching the problem as one of within brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on some minimum user interaction. Conclusion: We investigate how adding spatial feature coordinates (i.e. $i$, $j$, $k$) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN and random forests. This would only be possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain. Results: As a result of these experiments, we obtain an interactive method whose results reported on the MICCAI-BRATS 2013 dataset are the second most accurate compared to published methods, while using significantly less memory and processing power than most state-of-the-art methods. | http://arxiv.org/abs/1510.01344v1 | http://arxiv.org/pdf/1510.01344v1.pdf | null | [
"Mohammad Havaei",
"Hugo Larochelle",
"Philippe Poulin",
"Pierre-Marc Jodoin"
] | [
"Brain Tumor Segmentation",
"Classification",
"Classification",
"Tumor Segmentation"
] | 1,444,003,200,000 | [
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] | 190,735 |
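The SVM method entry in the row above cites scikit-learn, so a hedged minimal usage sketch of its `SVC` estimator may help illustrate the margin and kernel ideas the description mentions; the toy data here is invented for illustration.

```python
# Toy illustration of the max-margin / kernel-trick ideas described above.
from sklearn import svm

X = [[0, 0], [1, 1], [1, 0], [0, 1]]  # four toy 2-D training points
y = [0, 1, 1, 0]                      # their class labels

clf = svm.SVC(kernel="rbf")  # kernel trick: implicit high-dim feature space
clf.fit(X, y)

print(clf.support_vectors_)        # the margin-defining "support vectors"
print(clf.predict([[0.9, 0.9]]))   # classify a new point
```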
68,930 | https://paperswithcode.com/paper/reading-comprehension-with-graph-based | null | Reading Comprehension with Graph-based Temporal-Causal Reasoning | Complex questions in reading comprehension tasks require integrating information from multiple sentences. In this work, to answer such questions involving temporal and causal relations, we generate event graphs from text based on dependencies, and rank answers by aligning event graphs. In particular, the alignments are constrained by graph-based reasoning to ensure temporal and causal agreement. Our focused approach self-adaptively complements existing solutions; it is automatically triggered only when applicable. Experiments on RACE and MCTest show that state-of-the-art methods are notably improved by using our approach as an add-on. | https://aclanthology.org/C18-1069 | https://aclanthology.org/C18-1069.pdf | COLING 2018 8 | [
"Yawei Sun",
"Gong Cheng",
"Yuzhong Qu"
] | [
"Dependency Parsing",
"Reading Comprehension"
] | 1,533,081,600,000 | [] | 12,845 |
127,180 | https://paperswithcode.com/paper/event-outcome-prediction-using-sentiment | 1912.05066 | Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds | Sentiment Analysis of microblog feeds has attracted considerable interest in recent times. Most of the current work focuses on tweet sentiment classification. But not much work has been done to explore how reliable the opinions of the mass (crowd wisdom) in social network microblogs such as twitter are in predicting outcomes of certain events such as election debates. In this work, we investigate whether crowd wisdom is useful in predicting such outcomes and whether their opinions are influenced by the experts in the field. We work in the domain of multi-label classification to perform sentiment classification of tweets and obtain the opinion of the crowd. This learnt sentiment is then used to predict outcomes of events such as: US Presidential Debate winners, Grammy Award winners, Super Bowl Winners. We find that in most of the cases, the wisdom of the crowd does indeed match with that of the experts, and in cases where they don't (particularly in the case of debates), we see that the crowd's opinion is actually influenced by that of the experts. | https://arxiv.org/abs/1912.05066v1 | https://arxiv.org/pdf/1912.05066v1.pdf | null | [
"Rahul Radhakrishnan Iyer",
"Ronghuo Zheng",
"Yuezhang Li",
"Katia Sycara"
] | [
"Classification",
"Classification",
"Multi-Label Classification",
"Sentiment Analysis"
] | 1,576,022,400,000 | [] | 96,048 |
290,579 | https://paperswithcode.com/paper/m2n-mesh-movement-networks-for-pde-solvers | 2204.11188 | M2N: Mesh Movement Networks for PDE Solvers | Mainstream numerical Partial Differential Equation (PDE) solvers require discretizing the physical domain using a mesh. Mesh movement methods aim to improve the accuracy of the numerical solution by increasing mesh resolution where the solution is not well-resolved, whilst reducing unnecessary resolution elsewhere. However, mesh movement methods, such as the Monge-Ampere method, require the solution of auxiliary equations, which can be extremely expensive especially when the mesh is adapted frequently. In this paper, we propose, to the best of our knowledge, the first learning-based end-to-end mesh movement framework for PDE solvers. Key requirements of learning-based mesh movement methods are alleviating mesh tangling, boundary consistency, and generalization to meshes with different resolutions. To achieve these goals, we introduce the neural spline model and the graph attention network (GAT) into our models respectively. While the Neural-Spline based model provides more flexibility for large deformation, the GAT based model can handle domains with more complicated shapes and is better at performing delicate local deformation. We validate our methods on stationary and time-dependent, linear and non-linear equations, as well as regularly and irregularly shaped domains. Compared to the traditional Monge-Ampere method, our approach can greatly accelerate the mesh adaptation process, whilst achieving comparable numerical error reduction. | https://arxiv.org/abs/2204.11188v1 | https://arxiv.org/pdf/2204.11188v1.pdf | null | [
"Wenbin Song",
"Mingrui Zhang",
"Joseph G. Wallwork",
"Junpeng Gao",
"Zheng Tian",
"Fanglei Sun",
"Matthew D. Piggott",
"Junqing Chen",
"Zuoqiang Shi",
"Xiang Chen",
"Jun Wang"
] | [
"Graph Attention"
] | 1,650,758,400,000 | [
{
"code_snippet_url": null,
"description": "A **Graph Attention Network (GAT)** is a neural network architecture that operates on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods’ features, a GAT enables (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront.\r\n\r\nSee [here](https://docs.dgl.ai/en/0.4.x/tutorials/models/1_gnn/9_gat.html) for an explanation by DGL.",
"full_name": "Graph Attention Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "GAT",
"source_title": "Graph Attention Networks",
"source_url": "http://arxiv.org/abs/1710.10903v3"
}
] | 69,515 |
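The GAT description above is prose only; as a sketch, the single-head attention step from the paper it links in `source_url` (Veličković et al.) can be written densely in NumPy. The dense adjacency matrix and shapes here are simplifications for clarity, not the reference implementation.

```python
# Dense single-head GAT attention step (a sketch, per the linked paper).
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, A, W, a):
    """H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, Fp) shared weights; a: (2*Fp,) attention vector."""
    Z = H @ W                      # shared linear transform of every node
    Fp = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) for all pairs, vectorized
    e = leaky_relu((Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :])
    e = np.where(A > 0, e, -np.inf)             # attend only over neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)  # per-node softmax weights
    return att @ Z                              # attention-weighted aggregation
```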
257,677 | https://paperswithcode.com/paper/mixacm-mixup-based-robustness-transfer-via | 2111.05073 | MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps | Deep neural networks are susceptible to adversarially crafted, small and imperceptible changes in the natural inputs. The most effective defense mechanism against these examples is adversarial training which constructs adversarial examples during training by iterative maximization of loss. The model is then trained to minimize the loss on these constructed examples. This min-max optimization requires more data, larger capacity models, and additional computing resources. It also degrades the standard generalization performance of a model. Can we achieve robustness more efficiently? In this work, we explore this question from the perspective of knowledge transfer. First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation. Second, we propose a novel robustness transfer method called Mixup-Based Activated Channel Maps (MixACM) Transfer. MixACM transfers robustness from a robust teacher to a student by matching activated channel maps generated without expensive adversarial perturbations. Finally, extensive experiments on multiple datasets and different learning scenarios show our method can transfer robustness while also improving generalization on natural images. | https://arxiv.org/abs/2111.05073v1 | https://arxiv.org/pdf/2111.05073v1.pdf | NeurIPS 2021 12 | [
"Muhammad Awais",
"Fengwei Zhou",
"Chuanlong Xie",
"Jiawei Li",
"Sung-Ho Bae",
"Zhenguo Li"
] | [
"Transfer Learning"
] | 1,636,416,000,000 | [
{
"code_snippet_url": "https://github.com/facebookresearch/mixup-cifar10",
"description": "**Mixup** is a data augmentation technique that that generates a weighted combinations of random image pairs from the training data. Given two images and their ground truth labels: $\\left(x\\_{i}, y\\_{i}\\right), \\left(x\\_{j}, y\\_{j}\\right)$, a synthetic training example $\\left(\\hat{x}, \\hat{y}\\right)$ is generated as:\r\n\r\n$$ \\hat{x} = \\lambda{x\\_{i}} + \\left(1 − \\lambda\\right){x\\_{j}} $$\r\n$$ \\hat{y} = \\lambda{y\\_{i}} + \\left(1 − \\lambda\\right){y\\_{j}} $$\r\n\r\nwhere $\\lambda \\sim \\text{Beta}\\left(\\alpha = 0.2\\right)$ is independently sampled for each augmented example.",
"full_name": "Mixup",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.",
"name": "Image Data Augmentation",
"parent": null
},
"name": "Mixup",
"source_title": "mixup: Beyond Empirical Risk Minimization",
"source_url": "http://arxiv.org/abs/1710.09412v2"
}
] | 130,789 |
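The mixup equations quoted in the method entry above translate directly into a few lines of NumPy. This sketch assumes 2-D batches (`x` as `(N, D)` features, `y_onehot` as `(N, C)` labels); image tensors would need the lambda reshaped to broadcast.

```python
# Direct transcription of the mixup equations quoted above (a sketch).
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """x: (N, D) inputs; y_onehot: (N, C) labels; lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=(len(x), 1))       # one lambda per pair
    perm = rng.permutation(len(x))                       # random partner indices
    x_mix = lam * x + (1 - lam) * x[perm]                # x_hat
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # y_hat
    return x_mix, y_mix
```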
270,970 | https://paperswithcode.com/paper/multi-domain-joint-training-for-person-re | 2201.01983 | Multi-Domain Joint Training for Person Re-Identification | Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance. Thus it appears that collecting more training data from diverse environments tends to improve the ReID performance. This paper re-examines this common belief and makes a somewhat surprising observation: using more samples, i.e., training with samples from multiple datasets, does not necessarily lead to better performance by using the popular ReID models. In some cases, training with more samples may even hurt the performance if the evaluation is carried out in one of those datasets. We postulate that this phenomenon is due to the incapability of the standard network in adapting to diverse environments. To overcome this issue, we propose an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can be adaptive to various factors. Specifically, we consider the internal domain-related factor that can be identified from the input features, and external domain-related factors, such as domain information or camera information. Our discovery is that training with such an adaptive model can better benefit from more training samples. Experimental results show that our DCSD can greatly boost the performance (up to 12.3%) when jointly training on multiple datasets. | https://arxiv.org/abs/2201.01983v1 | https://arxiv.org/pdf/2201.01983v1.pdf | null | [
"Lu Yang",
"Lingqiao Liu",
"Yunlong Wang",
"Peng Wang",
"Yanning Zhang"
] | [
"Person Re-Identification"
] | 1,641,427,200,000 | [] | 91,626 |
252,565 | https://paperswithcode.com/paper/intrusion-detection-machine-learning-baseline | 2111.02378 | Intrusion Detection: Machine Learning Baseline Calculations for Image Classification | Cyber security can be enhanced through application of machine learning by recasting network attack data into an image format, then applying supervised computer vision and other machine learning techniques to detect malicious specimens. Exploratory data analysis reveals little correlation and few distinguishing characteristics between the ten classes of malware used in this study. A general model comparison demonstrates that the most promising candidates for consideration are Light Gradient Boosting Machine, Random Forest Classifier, and Extra Trees Classifier. Convolutional networks fail to deliver their outstanding classification ability, being surpassed by a simple, fully connected architecture. Most tests fail to break 80% categorical accuracy and present low F1 scores, indicating more sophisticated approaches (e.g., bootstrapping, random samples, and feature selection) may be required to maximize performance. | https://arxiv.org/abs/2111.02378v1 | https://arxiv.org/pdf/2111.02378v1.pdf | null | [
"Erik Larsen",
"Korey MacVittie",
"John Lilly"
] | [
"Classification",
"Image Classification",
"Intrusion Detection"
] | 1,635,897,600,000 | [] | 150,610 |
16,285 | https://paperswithcode.com/paper/two-stage-algorithm-for-fairness-aware | 1710.04924 | Two-stage Algorithm for Fairness-aware Machine Learning | The algorithmic decision making process now affects many aspects of our lives. Standard tools for machine learning, such as classification and regression, are subject to the bias in data, and thus direct application of such off-the-shelf tools could lead to a specific group being unfairly discriminated against. Removing sensitive attributes of data does not solve this problem because a \textit{disparate impact} can arise when non-sensitive attributes and sensitive attributes are correlated. Here, we study a fair machine learning algorithm that avoids such a disparate impact when making a decision. Inspired by the two-stage least squares method that is widely used in the field of economics, we propose a two-stage algorithm that removes bias in the training data. The proposed algorithm is conceptually simple. Unlike most existing fair algorithms that are designed for classification tasks, the proposed method is able to (i) deal with regression tasks, (ii) combine explanatory attributes to remove reverse discrimination, and (iii) deal with numerical sensitive attributes. The performance and fairness of the proposed algorithm are evaluated in simulations with synthetic and real-world datasets. | http://arxiv.org/abs/1710.04924v1 | http://arxiv.org/pdf/1710.04924v1.pdf | null | [
"Junpei Komiyama",
"Hajime Shimao"
] | [
"Fairness",
"Classification"
] | 1,507,852,800,000 | [] | 32,017 |
77,605 | https://paperswithcode.com/paper/scalable-realistic-recommendation-datasets | 1901.08910 | Scalable Realistic Recommendation Datasets through Fractal Expansions | Recommender System research suffers currently from a disconnect between the size of academic data sets and the scale of industrial production systems. In order to bridge that gap we propose to generate more massive user/item interaction data sets by expanding pre-existing public data sets. User/item incidence matrices record interactions between users and items on a given platform as a large sparse matrix whose rows correspond to users and whose columns correspond to items. Our technique expands such matrices to larger numbers of rows (users), columns (items) and non zero values (interactions) while preserving key higher order statistical properties. We adapt the Kronecker Graph Theory to user/item incidence matrices and show that the corresponding fractal expansions preserve the fat-tailed distributions of user engagements, item popularity and singular value spectra of user/item interaction matrices. Preserving such properties is key to building large realistic synthetic data sets which in turn can be employed reliably to benchmark Recommender Systems and the systems employed to train them. We provide algorithms to produce such expansions and apply them to the MovieLens 20 million data set comprising 20 million ratings of 27K movies by 138K users. The resulting expanded data set has 10 billion ratings, 864K items and 2 million users in its smaller version and can be scaled up or down. A larger version features 655 billion ratings, 7 million items and 17 million users. | http://arxiv.org/abs/1901.08910v3 | http://arxiv.org/pdf/1901.08910v3.pdf | null | [
"Francois Belletti",
"Karthik Lakshmanan",
"Walid Krichene",
"Yi-fan Chen",
"John Anderson"
] | [
"Recommendation Systems"
] | 1,548,201,600,000 | [] | 97,812 |
294,612 | https://paperswithcode.com/paper/sainet-stereo-aware-inpainting-behind-objects | 2205.07014 | SaiNet: Stereo aware inpainting behind objects with generative networks | In this work, we present an end-to-end network for stereo-consistent image inpainting with the objective of inpainting large missing regions behind objects. The proposed model consists of an edge-guided UNet-like network using Partial Convolutions. We enforce multi-view stereo consistency by introducing a disparity loss. More importantly, we develop a training scheme where the model is learned from realistic stereo masks representing object occlusions, instead of the more common random masks. The technique is trained in a supervised way. Our evaluation shows competitive results compared to previous state-of-the-art techniques. | https://arxiv.org/abs/2205.07014v1 | https://arxiv.org/pdf/2205.07014v1.pdf | null | [
"Violeta Menéndez González",
"Andrew Gilbert",
"Graeme Phillipson",
"Stephen Jolly",
"Simon Hadfield"
] | [
"Image Inpainting"
] | 1,652,486,400,000 | [
{
"code_snippet_url": "",
"description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.",
"full_name": "Inpainting",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "Inpainting",
"source_title": "Context Encoders: Feature Learning by Inpainting",
"source_url": "http://arxiv.org/abs/1604.07379v2"
}
] | 105,083 |
48,071 | https://paperswithcode.com/paper/constrained-bayesian-inference-for-low-rank | 1309.6840 | Constrained Bayesian Inference for Low Rank Multitask Learning | We present a novel approach for constrained Bayesian inference. Unlike current methods, our approach does not require convexity of the constraint set. We reduce the constrained variational inference to a parametric optimization over the feasible set of densities and propose a general recipe for such problems. We apply the proposed constrained Bayesian inference approach to multitask learning subject to rank constraints on the weight matrix. Further, constrained parameter estimation is applied to recover the sparse conditional independence structure encoded by prior precision matrices. Our approach is motivated by reverse inference for high dimensional functional neuroimaging, a domain where the high dimensionality and small number of examples requires the use of constraints to ensure meaningful and effective models. For this application, we propose a model that jointly learns a weight matrix and the prior inverse covariance structure between different tasks. We present experimental validation showing that the proposed approach outperforms strong baseline models in terms of predictive performance and structure recovery. | http://arxiv.org/abs/1309.6840v1 | http://arxiv.org/pdf/1309.6840v1.pdf | null | [
"Oluwasanmi Koyejo",
"Joydeep Ghosh"
] | [
"Bayesian Inference",
"Variational Inference"
] | 1,380,153,600,000 | [] | 150,000 |
132,863 | https://paperswithcode.com/paper/sparseids-learning-packet-sampling-with | 2002.03872 | SparseIDS: Learning Packet Sampling with Reinforcement Learning | Recurrent Neural Networks (RNNs) have been shown to be valuable for constructing Intrusion Detection Systems (IDSs) for network data. They allow determining whether a flow is malicious even before it is over, making it possible to take action immediately. However, considering the large number of packets that have to be inspected, for example in cloud/fog and edge computing, the question of computational efficiency arises. We show that by using a novel Reinforcement Learning (RL)-based approach called SparseIDS, we can reduce the number of consumed packets by more than three fourths while keeping classification accuracy high. To minimize the computational expenses of the RL-based sampling we show that a shared neural network can be used for both the classifier and the RL logic. Thus, no additional resources are consumed by the sampling in deployment. Compared to various other sampling techniques, SparseIDS consistently achieves higher classification accuracy by learning to sample only relevant packets. A major novelty of our RL-based approach is that it can not only skip up to a predefined maximum number of samples like other approaches proposed in the domain of Natural Language Processing but can even skip arbitrarily many packets in one step. This enables saving even more computational resources for long sequences. Inspecting SparseIDS's behavior of choosing packets shows that it adopts different sampling strategies for different attack types and network flows. Finally we build an automatic steering mechanism that can guide SparseIDS in deployment to achieve a desired level of sparsity. | https://arxiv.org/abs/2002.03872v3 | https://arxiv.org/pdf/2002.03872v3.pdf | null | [
"Maximilian Bachl",
"Fares Meghdouri",
"Joachim Fabini",
"Tanja Zseby"
] | [
"Edge-computing",
"Classification",
"Intrusion Detection",
"reinforcement-learning"
] | 1,581,292,800,000 | [] | 195,861 |
103,371 | https://paperswithcode.com/paper/190600639 | 1906.00639 | BAYHENN: Combining Bayesian Deep Learning and Homomorphic Encryption for Secure DNN Inference | Recently, deep learning as a service (DLaaS) has emerged as a promising way to facilitate the employment of deep neural networks (DNNs) for various purposes. However, using DLaaS also causes potential privacy leakage from both clients and cloud servers. This privacy issue has fueled the research interests on the privacy-preserving inference of DNN models in the cloud service. In this paper, we present a practical solution named BAYHENN for secure DNN inference. It can protect both the client's privacy and server's privacy at the same time. The key strategy of our solution is to combine homomorphic encryption and Bayesian neural networks. Specifically, we use homomorphic encryption to protect a client's raw data and use Bayesian neural networks to protect the DNN weights in a cloud server. To verify the effectiveness of our solution, we conduct experiments on MNIST and a real-life clinical dataset. Our solution achieves consistent latency decreases on both tasks. In particular, our method can outperform the best existing method (GAZELLE) by about 5x, in terms of end-to-end latency. | https://arxiv.org/abs/1906.00639v2 | https://arxiv.org/pdf/1906.00639v2.pdf | null | [
"Peichen Xie",
"Bingzhe Wu",
"Guangyu Sun"
] | [
"Privacy Preserving"
] | 1,559,520,000,000 | [] | 45,766 |
304,566 | https://paperswithcode.com/paper/simultaneous-contact-rich-grasping-and | 2207.01418 | Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots | While motion planning of locomotion for legged robots has shown great success, motion planning for legged robots with dexterous multi-finger grasping is not mature yet. We present an efficient motion planning framework for simultaneously solving locomotion (e.g., centroidal dynamics), grasping (e.g., patch contact), and contact (e.g., gait) problems. To accelerate the planning process, we propose distributed optimization frameworks based on Alternating Direction Methods of Multipliers (ADMM) to solve the original large-scale Mixed-Integer NonLinear Programming (MINLP). The resulting frameworks use Mixed-Integer Quadratic Programming (MIQP) to solve contact and NonLinear Programming (NLP) to solve nonlinear dynamics, which are more computationally tractable and less sensitive to parameters. Also, we explicitly enforce patch contact constraints from limit surfaces with micro-spine grippers. We demonstrate our proposed framework in the hardware experiments, showing that the multi-limbed robot is able to realize various motions including free-climbing at a slope angle 45{\deg} with a much shorter planning time. | https://arxiv.org/abs/2207.01418v2 | https://arxiv.org/pdf/2207.01418v2.pdf | null | [
"Yuki Shirai",
"Xuan Lin",
"Alexander Schperberg",
"Yusuke Tanaka",
"Hayato Kato",
"Varit Vichathorn",
"Dennis Hong"
] | [
"Distributed Optimization",
"Motion Planning"
] | 1,656,892,800,000 | [] | 80,756 |
149,793 | https://paperswithcode.com/paper/improving-gan-training-with-probability-ratio | 2006.06900 | Improving GAN Training with Probability Ratio Clipping and Sample Reweighting | Despite success on a wide range of problems related to vision, generative adversarial networks (GANs) often suffer from inferior performance due to unstable training, especially for text generation. To solve this issue, we propose a new variational GAN training framework which enjoys superior training stability. Our approach is inspired by a connection of GANs and reinforcement learning under a variational perspective. The connection leads to (1) probability ratio clipping that regularizes generator training to prevent excessively large updates, and (2) a sample re-weighting mechanism that improves discriminator training by downplaying bad-quality fake samples. Moreover, our variational GAN framework can provably overcome the training issue in many GANs that an optimal discriminator cannot provide any informative gradient to training generator. By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks, including text generation, text style transfer, and image generation. | https://arxiv.org/abs/2006.06900v4 | https://arxiv.org/pdf/2006.06900v4.pdf | NeurIPS 2020 12 | [
"Yue Wu",
"Pan Zhou",
"Andrew Gordon Wilson",
"Eric P. Xing",
"Zhiting Hu"
] | [
"Image Generation",
"Style Transfer",
"Text Generation",
"Text Style Transfer"
] | 1,591,920,000,000 | [] | 53,230 |
43,429 | https://paperswithcode.com/paper/optical-character-recognition-using-k-nearest | 1411.1442 | Optical Character Recognition, Using K-Nearest Neighbors | The problem of optical character recognition, OCR, has been widely discussed in the literature. Having a hand-written text, the program aims at recognizing the text. Even though there are several approaches to this issue, it is still an open problem. In this paper we would like to propose an approach that uses the K-nearest neighbors algorithm, and has an accuracy of more than 90%. The training and run time is also very short. | http://arxiv.org/abs/1411.1442v1 | http://arxiv.org/pdf/1411.1442v1.pdf | null | [
"Wei Wang"
] | [
"Optical Character Recognition"
] | 1,415,145,600,000 | [] | 131,802 |
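Since the abstract above rests entirely on k-nearest neighbors, a generic scikit-learn digit-classification sketch illustrates the algorithm; it does not reproduce the paper's own features, data, or reported >90% figure.

```python
# Generic k-NN digit classification sketch (not the paper's pipeline).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)          # 8x8 digit images, flattened
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"accuracy: {knn.score(X_te, y_te):.3f}")  # well above 0.9 on this toy set
```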
107,296 | https://paperswithcode.com/paper/learning-discriminative-features-using-center | 1906.08873 | Learning Discriminative features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition | This paper proposes a Convolutional Neural Network (CNN) inspired by Multitask Learning (MTL) and based on speech features trained under the joint supervision of softmax loss and center loss, a powerful metric learning strategy, for the recognition of emotion in speech. Speech features such as Spectrograms and Mel-frequency Cepstral Coefficients (MFCCs) help retain emotion-related low-level characteristics in speech. We experimented with several Deep Neural Network (DNN) architectures that take in speech features as input and trained them under both softmax and center loss, which resulted in highly discriminative features ideal for Speech Emotion Recognition (SER). Our networks also employ a regularizing effect by simultaneously performing the auxiliary task of reconstructing the input speech features. This sharing of representations among related tasks enables our network to better generalize the original task of SER. Some of our proposed networks contain far fewer parameters when compared to state-of-the-art architectures. | https://arxiv.org/abs/1906.08873v2 | https://arxiv.org/pdf/1906.08873v2.pdf | null | [
"Suraj Tripathi",
"Abhiram Ramesh",
"Abhay Kumar",
"Chirag Singh",
"Promod Yenigalla"
] | [
"Emotion Recognition",
"Metric Learning",
"Speech Emotion Recognition"
] | 1,560,902,400,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] | 81,858 |
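The softmax formula quoted in this row's method entry is implemented below; the max-subtraction line is a standard numerical-stability trick added here, not part of the quoted formula.

```python
# The softmax formula from the entry above, with a stability trick added.
import numpy as np

def softmax(logits):
    z = logits - np.max(logits, axis=-1, keepdims=True)  # stabilize exp()
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

print(softmax(np.array([1.0, 2.0, 3.0])))  # probabilities summing to 1
```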
113,425 | https://paperswithcode.com/paper/channel-wise-pruning-of-neural-networks-with | 1812.07060 | Channel-wise pruning of neural networks with tapering resource constraint | Neural network pruning is an important step in design process of efficient neural networks for edge devices with limited computational power. Pruning is a form of knowledge transfer from the weights of the original network to a smaller target subnetwork. We propose a new method for compute-constrained structured channel-wise pruning of convolutional neural networks. The method iteratively fine-tunes the network, while gradually tapering the computation resources available to the pruned network via a holonomic constraint in the method of Lagrangian multipliers framework. An explicit and adaptive automatic control over the rate of tapering is provided. The trainable parameters of our pruning method are separate from the weights of the neural network, which allows us to avoid the interference with the neural network solver (e.g. avoid the direct dependence of pruning speed on neural network learning rates). Our method combines the `rigoristic' approach by the direct application of constrained optimization, avoiding the pitfalls of ADMM-based methods, like their need to define the target amount of resources for each pruning run, and direct dependence of pruning speed and priority of pruning on the relative scale of weights between layers. For VGG-16 @ ILSVRC-2012, we achieve reduction of 15.47 -> 3.87 GMAC with only 1% top-1 accuracy reduction (68.4% -> 67.4%). For AlexNet @ ILSVRC-2012, we achieve 0.724 -> 0.411 GMAC with 1% top-1 accuracy reduction (56.8% -> 55.8%). | https://arxiv.org/abs/1812.07060v1 | https://arxiv.org/pdf/1812.07060v1.pdf | null | [
"Alexey Kruglov"
] | [
"Network Pruning",
"Transfer Learning"
] | 1,543,881,600,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": "VGG-16",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "VGG-16",
"source_title": "Very Deep Convolutional Networks for Large-Scale Image Recognition",
"source_url": "http://arxiv.org/abs/1409.1556v6"
},
{
"code_snippet_url": "https://www.healthnutra.org/es/maxup/",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L13",
"description": "**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.\r\n\r\n$$ b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n}\\sum_{c'=\\max(0, c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta} $$\r\n\r\nWhere the size is the number of neighbouring channels used for normalization, $\\alpha$ is multiplicative factor, $\\beta$ an exponent and $k$ an additive factor",
"full_name": "Local Response Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Local Response Normalization",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/dansuh17/alexnet-pytorch/blob/d0c1b1c52296ffcbecfbf5b17e1d1685b4ca6744/model.py#L40",
"description": "**AlexNet** is a classic convolutional neural network architecture. It consists of convolutions, [max pooling](https://paperswithcode.com/method/max-pooling) and dense layers as the basic building blocks. Grouped convolutions are used in order to fit the model across two GPUs.",
"full_name": "AlexNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "AlexNet",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
}
] | 170,420 |
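Of the method entries in the row above, Local Response Normalization is the one with a fully specified formula, so here is a literal NumPy transcription for a 1-D vector of channel activations at one spatial position. The default hyperparameters are the AlexNet-paper values, assumed here since the entry itself does not state them.

```python
# Literal transcription of the cross-channel LRN formula quoted above.
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """a: 1-D array of channel activations at one spatial position."""
    N = len(a)
    b = np.empty_like(a, dtype=float)
    for c in range(N):
        lo, hi = max(0, c - n // 2), min(N - 1, c + n // 2)  # channel window
        s = np.sum(a[lo:hi + 1] ** 2)                        # sum of squares
        b[c] = a[c] * (k + (alpha / n) * s) ** (-beta)
    return b
```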
17,431 | https://paperswithcode.com/paper/general-phase-regularized-reconstruction | 1709.05374 | General Phase Regularized Reconstruction using Phase Cycling | Purpose: To develop a general phase regularized image reconstruction method,
with applications to partial Fourier imaging, water-fat imaging and flow
imaging.
Theory and Methods: The problem of enforcing phase constraints in
reconstruction was studied under a regularized inverse problem framework. A
general phase regularized reconstruction algorithm was proposed to enable
various joint reconstruction of partial Fourier imaging, water-fat imaging and
flow imaging, along with parallel imaging (PI) and compressed sensing (CS).
Since phase regularized reconstruction is inherently non-convex and sensitive
to phase wraps in the initial solution, a reconstruction technique, named phase
cycling, was proposed to render the overall algorithm invariant to phase wraps.
The proposed method was applied to retrospectively under-sampled in vivo
datasets and compared with state of the art reconstruction methods.
Results: Phase cycling reconstructions showed reduction of artifacts compared
to reconstructions without phase cycling, and achieved performance similar to
state of the art results in partial Fourier, water-fat and divergence-free
regularized flow reconstruction. Joint reconstruction of partial Fourier +
water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized
flow imaging + PI + CS were demonstrated.
Conclusion: The proposed phase cycling reconstruction provides an alternative
way to perform phase regularized reconstruction, without the need to perform
phase unwrapping. It is robust to the choice of initial solutions and
encourages the joint reconstruction of phase imaging applications. | http://arxiv.org/abs/1709.05374v1 | http://arxiv.org/pdf/1709.05374v1.pdf | null | [
"Frank Ong",
"Joseph Cheng",
"Michael Lustig"
] | [
"Image Reconstruction"
] | 1,505,433,600,000 | [] | 59,491 |
266,826 | https://paperswithcode.com/paper/development-of-a-robust-cascaded-architecture | 2112.03001 | Development of a robust cascaded architecture for intelligent robot grasping using limited labelled data | Grasping objects intelligently is a challenging task even for humans, and we spend a considerable amount of time during our childhood learning how to grasp objects correctly. In the case of robots, we cannot afford to spend that much time teaching a robot how to grasp objects effectively. Therefore, in the present research we propose an efficient learning architecture based on VQVAE so that robots can be taught with sufficient data corresponding to correct grasping. However, getting sufficient labelled data is extremely difficult in the robot grasping domain. To help solve this problem, a semi-supervised learning based model, which has much greater generalization capability even with a limited labelled data set, has been investigated. Its performance shows a 6\% improvement when compared with existing state-of-the-art models, including our earlier model. During experimentation, it was observed that our proposed model, RGGCNN2, performs significantly better, both in grasping isolated objects and objects in a cluttered environment, compared to the existing approaches which do not use unlabelled data for generating grasping rectangles. To the best of our knowledge, an intelligent robot grasping model (based on semi-supervised learning) trained through representation learning, exploiting the high-quality learning ability of the GGCNN2 architecture with a limited number of labelled examples together with the learned latent embeddings, can be used as a de-facto training method; this has been established and validated in this paper through rigorous hardware experimentation using the Baxter (Anukul) research robot. | https://arxiv.org/abs/2112.03001v1 | https://arxiv.org/pdf/2112.03001v1.pdf | null | [
"Priya Shukla",
"Vandana Kushwaha",
"G. C. Nandi"
] | [
"Representation Learning"
] | 1,636,156,800,000 | [] | 29,610 |
254,361 | https://paperswithcode.com/paper/less-is-more-domain-adaptation-with-lottery | null | Less Is More: Domain Adaptation with Lottery Ticket for Reading Comprehension | In this paper, we propose a simple few-shot domain adaptation paradigm for reading comprehension. We first identify the lottery subnetwork structure within the Transformer-based source domain model via gradual magnitude pruning. Then, we only fine-tune the lottery subnetwork, a small fraction of the whole parameters, on the annotated target domain data for adaptation. To obtain more adaptable subnetworks, we introduce self-attention attribution to weigh parameters, beyond simply pruning the smallest magnitude parameters, which can be seen as combining structured pruning and unstructured magnitude pruning softly. Experimental results show that our method outperforms the full model fine-tuning adaptation on four out of five domains when only a small amount of annotated data available for adaptation. Moreover, introducing self-attention attribution reserves more parameters for important attention heads in the lottery subnetwork and improves the target domain model performance. Our further analyses reveal that, besides exploiting fewer parameters, the choice of subnetworks is critical to the effectiveness. | https://aclanthology.org/2021.findings-emnlp.95 | https://aclanthology.org/2021.findings-emnlp.95.pdf | Findings (EMNLP) 2021 11 | [
"Haichao Zhu",
"Zekun Wang",
"Heng Zhang",
"Ming Liu",
"Sendong Zhao",
"Bing Qin"
] | [
"Domain Adaptation",
"Reading Comprehension"
] | 1,635,724,800,000 | [] | 116,517 |
13,158 | https://paperswithcode.com/paper/deep-metric-learning-for-multi-labelled | 1712.07682 | Deep metric learning for multi-labelled radiographs | Many radiological studies can reveal the presence of several co-existing
abnormalities, each one represented by a distinct visual pattern. In this
article we address the problem of learning a distance metric for plain
radiographs that captures a notion of "radiological similarity": two chest
radiographs are considered to be similar if they share similar abnormalities.
Deep convolutional neural networks (DCNs) are used to learn a low-dimensional
embedding for the radiographs that is equipped with the desired metric. Two
loss functions are proposed to deal with multi-labelled images and potentially
noisy labels. We report on a large-scale study involving over 745,000 chest
radiographs whose labels were automatically extracted from free-text
radiological reports through a natural language processing system. Using 4,500
validated exams, we demonstrate that the methodology performs satisfactorily on
clustering and image retrieval tasks. Remarkably, the learned metric separates
normal exams from those having radiological abnormalities. | http://arxiv.org/abs/1712.07682v1 | http://arxiv.org/pdf/1712.07682v1.pdf | null | [
"Mauro Annarumma",
"Giovanni Montana"
] | [
"Image Retrieval",
"Metric Learning"
] | 1,512,950,400,000 | [] | 41,577 |
132 | https://paperswithcode.com/paper/latent-convolutional-models | 1806.06284 | Latent Convolutional Models | We present a new latent model of natural images that can be learned on
large-scale datasets. The learning process provides a latent embedding for
every image in the training dataset, as well as a deep convolutional network
that maps the latent space to the image space. After training, the new model
provides a strong and universal image prior for a variety of image restoration
tasks such as large-hole inpainting, superresolution, and colorization. To
model high-resolution natural images, our approach uses latent spaces of very
high dimensionality (one to two orders of magnitude higher than previous latent
image models). To tackle this high dimensionality, we use latent spaces with a
special manifold structure (convolutional manifolds) parameterized by a ConvNet
of a certain architecture. In the experiments, we compare the learned latent
models with latent models learned by autoencoders, advanced variants of
generative adversarial networks, and a strong baseline system using simpler
parameterization of the latent space. Our model outperforms the competing
approaches over a range of restoration tasks. | http://arxiv.org/abs/1806.06284v2 | http://arxiv.org/pdf/1806.06284v2.pdf | ICLR 2019 5 | [
"ShahRukh Athar",
"Evgeny Burnaev",
"Victor Lempitsky"
] | [
"Colorization",
"Image Restoration"
] | 1,529,107,200,000 | [] | 118,110 |
115,956 | https://paperswithcode.com/paper/aituning-machine-learning-based-tuning-tool | 1909.06301 | AITuning: Machine Learning-based Tuning Tool for Run-Time Communication Libraries | In this work, we address the problem of tuning communication libraries by using a deep reinforcement learning approach. Reinforcement learning is a machine learning technique incredibly effective in solving game-like situations. In fact, tuning a set of parameters in a communication library in order to get better performance in a parallel application can be expressed as a game: Find the right combination/path that provides the best reward. Even though AITuning has been designed to be utilized with different run-time libraries, we focused this work on applying it to the OpenCoarrays run-time communication library, built on top of MPI-3. This work not only shows the potential of using a reinforcement learning algorithm for tuning communication libraries, but also demonstrates how the MPI Tool Information Interface, introduced by the MPI-3 standard, can be used effectively by run-time libraries to improve the performance without human intervention. | https://arxiv.org/abs/1909.06301v1 | https://arxiv.org/pdf/1909.06301v1.pdf | null | [
"Alessandro Fanfarillo",
"Davide Del Vento"
] | [
"reinforcement-learning"
] | 1,568,332,800,000 | [] | 18,851 |
149,529 | https://paperswithcode.com/paper/latent-transformations-for-discrete-data | 2006.06346 | Latent Transformations for Discrete-Data Normalising Flows | Normalising flows (NFs) for discrete data are challenging because parameterising bijective transformations of discrete variables requires predicting discrete/integer parameters. Having a neural network architecture predict discrete parameters takes a non-differentiable activation function (e.g., the step function) which precludes gradient-based learning. To circumvent this non-differentiability, previous work has employed biased proxy gradients, such as the straight-through estimator. We present an unbiased alternative where rather than deterministically parameterising one transformation, we predict a distribution over latent transformations. With stochastic transformations, the marginal likelihood of the data is differentiable and gradient-based learning is possible via score function estimation. To test the viability of discrete-data NFs we investigate performance on binary MNIST. We observe great challenges with both deterministic proxy gradients and unbiased score function estimation. Whereas the former often fails to learn even a shallow transformation, the variance of the latter could not be sufficiently controlled to admit deeper NFs. | https://arxiv.org/abs/2006.06346v1 | https://arxiv.org/pdf/2006.06346v1.pdf | null | [
"Rob Hesselink",
"Wilker Aziz"
] | [
"Normalising Flows"
] | 1,591,833,600,000 | [] | 13,659 |
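The abstract above contrasts biased straight-through gradients with unbiased score-function estimation. The sketch below shows the generic score-function (REINFORCE) trick for a discrete latent variable; it is a minimal illustration of the estimator, not the paper's model, and all sizes are arbitrary.

```python
import torch

# Score-function estimator for grad E_{z ~ p_theta}[f(z)]:
# grad = E[ f(z) * d/dtheta log p_theta(z) ], estimated by sampling.
logits = torch.zeros(4, requires_grad=True)   # parameters of a categorical

def f(z):
    # Any black-box, non-differentiable objective over the discrete latent.
    return (z == 2).float()

dist = torch.distributions.Categorical(logits=logits)
z = dist.sample((1000,))                       # 1000 samples of the latent
surrogate = (f(z) * dist.log_prob(z)).mean()   # differentiable surrogate loss
surrogate.backward()                           # logits.grad holds the estimate
print(logits.grad)
```

As the abstract notes, this estimator is unbiased but can be high-variance, which is why variance control matters for deeper flows.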
154,229 | https://paperswithcode.com/paper/predictive-maintenance-for-edge-based-sensor | 2007.03313 | Predictive Maintenance for Edge-Based Sensor Networks: A Deep Reinforcement Learning Approach | Failure of mission-critical equipment interrupts production and results in monetary loss. The risk of unplanned equipment downtime can be minimized through Predictive Maintenance of revenue generating assets to ensure optimal performance and safe operation of equipment. However, the increased sensorization of the equipment generates a data deluge, and existing machine-learning based predictive model alone becomes inadequate for timely equipment condition predictions. In this paper, a model-free Deep Reinforcement Learning algorithm is proposed for predictive equipment maintenance from an equipment-based sensor network context. Within each equipment, a sensor device aggregates raw sensor data, and the equipment health status is analyzed for anomalous events. Unlike traditional black-box regression models, the proposed algorithm self-learns an optimal maintenance policy and provides actionable recommendation for each equipment. Our experimental results demonstrate the potential for broader range of equipment maintenance applications as an automatic learning framework. | https://arxiv.org/abs/2007.03313v1 | https://arxiv.org/pdf/2007.03313v1.pdf | null | [
"Kevin Shen Hoong Ong",
"Dusit Niyato",
"Chau Yuen"
] | [
"reinforcement-learning"
] | 1,594,080,000,000 | [] | 160,482 |
269,319 | https://paperswithcode.com/paper/active-learning-of-quantum-system | 2112.14553 | Active Learning of Quantum System Hamiltonians yields Query Advantage | Hamiltonian learning is an important procedure in quantum system identification, calibration, and successful operation of quantum computers. Through queries to the quantum system, this procedure seeks to obtain the parameters of a given Hamiltonian model and description of noise sources. Standard techniques for Hamiltonian learning require careful design of queries and $O(\epsilon^{-2})$ queries in achieving learning error $\epsilon$ due to the standard quantum limit. With the goal of efficiently and accurately estimating the Hamiltonian parameters within learning error $\epsilon$ through minimal queries, we introduce an active learner that is given an initial set of training examples and the ability to interactively query the quantum system to generate new training data. We formally specify and experimentally assess the performance of this Hamiltonian active learning (HAL) algorithm for learning the six parameters of a two-qubit cross-resonance Hamiltonian on four different superconducting IBM Quantum devices. Compared with standard techniques for the same problem and a specified learning error, HAL achieves up to a $99.8\%$ reduction in queries required, and a $99.1\%$ reduction over the comparable non-adaptive learning algorithm. Moreover, with access to prior information on a subset of Hamiltonian parameters and given the ability to select queries with linearly (or exponentially) longer system interaction times during learning, HAL can exceed the standard quantum limit and achieve Heisenberg (or super-Heisenberg) limited convergence rates during learning. | https://arxiv.org/abs/2112.14553v1 | https://arxiv.org/pdf/2112.14553v1.pdf | null | [
"Arkopal Dutt",
"Edwin Pednault",
"Chai Wah Wu",
"Sarah Sheldon",
"John Smolin",
"Lev Bishop",
"Isaac L. Chuang"
] | [
"Active Learning"
] | 1,640,736,000,000 | [] | 15,092 |
155,025 | https://paperswithcode.com/paper/fast-global-convergence-of-natural-policy | 2007.06558 | Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization | Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms in contemporary reinforcement learning. This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization. Despite the empirical success, the theoretical underpinnings for NPG methods remain limited even for the tabular setting. This paper develops $\textit{non-asymptotic}$ convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on discounted Markov decision processes (MDPs). Assuming access to exact policy evaluation, we demonstrate that the algorithm converges linearly -- or even quadratically once it enters a local region around the optimal policy -- when computing optimal value functions of the regularized MDP. Moreover, the algorithm is provably stable vis-\`a-vis inexactness of policy evaluation. Our convergence results accommodate a wide range of learning rates, and shed light upon the role of entropy regularization in enabling fast convergence. | https://arxiv.org/abs/2007.06558v5 | https://arxiv.org/pdf/2007.06558v5.pdf | null | [
"Shicong Cen",
"Chen Cheng",
"Yuxin Chen",
"Yuting Wei",
"Yuejie Chi"
] | [
"Policy Gradient Methods"
] | 1,594,598,400,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/ikostrikov/pytorch-a3c/blob/48d95844755e2c3e2c7e48bbd1a7141f7212b63f/train.py#L100",
"description": "**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), the same mutual reinforcement behaviour leads to a highly-peaked $\\pi\\left(a\\mid{s}\\right)$ towards a few actions or action sequences, since it is easier for the actor and critic to overoptimise to a small portion of the environment. To reduce this problem, entropy regularization adds an entropy term to the loss to promote action diversity:\r\n\r\n$$H(X) = -\\sum\\pi\\left(x\\right)\\log\\left(\\pi\\left(x\\right)\\right) $$\r\n\r\nImage Credit: Wikipedia",
"full_name": "Entropy Regularization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Entropy Regularization",
"source_title": "Asynchronous Methods for Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1602.01783v2"
}
] | 24,347 |
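The entropy regularization entry above gives the formula $H(X) = -\sum \pi(x)\log \pi(x)$; here is a minimal sketch of adding an entropy bonus to a policy-gradient loss, assuming a softmax policy over discrete actions (the batch size, action count, and coefficient are arbitrary example values).

```python
import torch

# Entropy bonus for a softmax policy: H(pi) = -sum_a pi(a|s) log pi(a|s).
# Subtracting beta * H from the loss encourages more uniform, exploratory policies.
logits = torch.randn(8, 4, requires_grad=True)     # 8 states, 4 actions
log_probs = torch.log_softmax(logits, dim=-1)
probs = log_probs.exp()

entropy = -(probs * log_probs).sum(dim=-1).mean()  # mean entropy over the batch
policy_loss = torch.zeros(())                      # stand-in for the usual PG loss
beta = 0.01
loss = policy_loss - beta * entropy                # entropy-regularised objective
loss.backward()
```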
276,213 | https://paperswithcode.com/paper/deformable-image-registration-uncertainty | null | Deformable Image Registration uncertainty quantification using deep learning for dose accumulation in adaptive proton therapy | Deformable image registration (DIR) is a key element in adaptive radiotherapy (AR) to include anatomical modifications in the adaptive planning. In AR, daily 3D images are acquired and DIR can be used for structure propagation and to deform the daily dose to a reference anatomy. Quantifying the uncertainty associated with DIR is essential. Here, a probabilistic unsupervised deep learning method is presented to predict the variance of a given deformable vector field (DVF). It is shown that the proposed method can predict the uncertainty associated with various conventional DIR algorithms for breathing deformation in the lung. In addition, we show that the uncertainty prediction is accurate also for DIR algorithms not used during the training. Finally, we demonstrate how the resulting DVFs can be used to estimate the dosimetric uncertainty arising from dose deformation. | https://openreview.net/forum?id=B0MxIXCh50Y | https://openreview.net/pdf?id=B0MxIXCh50Y | WBIR Workshop Biomedical_Imaging_Registration 2022 7 | [
"Anonymous"
] | [
"Image Registration"
] | 1,643,932,800,000 | [] | 155,439 |
177,393 | https://paperswithcode.com/paper/applying-convolutional-neural-networks-to-1 | 2011.14820 | Applying Convolutional Neural Networks to Data on Unstructured Meshes with Space-Filling Curves | This paper presents the first classical Convolutional Neural Network (CNN) that can be applied directly to data from unstructured finite element meshes or control volume grids. CNNs have been hugely influential in the areas of image classification and image compression, both of which typically deal with data on structured grids. Unstructured meshes are frequently used to solve partial differential equations and are particularly suitable for problems that require the mesh to conform to complex geometries or for problems that require variable mesh resolution. Central to the approach are space-filling curves, which traverse the nodes or cells of a mesh tracing out a path that is as short as possible (in terms of numbers of edges) and that visits each node or cell exactly once. The space-filling curves (SFCs) are used to find an ordering of the nodes or cells that can transform multi-dimensional solutions on unstructured meshes into a one-dimensional (1D) representation, to which 1D convolutional layers can then be applied. Although developed in two dimensions, the approach is applicable to higher dimensional problems. To demonstrate the approach, the network we choose is a convolutional autoencoder (CAE) although other types of CNN could be used. The approach is tested by applying CAEs to data sets that have been reordered with an SFC. Sparse layers are used at the input and output of the autoencoder, and the use of multiple SFCs is explored. We compare the accuracy of the SFC-based CAE with that of a classical CAE applied to two idealised problems on structured meshes, and then apply the approach to solutions of flow past a cylinder obtained using the finite-element method and an unstructured mesh. | https://arxiv.org/abs/2011.14820v2 | https://arxiv.org/pdf/2011.14820v2.pdf | null | [
"Claire E. Heaney",
"Yuling Li",
"Omar K. Matar",
"Christopher C. Pain"
] | [
"Image Classification",
"Image Compression"
] | 1,606,176,000,000 | [
{
"code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38",
"description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder).\r\n\r\nImage: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)",
"full_name": "AutoEncoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "AutoEncoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] | 177,466 |
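The paper above applies a convolutional autoencoder with 1D convolutions to mesh data reordered by a space-filling curve. Below is a minimal 1D convolutional autoencoder sketch under that idea; the layer sizes and signal length are arbitrary choices, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Minimal 1D convolutional autoencoder: once mesh nodes are reordered by a
# space-filling curve, a field on the mesh becomes a 1D signal that
# Conv1d layers can compress and reconstruct.
class CAE1D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(4, 1, 1024)   # 4 fields, each on 1024 SFC-ordered nodes
recon = CAE1D()(x)
print(recon.shape)            # torch.Size([4, 1, 1024])
```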
96,397 | https://paperswithcode.com/paper/network-slimming-by-slimmable-networks | 1903.11728 | AutoSlim: Towards One-Shot Architecture Search for Channel Numbers | We study how to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot solution, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs). Code and models will be available at: https://github.com/JiahuiYu/slimmable_networks | https://arxiv.org/abs/1903.11728v2 | https://arxiv.org/pdf/1903.11728v2.pdf | ICLR 2020 1 | [
"Jiahui Yu",
"Thomas Huang"
] | [
"Neural Architecture Search"
] | 1,553,644,800,000 | [
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
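The residual connection described above amounts to adding the block input back to the block output; a minimal PyTorch sketch (dimensions are arbitrary for the example):

```python
import torch
import torch.nn as nn

# A residual connection: the block learns F(x) and the output is F(x) + x,
# so the stacked layers only need to model the residual mapping.
class ResidualMLP(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)   # skip connection added to the block output

x = torch.randn(2, 64)
print(ResidualMLP(64)(x).shape)   # torch.Size([2, 64])
```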
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/956b4ebab0bbf98de4e1548287df5197a3c7154e/pytorch/pytorchcv/models/mobilenet.py#L14",
"description": "**MobileNet** is a type of convolutional neural network designed for mobile and embedded vision applications. They are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks that can have low latency for mobile and embedded devices.",
"full_name": "MobileNetV1",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "MobileNetV1",
"source_title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications",
"source_url": "http://arxiv.org/abs/1704.04861v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/mobilenet.py#L77",
"description": "**MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an inverted residual structure where the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. As a whole, the architecture of MobileNetV2 contains the initial fully [convolution](https://paperswithcode.com/method/convolution) layer with 32 filters, followed by 19 residual bottleneck layers.",
"full_name": "MobileNetV2",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "MobileNetV2",
"source_title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks",
"source_url": "http://arxiv.org/abs/1801.04381v4"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
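Applying the Kaiming initialization scheme above in PyTorch is a one-liner; the layer below is an arbitrary example, not tied to any model in this dump.

```python
import torch.nn as nn

# Kaiming (He) initialization draws weights from N(0, 2 / fan_in),
# keeping activation variance stable through ReLU layers.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
nn.init.kaiming_normal_(conv.weight, mode='fan_in', nonlinearity='relu')
nn.init.zeros_(conv.bias)
```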
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/resnet.py#L124",
"description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) ontop of each other to form network: e.g. a ResNet-50 has fifty layers using these blocks. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}(x):=\\mathcal{H}(x)-x$. The original mapping is recast into $\\mathcal{F}(x)+x$.\r\n\r\nThere is empirical evidence that these types of network are easier to optimize, and can gain accuracy from considerably increased depth.",
"full_name": "Residual Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ResNet",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Depthwise Convolution** is a type of convolution where we apply a single convolutional filter for each input channel. In the regular 2D [convolution](https://paperswithcode.com/method/convolution) performed over multiple input channels, the filter is as deep as the input and lets us freely mix channels to generate each element in the output. In contrast, depthwise convolutions keep each channel separate. To summarize the steps, we:\r\n\r\n1. Split the input and filter into channels.\r\n2. We convolve each input with the respective filter.\r\n3. We stack the convolved outputs together.\r\n\r\nImage Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)",
"full_name": "Depthwise Convolution",
"introduced_year": 2016,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Depthwise Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Pointwise Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) that uses a 1x1 kernel: a kernel that iterates through every single point. This kernel has a depth of however many channels the input image has. It can be used in conjunction with [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution) to produce an efficient class of convolutions known as [depthwise-separable convolutions](https://paperswithcode.com/method/depthwise-separable-convolution).\r\n\r\nImage Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)",
"full_name": "Pointwise Convolution",
"introduced_year": 2016,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Pointwise Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/68335927ba27f2356093b985bada0bc3989836b1/pytorch/pytorchcv/models/common.py#L731",
"description": "The **Squeeze-and-Excitation Block** is an architectural unit designed to improve the representational power of a network by enabling it to perform dynamic channel-wise feature recalibration. The process is:\r\n\r\n- The block has a convolutional block as an input.\r\n- Each channel is \"squeezed\" into a single numeric value using [average pooling](https://paperswithcode.com/method/average-pooling).\r\n- A dense layer followed by a [ReLU](https://paperswithcode.com/method/relu) adds non-linearity and output channel complexity is reduced by a ratio.\r\n- Another dense layer followed by a sigmoid gives each channel a smooth gating function.\r\n- Finally, we weight each feature map of the convolutional block based on the side network; the \"excitation\".",
"full_name": "Squeeze-and-Excitation Block",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Squeeze-and-Excitation Block",
"source_title": "Squeeze-and-Excitation Networks",
"source_url": "https://arxiv.org/abs/1709.01507v4"
},
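The squeeze-excite-recalibrate steps listed above map directly to a few lines of PyTorch; this is a minimal sketch of the block, with an arbitrary reduction ratio of 16.

```python
import torch
import torch.nn as nn

# Squeeze-and-Excitation: squeeze each channel to one value via global
# average pooling, pass through a bottleneck MLP, then rescale the channels.
class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                      # squeeze: (N, C)
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)  # excitation: (N, C, 1, 1)
        return x * w                                # channel-wise recalibration

x = torch.randn(2, 64, 8, 8)
print(SEBlock(64)(x).shape)   # torch.Size([2, 64, 8, 8])
```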
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/kwotsin/TensorFlow-Xception/blob/c42ad8cab40733f9150711be3537243278612b22/xception.py#L67",
"description": "While [standard convolution](https://paperswithcode.com/method/convolution) performs the channelwise and spatial-wise computation in one step, **Depthwise Separable Convolution** splits the computation into two steps: [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) applies a single convolutional filter per each input channel and [pointwise convolution](https://paperswithcode.com/method/pointwise-convolution) is used to create a linear combination of the output of the depthwise convolution. The comparison of standard convolution and depthwise separable convolution is shown to the right.\r\n\r\nCredit: [Depthwise Convolution Is All You Need for Learning Multiple Visual Domains](https://paperswithcode.com/paper/depthwise-convolution-is-all-you-need-for)",
"full_name": "Depthwise Separable Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Depthwise Separable Convolution",
"source_title": "Xception: Deep Learning With Depthwise Separable Convolutions",
"source_url": "http://openaccess.thecvf.com/content_cvpr_2017/html/Chollet_Xception_Deep_Learning_CVPR_2017_paper.html"
},
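The two-step factorisation described above (depthwise, then pointwise) can be written directly with grouped convolutions; channel counts here are arbitrary example values.

```python
import torch
import torch.nn as nn

# Depthwise separable convolution = depthwise conv (one filter per channel,
# via groups=in_channels) followed by a 1x1 pointwise conv that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),  # depthwise
    nn.Conv2d(32, 64, kernel_size=1),                        # pointwise
)

x = torch.randn(1, 32, 28, 28)
print(depthwise_separable(x).shape)   # torch.Size([1, 64, 28, 28])
```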
{
"code_snippet_url": "https://www.healthnutra.org/es/maxup/",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
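A quick check of the batch normalization equations above in PyTorch: in training mode, each channel of the output is standardised with the minibatch statistics before the learnable $\gamma$ and $\beta$ are applied (shapes are arbitrary).

```python
import torch
import torch.nn as nn

# Batch normalization standardises each channel over the minibatch using
# its mean and variance, then applies a learnable scale/shift (gamma, beta).
bn = nn.BatchNorm2d(16)
x = torch.randn(8, 16, 4, 4)
y = bn(x)

# With default gamma=1, beta=0, the normalised output has per-channel
# statistics close to zero mean and unit variance:
print(y.mean(dim=(0, 2, 3)))   # ~0 for each channel
print(y.var(dim=(0, 2, 3)))    # ~1 for each channel
```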
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
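A minimal demonstration of the dropout behaviour described above; note that PyTorch uses the inverted variant, scaling survivors by $1/(1-p)$ at training time instead of scaling weights by $p$ at test time, which is equivalent in expectation.

```python
import torch
import torch.nn as nn

# Dropout zeroes each unit with probability p at training time and rescales
# the survivors by 1/(1-p); at evaluation time it is the identity.
drop = nn.Dropout(p=0.5)
x = torch.ones(10)

drop.train()
print(drop(x))   # roughly half the entries are 0, the rest are 2.0

drop.eval()
print(drop(x))   # all ones: dropout is disabled at evaluation
```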
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/mobilenet.py#L45",
"description": "An **Inverted Residual Block**, sometimes called an **MBConv Block**, is a type of residual block used for image models that uses an inverted structure for efficiency reasons. It was originally proposed for the [MobileNetV2](https://paperswithcode.com/method/mobilenetv2) CNN architecture. It has since been reused for several mobile-optimized CNNs.\r\n\r\nA traditional [Residual Block](https://paperswithcode.com/method/residual-block) has a wide -> narrow -> wide structure with the number of channels. The input has a high number of channels, which are compressed with a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution). The number of channels is then increased again with a 1x1 [convolution](https://paperswithcode.com/method/convolution) so input and output can be added. \r\n\r\nIn contrast, an Inverted Residual Block follows a narrow -> wide -> narrow approach, hence the inversion. We first widen with a 1x1 convolution, then use a 3x3 [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) (which greatly reduces the number of parameters), then we use a 1x1 convolution to reduce the number of channels so input and output can be added.",
"full_name": "Inverted Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Inverted Residual Block",
"source_title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks",
"source_url": "http://arxiv.org/abs/1801.04381v4"
},
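The narrow -> wide -> narrow structure described above translates into an expand/depthwise/project sequence; the sketch below is a simplified MobileNetV2-style block with arbitrary channel count and expansion factor, not the exact published block.

```python
import torch
import torch.nn as nn

# Inverted residual (MBConv-style) block: 1x1 expand -> 3x3 depthwise ->
# 1x1 linear projection, with a skip connection when shapes match.
class InvertedResidual(nn.Module):
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1), nn.BatchNorm2d(hidden), nn.ReLU6(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(),
            nn.Conv2d(hidden, channels, 1), nn.BatchNorm2d(channels),  # linear projection
        )

    def forward(self, x):
        return x + self.block(x)

x = torch.randn(1, 24, 16, 16)
print(InvertedResidual(24)(x).shape)   # torch.Size([1, 24, 16, 16])
```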
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/c03fa67de3c9e454e9b6d35fe9cbb6b15c28fda7/pytorch/pytorchcv/models/mnasnet.py#L161",
"description": "**MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile [neural architecture search](https://paperswithcode.com/method/neural-architecture-search), which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. The main building block is an [inverted residual block](https://paperswithcode.com/method/inverted-residual-block) (from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2)).",
"full_name": "MnasNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "MnasNet",
"source_title": "MnasNet: Platform-Aware Neural Architecture Search for Mobile",
"source_url": "https://arxiv.org/abs/1807.11626v3"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] | 184,143 |
253,796 | https://paperswithcode.com/paper/like-chalk-and-cheese-on-the-effects-of | null | Like Chalk and Cheese? On the Effects of Translationese in MT Training | We revisit the topic of translation direction in the data used for training neural machine translation systems, focusing on a real-world scenario with known translation direction and imbalances in translation direction: the Canadian Hansard. According to automatic metrics, we observe that using parallel data that was produced in the “matching” translation direction (authentic source and translationese target) improves translation quality. In cases of data imbalance in terms of translation direction, we find that tagging of translation direction can close the performance gap. We perform a human evaluation that differs slightly from the automatic metrics, but nevertheless confirms that for this French-English dataset, which is known to contain high-quality translations, authentic or tagged mixed source improves over translationese source for training. | https://aclanthology.org/2021.mtsummit-research.9 | https://aclanthology.org/2021.mtsummit-research.9.pdf | MTSummit 2021 8 | [
"Samuel Larkin",
"Michel Simard",
"Rebecca Knowles"
] | [
"Machine Translation"
] | 1,627,776,000,000 | [] | 156,617 |
303,156 | https://paperswithcode.com/paper/you-can-t-fix-what-you-can-t-measure | 2206.12183 | "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning | Federated learning allows many devices to collaborate in the training of machine learning models. As in traditional machine learning, there is a growing concern that models trained with federated learning may exhibit disparate performance for different demographic groups. Existing solutions to measure and ensure equal model performance across groups require access to information about group membership, but this access is not always available or desirable, especially under the privacy aspirations of federated learning. We study the feasibility of measuring such performance disparities while protecting the privacy of the user's group membership and the federated model's performance on the user's data. Protecting both is essential for privacy, because they may be correlated, and thus learning one may reveal the other. On the other hand, from the utility perspective, the privacy-preserved data should maintain the correlation to ensure the ability to perform accurate measurements of the performance disparity. We achieve both of these goals by developing locally differentially private mechanisms that preserve the correlations between group membership and model performance. To analyze the effectiveness of the mechanisms, we bound their error in estimating the disparity when optimized for a given privacy budget, and validate these bounds on synthetic data. Our results show that the error rapidly decreases for realistic numbers of participating clients, demonstrating that, contrary to what prior work suggested, protecting the privacy of protected attributes is not necessarily in conflict with identifying disparities in the performance of federated models. | https://arxiv.org/abs/2206.12183v1 | https://arxiv.org/pdf/2206.12183v1.pdf | null | [
"Marc Juarez",
"Aleksandra Korolova"
] | [
"Federated Learning"
] | 1,656,028,800,000 | [] | 55,781 |
41,126 | https://paperswithcode.com/paper/show-and-tell-a-neural-image-caption | 1411.4555 | Show and Tell: A Neural Image Caption Generator | Automatically describing the content of an image is a fundamental problem in
artificial intelligence that connects computer vision and natural language
processing. In this paper, we present a generative model based on a deep
recurrent architecture that combines recent advances in computer vision and
machine translation and that can be used to generate natural sentences
describing an image. The model is trained to maximize the likelihood of the
target description sentence given the training image. Experiments on several
datasets show the accuracy of the model and the fluency of the language it
learns solely from image descriptions. Our model is often quite accurate, which
we verify both qualitatively and quantitatively. For instance, while the
current state-of-the-art BLEU-1 score (the higher the better) on the Pascal
dataset is 25, our approach yields 59, to be compared to human performance
around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66,
and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we
achieve a BLEU-4 of 27.7, which is the current state-of-the-art. | http://arxiv.org/abs/1411.4555v2 | http://arxiv.org/pdf/1411.4555v2.pdf | CVPR 2015 6 | [
"Oriol Vinyals",
"Alexander Toshev",
"Samy Bengio",
"Dumitru Erhan"
] | [
"Image Captioning",
"Image Retrieval with Multi-Modal Query",
"Text Generation",
"Text-to-Image Generation"
] | 1,416,182,400,000 | [] | 172,839 |
29,160 | https://paperswithcode.com/paper/dp-em-differentially-private-expectation | 1605.06995 | DP-EM: Differentially Private Expectation Maximization | The iterative nature of the expectation maximization (EM) algorithm presents
a challenge for privacy-preserving estimation, as each iteration increases the
amount of noise needed. We propose a practical private EM algorithm that
overcomes this challenge using two innovations: (1) a novel moment perturbation
formulation for differentially private EM (DP-EM), and (2) the use of two
recently developed composition methods to bound the privacy "cost" of multiple
EM iterations: the moments accountant (MA) and zero-mean concentrated
differential privacy (zCDP). Both MA and zCDP bound the moment generating
function of the privacy loss random variable and achieve a refined tail bound,
which effectively decrease the amount of additive noise. We present empirical
results showing the benefits of our approach, as well as similar performance
between these two composition methods in the DP-EM setting for Gaussian mixture
models. Our approach can be readily extended to many iterative learning
algorithms, opening up various exciting future directions. | http://arxiv.org/abs/1605.06995v2 | http://arxiv.org/pdf/1605.06995v2.pdf | null | [
"Mijung Park",
"Jimmy Foulds",
"Kamalika Chaudhuri",
"Max Welling"
] | [
"Privacy Preserving"
] | 1,463,961,600,000 | [] | 142 |
184,378 | https://paperswithcode.com/paper/joint-modeling-and-optimization-of-search-and | 1807.05631 | Joint Modeling and Optimization of Search and Recommendation | Despite the somewhat different techniques used in developing search engines
and recommender systems, they both follow the same goal: helping people to get
the information they need at the right time. Due to this common goal, search
and recommendation models can potentially benefit from each other. The recent
advances in neural network technologies make them effective and easily
extendable for various tasks, including retrieval and recommendation. This
raises the possibility of jointly modeling and optimizing search ranking and
recommendation algorithms, with potential benefits to both. In this paper, we
present theoretical and practical reasons to motivate joint modeling of search
and recommendation as a research direction. We propose a general framework that
simultaneously learns a retrieval model and a recommendation model by
optimizing a joint loss function. Our preliminary results on a dataset of
product data indicate that the proposed joint modeling substantially
outperforms the retrieval and recommendation models trained independently. We
list a number of future directions for this line of research that can
potentially lead to development of state-of-the-art search and recommendation
models. | http://arxiv.org/abs/1807.05631v1 | http://arxiv.org/pdf/1807.05631v1.pdf | null | [
"Zamani Hamed",
"Croft W. Bruce"
] | [
"Recommendation Systems"
] | 1,531,612,800,000 | [] | 177,185 |
162,502 | https://paperswithcode.com/paper/automatic-detection-of-microsleep-episodes | 2009.03027 | Automatic detection of microsleep episodes with deep learning | Brief fragments of sleep shorter than 15 s are defined as microsleep episodes (MSEs), often subjectively perceived as sleepiness. Their main characteristic is a slowing in frequency in the electroencephalogram (EEG), similar to stage N1 sleep according to standard criteria. The maintenance of wakefulness test (MWT) is often used in a clinical setting to assess vigilance. Scoring of the MWT in most sleep-wake centers is limited to classical definition of sleep (30-s epochs), and MSEs are mostly not considered in the absence of established scoring criteria defining MSEs but also because of the laborious work. We aimed for automatic detection of MSEs with machine learning, i.e. with deep learning based on raw EEG and EOG data as input. We analyzed MWT data of 76 patients. Experts visually scored wakefulness, and according to recently developed scoring criteria MSEs, microsleep episode candidates (MSEc), and episodes of drowsiness (ED). We implemented segmentation algorithms based on convolutional neural networks (CNNs) and a combination of a CNN with a long-short term memory (LSTM) network. A LSTM network is a type of a recurrent neural network which has a memory for past events and takes them into account. Data of 53 patients were used for training of the classifiers, 12 for validation and 11 for testing. Our algorithms showed a good performance close to human experts. The detection was very good for wakefulness and MSEs and poor for MSEc and ED, similar to the low inter-expert reliability for these borderline segments. We provide a proof of principle that it is feasible to reliably detect MSEs with deep neuronal networks based on raw EEG and EOG data with a performance close to that of human experts. Code of algorithms ( https://github.com/alexander-malafeev/microsleep-detection ) and data ( https://zenodo.org/record/3251716 ) are available. | https://arxiv.org/abs/2009.03027v2 | https://arxiv.org/pdf/2009.03027v2.pdf | null | [
"Alexander Malafeev",
"Anneke Hertig-Godeschalk",
"David R. Schreier",
"Jelena Skorucak",
"Johannes Mathis",
"Peter Achermann"
] | [
"EEG"
] | 1,599,436,800,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] | 179,972 |
303,207 | https://paperswithcode.com/paper/multi-frequency-joint-community-detection-and | 2206.12276 | Multi-Frequency Joint Community Detection and Phase Synchronization | This paper studies the joint community detection and phase synchronization problem on the \textit{stochastic block model with relative phase}, where each node is associated with a phase. This problem, with a variety of real-world applications, aims to recover community memberships and associated phases simultaneously. By studying the maximum likelihood estimation formulation, we show that this problem exhibits a \textit{``multi-frequency''} structure. To this end, two simple yet efficient algorithms that leverage information across multiple frequencies are proposed. The former is a spectral method based on the novel multi-frequency column-pivoted QR factorization, and the latter is an iterative multi-frequency generalized power method. Numerical experiments indicate our proposed algorithms outperform state-of-the-art algorithms, in recovering community memberships and associated phases. | https://arxiv.org/abs/2206.12276v1 | https://arxiv.org/pdf/2206.12276v1.pdf | null | [
"Lingda Wang",
"Zhizhen Zhao"
] | [
"Community Detection",
"Stochastic Block Model"
] | 1,655,337,600,000 | [] | 89,067 |
158,153 | https://paperswithcode.com/paper/detection-and-localization-of-robotic-tools | 2008.00936 | Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection | Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a Region Proposal Network (RPN), and a multi-modal two stream convolutional network for object detection, to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an Average Precision (AP) of 91% and a mean computation time of 0.1 seconds per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new dataset, ATLAS Dione, for RAS video understanding. Our dataset provides video data of ten surgeons from Roswell Park Cancer Institute (RPCI) (Buffalo, NY) performing six different surgical tasks on the daVinci Surgical System (dVSS®) with annotations of robotic tools per frame. | https://arxiv.org/abs/2008.00936v1 | https://arxiv.org/pdf/2008.00936v1.pdf | null | [
"Duygu Sarikaya",
"Jason J. Corso",
"Khurshid A. Guru"
] | [
"Object Detection",
"Object Detection",
"Region Proposal",
"Video Understanding"
] | 1,595,980,800,000 | [
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
}
] | 156,922 |
309,925 | https://paperswithcode.com/paper/computational-complexity-reduction-of-deep | 2207.14620 | Computational complexity reduction of deep neural networks | Deep neural networks (DNN) have been widely used and play a major role in the field of computer vision and autonomous navigation. However, these DNNs are computationally complex and their deployment over resource-constrained platforms is difficult without additional optimizations and customization. In this manuscript, we describe an overview of DNN architecture and propose methods to reduce computational complexity in order to accelerate training and inference speeds to fit them on edge computing platforms with low computational resources. | https://arxiv.org/abs/2207.14620v1 | https://arxiv.org/pdf/2207.14620v1.pdf | null | [
"Mee Seong Im",
"Venkat R. Dasari"
] | [
"Autonomous Navigation",
"Edge-computing"
] | 1,659,052,800,000 | [] | 72,244 |
8,246 | https://paperswithcode.com/paper/multiple-human-parsing-in-the-wild | 1705.07206 | Multiple-Human Parsing in the Wild | Human parsing is attracting increasing research attention. In this work, we
aim to push the frontier of human parsing by introducing the problem of
multi-human parsing in the wild. Existing works on human parsing mainly tackle
single-person scenarios, which deviates from real-world applications where
multiple persons are present simultaneously with interaction and occlusion. To
address the multi-human parsing problem, we introduce a new multi-human parsing
(MHP) dataset and a novel multi-human parsing model named MH-Parser. The MHP
dataset contains multiple persons captured in real-world scenes with
pixel-level fine-grained semantic annotations in an instance-aware setting. The
MH-Parser generates global parsing maps and person instance masks
simultaneously in a bottom-up fashion with the help of a new Graph-GAN model.
We envision that the MHP dataset will serve as a valuable data resource to
develop new multi-human parsing models, and the MH-Parser offers a strong
baseline to drive future research for multi-human parsing in the wild. | http://arxiv.org/abs/1705.07206v2 | http://arxiv.org/pdf/1705.07206v2.pdf | null | [
"Jianshu Li",
"Jian Zhao",
"Yunchao Wei",
"Congyan Lang",
"Yidong Li",
"Terence Sim",
"Shuicheng Yan",
"Jiashi Feng"
] | [
"Human Parsing",
"Multi-Human Parsing"
] | 1,495,152,000,000 | [] | 120,982 |
101,788 | https://paperswithcode.com/paper/measuring-the-effects-of-confounders-in | 1905.08871 | Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI) | Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depends on a variety of characteristics, such as demographic aspects (age, gender, etc) or the acquisition technology, which might be unrelated with the target of the analysis. In supervised tasks, failing to match the ground truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. Many strategies have been proposed to handle confounders, ranging from data selection, to normalization techniques, up to the use of training algorithm for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. To this aim, we introduce a novel index that is able to measure the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data. | https://arxiv.org/abs/1905.08871v2 | https://arxiv.org/pdf/1905.08871v2.pdf | null | [
"Elisa Ferrari",
"Alessandra Retico",
"Davide Bacciu"
] | [
"Classification"
] | 1,558,396,800,000 | [] | 190,620 |
151,805 | https://paperswithcode.com/paper/sample-efficient-reinforcement-learning-of | 2006.12484 | Sample-Efficient Reinforcement Learning of Undercomplete POMDPs | Partial observability is a common challenge in many reinforcement learning applications, which requires an agent to maintain memory, infer latent states, and integrate this past information into exploration. This challenge leads to a number of computational and statistical hardness results for learning general Partially Observable Markov Decision Processes (POMDPs). This work shows that these hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of POMDPs. In particular, we present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works. OOM-UCB achieves an optimal sample complexity of $\tilde{\mathcal{O}}(1/\varepsilon^2)$ for finding an $\varepsilon$-optimal policy, along with being polynomial in all other relevant quantities. As an interesting special case, we also provide a computationally and statistically efficient algorithm for POMDPs with deterministic state transitions. | https://arxiv.org/abs/2006.12484v2 | https://arxiv.org/pdf/2006.12484v2.pdf | NeurIPS 2020 12 | [
"Chi Jin",
"Sham M. Kakade",
"Akshay Krishnamurthy",
"Qinghua Liu"
] | [
"reinforcement-learning"
] | 1,592,784,000,000 | [] | 6,828 |
153,255 | https://paperswithcode.com/paper/overview-of-gaussian-process-based-multi | 2006.16728 | Overview of Gaussian process based multi-fidelity techniques with variable relationship between fidelities | The design process of complex systems such as new configurations of aircraft or launch vehicles is usually decomposed in different phases which are characterized for instance by the depth of the analyses in terms of number of design variables and fidelity of the physical models. At each phase, the designers have to compose with accurate but computationally intensive models as well as cheap but inaccurate models. Multi-fidelity modeling is a way to merge different fidelity models to provide engineers with accurate results with a limited computational cost. Within the context of multi-fidelity modeling, approaches relying on Gaussian Processes emerge as popular techniques to fuse information between the different fidelity models. The relationship between the fidelity models is a key aspect in multi-fidelity modeling. This paper provides an overview of Gaussian process-based multi-fidelity modeling techniques for variable relationship between the fidelity models (e.g., linearity, non-linearity, variable correlation). Each technique is described within a unified framework and the links between the different techniques are highlighted. All the approaches are numerically compared on a series of analytical test cases and four aerospace related engineering problems in order to assess their benefits and disadvantages with respect to the problem characteristics. | https://arxiv.org/abs/2006.16728v1 | https://arxiv.org/pdf/2006.16728v1.pdf | null | [
"Loïc Brevault",
"Mathieu Balesdent",
"Ali Hebbal"
] | [
"Gaussian Processes"
] | 1,593,475,200,000 | [] | 51,556 |
211,074 | https://paperswithcode.com/paper/rlad-time-series-anomaly-detection-through | 2104.00543 | RLAD: Time Series Anomaly Detection through Reinforcement Learning and Active Learning | We introduce a new semi-supervised, time series anomaly detection algorithm that uses deep reinforcement learning (DRL) and active learning to efficiently learn and adapt to anomalies in real-world time series data. Our model - called RLAD - makes no assumption about the underlying mechanism that produces the observation sequence and continuously adapts the detection model based on experience with anomalous patterns. In addition, it requires no manual tuning of parameters and outperforms all state-of-the-art methods we compare with, both unsupervised and semi-supervised, across several figures of merit. More specifically, we outperform the best unsupervised approach by a factor of 1.58 on the F1 score, with only 1% of labels and up to around 4.4x on another real-world dataset with only 0.1% of labels. We compare RLAD with seven deep-learning based algorithms across two common anomaly detection datasets with up to around 3M data points and between 0.28% and 2.65% anomalies. We outperform all of them across several important performance metrics. | https://arxiv.org/abs/2104.00543v1 | https://arxiv.org/pdf/2104.00543v1.pdf | null | [
"Tong Wu",
"Jorge Ortiz"
] | [
"Active Learning",
"Anomaly Detection",
"reinforcement-learning",
"Time Series",
"Time Series Anomaly Detection"
] | 1,617,148,800,000 | [] | 48,046 |
110,070 | https://paperswithcode.com/paper/temporally-consistent-horizon-lines | 1907.10014 | Temporally Consistent Horizon Lines | The horizon line is an important geometric feature for many image processing and scene understanding tasks in computer vision. For instance, in navigation of autonomous vehicles or driver assistance, it can be used to improve 3D reconstruction as well as for semantic interpretation of dynamic environments. While both algorithms and datasets exist for single images, the problem of horizon line estimation from video sequences has not gained attention. In this paper, we show how convolutional neural networks are able to utilise the temporal consistency imposed by video sequences in order to increase the accuracy and reduce the variance of horizon line estimates. A novel CNN architecture with an improved residual convolutional LSTM is presented for temporally consistent horizon line estimation. We propose an adaptive loss function that ensures stable training as well as accurate results. Furthermore, we introduce an extension of the KITTI dataset which contains precise horizon line labels for 43699 images across 72 video sequences. A comprehensive evaluation shows that the proposed approach consistently achieves superior performance compared with existing methods. | https://arxiv.org/abs/1907.10014v2 | https://arxiv.org/pdf/1907.10014v2.pdf | null | [
"Florian Kluger",
"Hanno Ackermann",
"Michael Ying Yang",
"Bodo Rosenhahn"
] | [
"3D Reconstruction",
"Autonomous Vehicles",
"Horizon Line Estimation",
"Scene Understanding"
] | 1,563,840,000,000 | [
{
"code_snippet_url": null,
"description": "The Robust Loss is a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions. By introducing robustness as a continuous parameter, the loss function allows algorithms built around robust loss minimization to be generalized, which improves performance on basic vision tasks such as registration and clustering. Interpreting the loss as the negative log of a univariate density yields a general probability distribution that includes normal and Cauchy distributions as special cases. This probabilistic interpretation enables the training of neural networks in which the robustness of the loss automatically adapts itself during training, which improves performance on learning-based tasks such as generative image synthesis and unsupervised monocular depth estimation, without requiring any manual parameter tuning.",
"full_name": "Adaptive Robust Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Adaptive Loss",
"source_title": "A General and Adaptive Robust Loss Function",
"source_url": "http://arxiv.org/abs/1701.03077v10"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] | 50,273 |
102,963 | https://paperswithcode.com/paper/heterogeneous-causal-effects-with-imperfect | 1905.12707 | Heterogeneous causal effects with imperfect compliance: a Bayesian machine learning approach | This paper introduces an innovative Bayesian machine learning algorithm to draw interpretable inference on heterogeneous causal effects in the presence of imperfect compliance (e.g., under an irregular assignment mechanism). We show, through Monte Carlo simulations, that the proposed Bayesian Causal Forest with Instrumental Variable (BCF-IV) methodology outperforms other machine learning techniques tailored for causal inference in discovering and estimating the heterogeneous causal effects while controlling for the familywise error rate (or - less stringently - for the false discovery rate) at leaves' level. BCF-IV sheds light on the heterogeneity of causal effects in instrumental variable scenarios and, in turn, provides the policy-makers with a relevant tool for targeted policies. Its empirical application evaluates the effects of additional funding on students' performances. The results indicate that BCF-IV could be used to enhance the effectiveness of school funding on students' performance. | https://arxiv.org/abs/1905.12707v4 | https://arxiv.org/pdf/1905.12707v4.pdf | null | [
"Falco J. Bargagli-Stoffi",
"Kristof De-Witte",
"Giorgio Gnecco"
] | [
"Causal Inference"
] | 1,559,088,000,000 | [
{
"code_snippet_url": null,
"description": "Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.",
"full_name": "Causal Inference",
"introduced_year": 2000,
"main_collection": null,
"name": "Causal Inference",
"source_title": null,
"source_url": null
}
] | 51,792 |
211,943 | https://paperswithcode.com/paper/few-shot-incremental-learning-with | 2104.03047 | Few-Shot Incremental Learning with Continually Evolved Classifiers | Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points, without forgetting knowledge of old classes. The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems. Moreover, as training data come in sequence in FSCIL, the learned classifier can only provide discriminative information in individual sessions, while FSCIL requires all classes to be involved for evaluation. In this paper, we address the FSCIL problem from two aspects. First, we adopt a simple but effective decoupled learning strategy of representations and classifiers that only the classifiers are updated in each incremental session, which avoids knowledge forgetting in the representations. By doing so, we demonstrate that a pre-trained backbone plus a non-parametric class mean classifier can beat state-of-the-art methods. Second, to make the classifiers learned on individual sessions applicable to all classes, we propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation. To enable the learning of CEC, we design a pseudo incremental learning paradigm that episodically constructs a pseudo incremental learning task to optimize the graph parameters by sampling data from the base dataset. Experiments on three popular benchmark datasets, including CIFAR100, miniImageNet, and Caltech-USCD Birds-200-2011 (CUB200), show that our method significantly outperforms the baselines and sets new state-of-the-art results with remarkable advantages. | https://arxiv.org/abs/2104.03047v1 | https://arxiv.org/pdf/2104.03047v1.pdf | CVPR 2021 1 | [
"Chi Zhang",
"Nan Song",
"Guosheng Lin",
"Yun Zheng",
"Pan Pan",
"Yinghui Xu"
] | [
"class-incremental learning",
"Incremental Learning"
] | 1,617,753,600,000 | [] | 33,770 |
196,050 | https://paperswithcode.com/paper/signed-graph-diffusion-network-1 | 2012.14191 | Signed Graph Diffusion Network | Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs have received considerable attention to model trust relationships. Learning node representations is crucial to effectively analyze graph data, and various techniques such as network embedding and graph convolutional network (GCN) have been proposed for learning signed graphs. However, traditional network embedding methods are not end-to-end for a specific task such as link sign prediction, and GCN-based methods suffer from a performance degradation problem when their depth increases. In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a random walk technique specially designed for signed graphs so that SGDNet effectively diffuses hidden node features. Through extensive experiments, we demonstrate that SGDNet outperforms state-of-the-art models in terms of link sign prediction accuracy. | https://arxiv.org/abs/2012.14191v1 | https://arxiv.org/pdf/2012.14191v1.pdf | null | [
"Jinhong Jung",
"Jaemin Yoo",
"U Kang"
] | [
"Link Sign Prediction",
"Network Embedding",
"Representation Learning"
] | 1,609,113,600,000 | [
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] | 112,217 |
138,389 | https://paperswithcode.com/paper/online-and-real-time-object-tracking | 2003.12091 | Online and Real-time Object Tracking Algorithm with Extremely Small Matrices | Online and Real-time Object Tracking is an interesting workload that can be used to track objects (e.g., car, human, animal) in a series of video sequences in real-time. For simple object tracking on edge devices, the output of object tracking could be as simple as drawing a bounding box around a detected object and in some cases, the input matrices used in such computation are quite small (e.g., 4x7, 3x3, 5x5, etc). As a result, the amount of actual work is low. Therefore, a typical multi-threading based parallelization technique can not accelerate the tracking application; instead, a throughput based parallelization technique where each thread operates on independent video sequences is more rewarding. In this paper, we share our experience in parallelizing a Simple Online and Real-time Tracking (SORT) application on shared-memory multicores. | https://arxiv.org/abs/2003.12091v2 | https://arxiv.org/pdf/2003.12091v2.pdf | null | [
"Jesmin Jahan Tithi",
"Sriram Aananthakrishnan",
"Fabrizio Petrini"
] | [
"Object Tracking"
] | 1,585,180,800,000 | [] | 36,410 |
127,585 | https://paperswithcode.com/paper/predicting-detection-filters-for-small | 1912.07575 | Predicting detection filters for small footprint open-vocabulary keyword spotting | In this paper, we propose a fully-neural approach to open-vocabulary keyword spotting, that allows the users to include a customizable voice interface to their device and that does not require task-specific data. We present a keyword detection neural network weighing less than 250KB, in which the topmost layer performing keyword detection is predicted by an auxiliary network, that may be run offline to generate a detector for any keyword. We show that the proposed model outperforms acoustic keyword spotting baselines by a large margin on two tasks of detecting keywords in utterances and three tasks of detecting isolated speech commands. We also propose a method to fine-tune the model when specific training data is available for some keywords, which yields a performance similar to a standard speech command neural network while keeping the ability of the model to be applied to new keywords. | https://arxiv.org/abs/1912.07575v2 | https://arxiv.org/pdf/1912.07575v2.pdf | null | [
"Theodore Bluche",
"Thibault Gisselbrecht"
] | [
"Keyword Spotting"
] | 1,576,454,400,000 | [] | 61,378 |
298,170 | https://paperswithcode.com/paper/ultrahyperbolic-knowledge-graph-embeddings | 2206.00449 | Ultrahyperbolic Knowledge Graph Embeddings | Recent knowledge graph (KG) embeddings have been advanced by hyperbolic geometry due to its superior capability for representing hierarchies. The topological structures of real-world KGs, however, are rather heterogeneous, i.e., a KG is composed of multiple distinct hierarchies and non-hierarchical graph structures. Therefore, a homogeneous (either Euclidean or hyperbolic) geometry is not sufficient for fairly representing such heterogeneous structures. To capture the topological heterogeneity of KGs, we present an ultrahyperbolic KG embedding (UltraE) in an ultrahyperbolic (or pseudo-Riemannian) manifold that seamlessly interleaves hyperbolic and spherical manifolds. In particular, we model each relation as a pseudo-orthogonal transformation that preserves the pseudo-Riemannian bilinear form. The pseudo-orthogonal transformation is decomposed into various operators (i.e., circular rotations, reflections and hyperbolic rotations), allowing for simultaneously modeling heterogeneous structures as well as complex relational patterns. Experimental results on three standard KGs show that UltraE outperforms previous Euclidean- and hyperbolic-based approaches. | https://arxiv.org/abs/2206.00449v1 | https://arxiv.org/pdf/2206.00449v1.pdf | null | [
"Bo Xiong",
"Shichao Zhu",
"Mojtaba Nayyeri",
"Chengjin Xu",
"Shirui Pan",
"Chuan Zhou",
"Steffen Staab"
] | [
"Knowledge Graph Embeddings"
] | 1,654,041,600,000 | [] | 176,195 |
127,940 | https://paperswithcode.com/paper/unsupervised-change-detection-in-multi | 1912.08628 | Unsupervised Change Detection in Multi-temporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network | With the development of Earth observation technology, very-high-resolution (VHR) image has become an important data source of change detection. Nowadays, deep learning methods have achieved conspicuous performance in the change detection of VHR images. Nonetheless, most of the existing change detection models based on deep learning require annotated training samples. In this paper, a novel unsupervised model called kernel principal component analysis (KPCA) convolution is proposed for extracting representative features from multi-temporal VHR images. Based on the KPCA convolution, an unsupervised deep siamese KPCA convolutional mapping network (KPCA-MNet) is designed for binary and multi-class change detection. In the KPCA-MNet, the high-level spatial-spectral feature maps are extracted by a deep siamese network consisting of weight-shared PCA convolution layers. Then, the change information in the feature difference map is mapped into a 2-D polar domain. Finally, the change detection results are generated by threshold segmentation and clustering algorithms. All procedures of KPCA-MNet do not require labeled data. The theoretical analysis and experimental results demonstrate the validity, robustness, and potential of the proposed method in two binary change detection data sets and one multi-class change detection data set. | https://arxiv.org/abs/1912.08628v1 | https://arxiv.org/pdf/1912.08628v1.pdf | null | [
"Chen Wu",
"Hongruixuan Chen",
"Bo Do",
"Liangpei Zhang"
] | [
"Change Detection"
] | 1,576,627,200,000 | [
{
"code_snippet_url": null,
"description": "A **Siamese Network** consists of twin networks which accept distinct inputs but are joined by an energy function at the top. This function computes a metric between the highest level feature representation on each side. The parameters between the twin networks are tied. [Weight tying](https://paperswithcode.com/method/weight-tying) guarantees that two extremely similar images are not mapped by each network to very different locations in feature space because each network computes the same function. The network is symmetric, so that whenever we present two distinct images to the twin networks, the top conjoining layer will compute the same metric as if we were to we present the same two images but to the opposite twins.\r\n\r\nIntuitively instead of trying to classify inputs, a siamese network learns to differentiate between inputs, learning their similarity. The loss function used is usually a form of contrastive loss.\r\n\r\nSource: [Koch et al](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf)",
"full_name": "Siamese Network",
"introduced_year": 1993,
"main_collection": {
"area": "General",
"description": "**Twin Networks** are a type of neural network architecture where we use two of the same network architecture to perform a task. For example, Siamese Networks are used to learn representations that differentiate between inputs (learning their similarity). Below you can find a continuously updating list of twin network architectures.",
"name": "Twin Networks",
"parent": null
},
"name": "Siamese Network",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] | 162,015 |
252,112 | https://paperswithcode.com/paper/calibrating-explore-exploit-trade-off-for | 2111.00735 | Calibrating Explore-Exploit Trade-off for Fair Online Learning to Rank | Online learning to rank (OL2R) has attracted great research interests in recent years, thanks to its advantages in avoiding expensive relevance labeling as required in offline supervised ranking model learning. Such a solution explores the unknowns (e.g., intentionally present selected results on top positions) to improve its relevance estimation. This however triggers concerns on its ranking fairness: different groups of items might receive differential treatments during the course of OL2R. But existing fair ranking solutions usually require the knowledge of result relevance or a performing ranker beforehand, which contradicts with the setting of OL2R and thus cannot be directly applied to guarantee fairness. In this work, we propose a general framework to achieve fairness defined by group exposure in OL2R. The key idea is to calibrate exploration and exploitation for fairness control, relevance learning and online ranking quality. In particular, when the model is exploring a set of results for relevance feedback, we confine the exploration within a subset of random permutations, where fairness across groups is maintained while the feedback is still unbiased. Theoretically we prove such a strategy introduces minimum distortion in OL2R's regret to obtain fairness. Extensive empirical analysis is performed on two public learning to rank benchmark datasets to demonstrate the effectiveness of the proposed solution compared to existing fair OL2R solutions. | https://arxiv.org/abs/2111.00735v1 | https://arxiv.org/pdf/2111.00735v1.pdf | null | [
"Yiling Jia",
"Hongning Wang"
] | [
"Fairness",
"Learning-To-Rank",
"online learning"
] | 1,635,724,800,000 | [] | 122,609 |
302,424 | https://paperswithcode.com/paper/model-agnostic-few-shot-open-set-recognition | 2206.09236 | Model-Agnostic Few-Shot Open-Set Recognition | We tackle the Few-Shot Open-Set Recognition (FSOSR) problem, i.e. classifying instances among a set of classes for which we only have few labeled samples, while simultaneously detecting instances that do not belong to any known class. Departing from existing literature, we focus on developing model-agnostic inference methods that can be plugged into any existing model, regardless of its architecture or its training procedure. Through evaluating the embedding's quality of a variety of models, we quantify the intrinsic difficulty of model-agnostic FSOSR. Furthermore, a fair empirical evaluation suggests that the naive combination of a kNN detector and a prototypical classifier ranks before specialized or complex methods in the inductive setting of FSOSR. These observations motivated us to resort to transduction, as a popular and practical relaxation of standard few-shot learning problems. We introduce an Open Set Transductive Information Maximization method OSTIM, which hallucinates an outlier prototype while maximizing the mutual information between extracted features and assignments. Through extensive experiments spanning 5 datasets, we show that OSTIM surpasses both inductive and existing transductive methods in detecting open-set instances while competing with the strongest transductive methods in classifying closed-set instances. We further show that OSTIM's model agnosticity allows it to successfully leverage the strong expressive abilities of the latest architectures and training strategies without any hyperparameter modification, a promising sign that architectural advances to come will continue to positively impact OSTIM's performances. | https://arxiv.org/abs/2206.09236v1 | https://arxiv.org/pdf/2206.09236v1.pdf | null | [
"Malik Boudiaf",
"Etienne Bennequin",
"Myriam Tami",
"Celine Hudelot",
"Antoine Toubhans",
"Pablo Piantanida",
"Ismail Ben Ayed"
] | [
"Few-Shot Learning",
"Open Set Learning"
] | 1,655,510,400,000 | [] | 196,926 |
68,680 | https://paperswithcode.com/paper/bccwj-deppara-a-syntactic-annotation-treebank | null | BCCWJ-DepPara: A Syntactic Annotation Treebank on the 'Balanced Corpus of Contemporary Written Japanese' | Paratactic syntactic structures are difficult to represent in syntactic dependency tree structures. As such, we propose an annotation schema for syntactic dependency annotation of Japanese, in which coordinate structures are split from and overlaid on bunsetsu-based (base phrase unit) dependency. The schema represents nested coordinate structures, non-constituent conjuncts, and forward sharing as the set of regions. The annotation was performed on the core data of 'Balanced Corpus of Contemporary Written Japanese', which comprised about one million words and 1980 samples from six registers, such as newspapers, books, magazines, and web texts. | https://aclanthology.org/W16-5406 | https://aclanthology.org/W16-5406.pdf | WS 2016 12 | [
"Masayuki Asahara",
"Yuji Matsumoto"
] | [
"Dependency Parsing"
] | 1,480,550,400,000 | [] | 28,280 |
71,235 | https://paperswithcode.com/paper/bidirectional-recurrent-convolutional | null | Bidirectional Recurrent Convolutional Networks for Multi-Frame Super-Resolution | Super resolving a low-resolution video is usually handled by either single-image super-resolution (SR) or multi-frame SR. Single-Image SR deals with each video frame independently, and ignores intrinsic temporal dependency of video frames which actually plays a very important role in video super-resolution. Multi-Frame SR generally extracts motion information, e.g. optical flow, to model the temporal dependency, which often shows high computational cost. Considering that recurrent neural network (RNN) can model long-term contextual information of temporal sequences well, we propose a bidirectional recurrent convolutional network for efficient multi-frame SR.Different from vanilla RNN, 1) the commonly-used recurrent full connections are replaced with weight-sharing convolutional connections and 2) conditional convolutional connections from previous input layers to current hidden layer are added for enhancing visual-temporal dependency modelling. With the powerful temporal dependency modelling, our model can super resolve videos with complex motions and achieve state-of-the-art performance. Due to the cheap convolution operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame methods. | http://papers.nips.cc/paper/5778-bidirectional-recurrent-convolutional-networks-for-multi-frame-super-resolution | http://papers.nips.cc/paper/5778-bidirectional-recurrent-convolutional-networks-for-multi-frame-super-resolution.pdf | NeurIPS 2015 12 | [
"Yan Huang",
"Wei Wang",
"Liang Wang"
] | [
"Image Super-Resolution",
"Multi-Frame Super-Resolution",
"Optical Flow Estimation",
"Single Image Super Resolution",
"Super-Resolution",
"Temporal Sequences",
"Video Super-Resolution"
] | 1,448,928,000,000 | [
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] | 133,980 |
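The Convolution method card in the record above gives the sliding-kernel, multiply-and-sum definition in prose; as a minimal sketch (assuming a single channel, stride 1, 'valid' padding, and the no-flip cross-correlation convention used by deep learning frameworks), the operation might be implemented as follows. The function name and test shapes are illustrative, not from the cited paper.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Single-channel 2D convolution, 'valid' padding, stride 1 (no kernel flip)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the kernel with the patch it covers, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a 3x3 edge-detection kernel applied to a random 8x8 "image".
img = np.random.rand(8, 8)
edge = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
print(conv2d_valid(img, edge).shape)  # (6, 6)
```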
124,969 | https://paperswithcode.com/paper/estimating-uncertainty-of-earthquake-rupture | 1911.09660 | Estimating uncertainty of earthquake rupture using Bayesian neural network | Bayesian neural networks (BNN) are probabilistic models that combine the strengths of both neural networks (NN) and stochastic processes. As a result, a BNN can combat overfitting and perform well in applications where data is limited. Earthquake rupture study is such a problem where data is insufficient, and scientists have to rely on many trial-and-error numerical or physical models. Due to lack of resources and computational expense, it often becomes hard to determine the reasons behind an earthquake rupture. In this work, a BNN has been used (1) to combat the small data problem, (2) to find out the parameter combinations responsible for earthquake rupture, and (3) to estimate the uncertainty associated with earthquake rupture. Two thousand rupture simulations are used to train and test the model. A simple 2D rupture geometry is considered where the fault has a Gaussian geometric heterogeneity at the center, and eight parameters vary in each simulation. The test F1-score of the BNN is 0.8334, which is 2.34% higher than the plain NN score. Results show that the parameters of rupture propagation have higher uncertainty than those of rupture arrest. Normal stresses play a vital role in determining rupture propagation and are also the highest source of uncertainty, followed by the dynamic friction coefficient. Shear stress has a moderate role, whereas geometric features such as the width and height of the fault are least significant and uncertain. | https://arxiv.org/abs/1911.09660v1 | https://arxiv.org/pdf/1911.09660v1.pdf | null | [
"Sabber Ahamed"
] | [
"Small Data Image Classification"
] | 1,574,294,400,000 | [] | 53,091 |
41,409 | https://paperswithcode.com/paper/on-model-misspecification-and-kl-separation | 1501.02320 | On model misspecification and KL separation for Gaussian graphical models | We establish bounds on the KL divergence between two multivariate Gaussian distributions in terms of the Hamming distance between the edge sets of the corresponding graphical models. We show that the KL divergence is bounded below by a constant when the graphs differ by at least one edge; this is essentially the tightest possible bound, since classes of graphs exist for which the edge discrepancy increases but the KL divergence remains bounded above by a constant. As a natural corollary to our KL lower bound, we also establish a sample size requirement for correct model selection via maximum likelihood estimation. Our results rigorize the notion that it is essential to estimate the edge structure of a Gaussian graphical model accurately in order to approximate the true distribution to close precision. | http://arxiv.org/abs/1501.02320v2 | http://arxiv.org/pdf/1501.02320v2.pdf | null | [
"Varun Jog",
"Po-Ling Loh"
] | [
"Model Selection"
] | 1,420,848,000,000 | [] | 57,078 |
291,769 | https://paperswithcode.com/paper/cosplay-concept-set-guided-personalized | 2205.00872 | COSPLAY: Concept Set Guided Personalized Dialogue Generation Across Both Party Personas | Maintaining a consistent persona is essential for building a human-like conversational model. However, a lack of attention to the partner makes models more egocentric: they tend to show their persona by all means, twisting the topic stiffly, pulling the conversation to their own interests regardless, and rambling on about their persona with little curiosity toward the partner. In this work, we propose COSPLAY (COncept Set guided PersonaLized dialogue generation Across both partY personas), which considers both parties as a "team": expressing self-persona while keeping curiosity toward the partner, leading responses around mutual personas, and finding the common ground. Specifically, we first represent self-persona, partner persona and mutual dialogue all as concept sets. Then, we propose the Concept Set framework with a suite of knowledge-enhanced operations to process them, such as set algebras, set expansion, and set distance. Using these operations as a medium, we train the model by utilizing 1) concepts of both party personas, 2) the concept relationship between them, and 3) their relationship to the future dialogue. Extensive experiments on a large public dataset, Persona-Chat, demonstrate that our model outperforms state-of-the-art baselines in generating less egocentric, more human-like, and higher quality responses in both automatic and human evaluations. | https://arxiv.org/abs/2205.00872v3 | https://arxiv.org/pdf/2205.00872v3.pdf | null | [
"Chen Xu",
"Piji Li",
"Wei Wang",
"Haoran Yang",
"Siyun Wang",
"Chuangbai Xiao"
] | [
"Dialogue Generation"
] | 1,651,449,600,000 | [] | 144,337 |
287,115 | https://paperswithcode.com/paper/textit-latent-glat-glancing-at-latent | 2204.02030 | $\textit{latent}$-GLAT: Glancing at Latent Variables for Parallel Text Generation | Recently, parallel text generation has received widespread attention due to its success in generation efficiency. Although many advanced techniques are proposed to improve its generation quality, they still need the help of an autoregressive model for training to overcome the one-to-many multi-modal phenomenon in the dataset, limiting their applications. In this paper, we propose $\textit{latent}$-GLAT, which employs the discrete latent variables to capture word categorical information and invoke an advanced curriculum learning technique, alleviating the multi-modality problem. Experiment results show that our method outperforms strong baselines without the help of an autoregressive model, which further broadens the application scenarios of the parallel decoding paradigm. | https://arxiv.org/abs/2204.02030v1 | https://arxiv.org/pdf/2204.02030v1.pdf | null | [
"Yu Bao",
"Hao Zhou",
"ShuJian Huang",
"Dongqi Wang",
"Lihua Qian",
"Xinyu Dai",
"Jiajun Chen",
"Lei LI"
] | [
"Text Generation"
] | 1,649,116,800,000 | [] | 151,679 |
267,301 | https://paperswithcode.com/paper/mutual-adversarial-training-learning-together | 2112.05005 | Mutual Adversarial Training: Learning together is better than going alone | Recent studies have shown that robustness to adversarial attacks can be transferred across networks. In other words, we can make a weak model more robust with the help of a strong teacher model. We ask if instead of learning from a static teacher, can models "learn together" and "teach each other" to achieve better robustness? In this paper, we study how interactions among models affect robustness via knowledge distillation. We propose mutual adversarial training (MAT), in which multiple models are trained together and share the knowledge of adversarial examples to achieve improved robustness. MAT allows robust models to explore a larger space of adversarial samples, and find more robust feature spaces and decision boundaries. Through extensive experiments on CIFAR-10 and CIFAR-100, we demonstrate that MAT can effectively improve model robustness and outperform state-of-the-art methods under white-box attacks, bringing $\sim$8% accuracy gain to vanilla adversarial training (AT) under PGD-100 attacks. In addition, we show that MAT can also mitigate the robustness trade-off among different perturbation types, bringing as much as 13.1% accuracy gain to AT baselines against the union of $l_\infty$, $l_2$ and $l_1$ attacks. These results show the superiority of the proposed method and demonstrate that collaborative learning is an effective strategy for designing robust models. | https://arxiv.org/abs/2112.05005v1 | https://arxiv.org/pdf/2112.05005v1.pdf | null | [
"Jiang Liu",
"Chun Pong Lau",
"Hossein Souri",
"Soheil Feizi",
"Rama Chellappa"
] | [
"Knowledge Distillation"
] | 1,639,008,000,000 | [] | 108,516 |
276,903 | https://paperswithcode.com/paper/learning-to-bootstrap-for-combating-label | 2202.04291 | Learning to Bootstrap for Combating Label Noise | Deep neural networks are powerful tools for representation learning, but can easily overfit to noisy labels which are prevalent in many real-world scenarios. Generally, noisy supervision could stem from variation among labelers, label corruption by adversaries, etc. To combat such label noises, one popular line of approach is to apply customized weights to the training instances, so that the corrupted examples contribute less to the model learning. However, such learning mechanisms potentially erase important information about the data distribution and therefore yield suboptimal results. To leverage useful information from the corrupted instances, an alternative is the bootstrapping loss, which reconstructs new training targets on-the-fly by incorporating the network's own predictions (i.e., pseudo-labels). In this paper, we propose a more generic learnable loss objective which enables a joint reweighting of instances and labels at once. Specifically, our method dynamically adjusts the per-sample importance weight between the real observed labels and pseudo-labels, where the weights are efficiently determined in a meta process. Compared to the previous instance reweighting methods, our approach concurrently conducts implicit relabeling, and thereby yield substantial improvements with almost no extra cost. Extensive experimental results demonstrated the strengths of our approach over existing methods on multiple natural and medical image benchmark datasets, including CIFAR-10, CIFAR-100, ISIC2019 and Clothing 1M. The code is publicly available at https://github.com/yuyinzhou/L2B. | https://arxiv.org/abs/2202.04291v1 | https://arxiv.org/pdf/2202.04291v1.pdf | null | [
"Yuyin Zhou",
"Xianhang Li",
"Fengze Liu",
"Xuxi Chen",
"Lequan Yu",
"Cihang Xie",
"Matthew P. Lungren",
"Lei Xing"
] | [
"Image Classification",
"Representation Learning"
] | 1,644,364,800,000 | [] | 171,518 |
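The abstract above contrasts instance reweighting with the bootstrapping loss, which mixes observed labels with the network's own predictions. A minimal PyTorch sketch of the classic fixed-weight soft-bootstrapping loss (Reed et al.-style) is shown below; note the paper itself learns per-sample weights in a meta process, which this sketch does not implement, and the function name and `beta` value are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_bootstrap_loss(logits: torch.Tensor, target: torch.Tensor,
                        beta: float = 0.8) -> torch.Tensor:
    """Cross-entropy against a convex mix of the observed label and the model's prediction.

    beta = 1 recovers plain cross-entropy; smaller beta trusts the pseudo-label more.
    This is the classic fixed-weight bootstrapping loss, not the paper's learned
    per-sample weights.
    """
    probs = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(target, num_classes=logits.size(-1)).float()
    mixed = beta * one_hot + (1.0 - beta) * probs.detach()  # reconstructed target
    return -(mixed * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(soft_bootstrap_loss(logits, labels))
```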
173,019 | https://paperswithcode.com/paper/towards-personalized-explanation-of-robotic | 2011.00524 | Towards Personalized Explanation of Robot Path Planning via User Feedback | Prior studies have found that explaining robot decisions and actions helps to increase system transparency, improve user understanding, and enable effective human-robot collaboration. In this paper, we present a system for generating personalized explanations of robot path planning via user feedback. We consider a robot navigating in an environment modeled as a Markov decision process (MDP), and develop an algorithm to automatically generate a personalized explanation of an optimal MDP policy, based on the user preference regarding four elements (i.e., objective, locality, specificity, and corpus). In addition, we design the system to interact with users via answering users' further questions about the generated explanations. Users have the option to update their preferences to view different explanations. The system is capable of detecting and resolving any preference conflict via user interaction. The results of an online user study show that the generated personalized explanations improve user satisfaction, while the majority of users liked the system's capabilities of question-answering and conflict detection/resolution. | https://arxiv.org/abs/2011.00524v2 | https://arxiv.org/pdf/2011.00524v2.pdf | null | [
"Kayla Boggess",
"Shenghui Chen",
"Lu Feng"
] | [
"Question Answering"
] | 1,604,188,800,000 | [] | 90,951 |
172,815 | https://paperswithcode.com/paper/efficient-arabic-emotion-recognition-using | 2011.00346 | Efficient Arabic emotion recognition using deep neural networks | Emotion recognition from speech signal based on deep learning is an active research area. Convolutional neural networks (CNNs) may be the dominant method in this area. In this paper, we implement two neural architectures to address this problem. The first architecture is an attention-based CNN-LSTM-DNN model. In this novel architecture, the convolutional layers extract salient features and the bi-directional long short-term memory (BLSTM) layers handle the sequential phenomena of the speech signal. This is followed by an attention layer, which extracts a summary vector that is fed to the fully connected dense layer (DNN), which finally connects to a softmax output layer. The second architecture is based on a deep CNN model. The results on an Arabic speech emotion recognition task show that our innovative approach can lead to significant improvements (2.2% absolute improvements) over a strong deep CNN baseline system. On the other hand, the deep CNN models are significantly faster than the attention based CNN-LSTM-DNN models in training and classification. | https://arxiv.org/abs/2011.00346v1 | https://arxiv.org/pdf/2011.00346v1.pdf | null | [
"Ahmed Ali",
"Yasser Hifny"
] | [
"Emotion Recognition",
"Speech Emotion Recognition"
] | 1,604,102,400,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] | 4,394 |
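The Softmax card above states $P(y=j \mid x) = e^{x^{T}w_{j}} / \sum^{K}_{k=1} e^{x^{T}w_{k}}$; a minimal, numerically stable NumPy sketch of the operation on precomputed logits might look like the following (the function name is illustrative).

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Softmax along the last axis, shifted by the max for numerical stability."""
    z = x - np.max(x, axis=-1, keepdims=True)  # shifting does not change the result
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))        # probabilities
print(softmax(logits).sum())  # 1.0
```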
114,943 | https://paperswithcode.com/paper/semantic-aware-scene-recognition | 1909.02410 | Semantic-Aware Scene Recognition | Scene recognition is currently one of the top-challenging research fields in computer vision. This may be due to the ambiguity between classes: images of several scene classes may share similar objects, which causes confusion among them. The problem is aggravated when images of a particular scene class are notably different. Convolutional Neural Networks (CNNs) have significantly boosted performance in scene recognition, albeit it is still far below that of other recognition tasks (e.g., object or image recognition). In this paper, we describe a novel approach for scene recognition based on an end-to-end multi-modal CNN that combines image and context information by means of an attention module. Context information, in the shape of semantic segmentation, is used to gate features extracted from the RGB image by leveraging on information encoded in the semantic representation: the set of scene objects and stuff, and their relative locations. This gating process reinforces the learning of indicative scene content and enhances scene disambiguation by refocusing the receptive fields of the CNN towards them. Experimental results on four publicly available datasets show that the proposed approach outperforms every other state-of-the-art method while significantly reducing the number of network parameters. All the code and data used in this paper are available at https://github.com/vpulab/Semantic-Aware-Scene-Recognition | https://arxiv.org/abs/1909.02410v3 | https://arxiv.org/pdf/1909.02410v3.pdf | null | [
"Alejandro López-Cifuentes",
"Marcos Escudero-Viñolo",
"Jesús Bescós",
"Álvaro García-Martín"
] | [
"Scene Classification",
"Scene Recognition",
"Semantic Segmentation"
] | 1,567,641,600,000 | [] | 118,055 |
20,324 | https://paperswithcode.com/paper/modified-alpha-rooting-color-image | 1707.04781 | Modified Alpha-Rooting Color Image Enhancement Method On The Two-Side 2-D Quaternion Discrete Fourier Transform And The 2-D Discrete Fourier Transform | Color in an image is resolved into 3 or 4 color components, and 2-D images of these components are stored in separate channels. Most color image enhancement algorithms are applied channel-by-channel on each image, but such a system of color image processing does not process the original color. When a color image is represented as a quaternion image, processing is done in original colors. This paper proposes an implementation of the quaternion approach to enhancing color images, referred to as modified alpha-rooting by the two-dimensional quaternion discrete Fourier transform (2-D QDFT). Enhancement results of this proposed method are compared with channel-by-channel image enhancement by the 2-D DFT. Enhancements in color images are quantitatively measured by the color enhancement measure estimation (CEME), which allows for selecting optimum parameters for processing by the genetic algorithm. Enhancement of color images by the quaternion-based method allows for obtaining images which are closer to the genuine representation of the real original color. | http://arxiv.org/abs/1707.04781v1 | http://arxiv.org/pdf/1707.04781v1.pdf | null | [
"Artyom M. Grigoryan",
"Aparna John",
"Sos S. Agaian"
] | [
"Image Enhancement"
] | 1,500,076,800,000 | [] | 130,655 |
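The abstract above compares quaternion alpha-rooting against channel-by-channel enhancement by the 2-D DFT. A minimal sketch of that classical channel-wise alpha-rooting baseline (not the paper's quaternion 2-D QDFT method) is given below, assuming images scaled to [0, 1]; the function name and the choice alpha = 0.9 are illustrative.

```python
import numpy as np

def alpha_rooting(channel: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Classical alpha-rooting enhancement of one image channel via the 2-D DFT.

    Keeps the Fourier phase and raises each magnitude to the power `alpha`
    (0 < alpha < 1 boosts high-frequency detail). This is the channel-by-channel
    2-D DFT baseline the abstract compares against, not the quaternion 2-D QDFT.
    """
    F = np.fft.fft2(channel)
    G = F * (np.abs(F) + 1e-12) ** (alpha - 1.0)  # |F|^alpha * exp(i * phase)
    out = np.real(np.fft.ifft2(G))
    # Rescale to [0, 1] for display.
    return (out - out.min()) / (out.max() - out.min() + 1e-12)

rgb = np.random.rand(64, 64, 3)
enhanced = np.stack([alpha_rooting(rgb[..., c]) for c in range(3)], axis=-1)
```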
264,582 | https://paperswithcode.com/paper/turkish-named-entity-recognition-a-survey-and | null | Turkish Named Entity Recognition: A Survey and Comparative Analysis | Named entity recognition is a challenging task that has been widely studied in English. Although there are some efforts for named entity recognition in Turkish language, the reported results are limited to particular datasets and models. Moreover, there is a lack of comparative analysis for named entity recognition in Turkish. In this study, we contribute to the literature in three folds. First, we provide an up-to-date short survey on Turkish named entity recognition studies. Second, we compare state-of-the-art named entity recognition models on various Turkish datasets that we can access to. Lastly, we analyze a set of linguistic processing steps that would affect the performance of Turkish named entity recognition. | https://openreview.net/forum?id=xCcOlhiNyu | https://openreview.net/pdf?id=xCcOlhiNyu | ACL ARR October 2021 10 | [
"Anonymous"
] | [
"Named Entity Recognition",
"Named Entity Recognition"
] | 1,634,342,400,000 | [] | 105,530 |
142,818 | https://paperswithcode.com/paper/method-for-customizable-automated-tagging | 2005.00042 | Method for Customizable Automated Tagging: Addressing the Problem of Over-tagging and Under-tagging Text Documents | Using author provided tags to predict tags for a new document often results in the overgeneration of tags. In the case where the author doesn't provide any tags, our documents face the severe under-tagging issue. In this paper, we present a method to generate a universal set of tags that can be applied widely to a large document corpus. Using IBM Watson's NLU service, first, we collect keywords/phrases that we call "complex document tags" from 8,854 popular reports in the corpus. We apply LDA model over these complex document tags to generate a set of 765 unique "simple tags". In applying the tags to a corpus of documents, we run each document through the IBM Watson NLU and apply appropriate simple tags. Using only 765 simple tags, our method allows us to tag 87,397 out of 88,583 total documents in the corpus with at least one tag. About 92.1% of the total 87,397 documents are also determined to be sufficiently-tagged. In the end, we discuss the performance of our method and its limitations. | https://arxiv.org/abs/2005.00042v1 | https://arxiv.org/pdf/2005.00042v1.pdf | null | [
"Maharshi R. Pandya",
"Jessica Reyes",
"Bob Vanderheyden"
] | [
"TAG"
] | 1,588,204,800,000 | [
{
"code_snippet_url": null,
"description": "**Linear discriminant analysis** (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.\r\n\r\nExtracted from [Wikipedia](https://en.wikipedia.org/wiki/Linear_discriminant_analysis)\r\n\r\n**Source**:\r\n\r\nPaper: [Linear Discriminant Analysis: A Detailed Tutorial](https://dx.doi.org/10.3233/AIC-170729)\r\n\r\nPublic version: [Linear Discriminant Analysis: A Detailed Tutorial](https://usir.salford.ac.uk/id/eprint/52074/)",
"full_name": "Linear Discriminant Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "LDA",
"source_title": null,
"source_url": null
}
] | 121,848 |
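Note that the abstract applies an "LDA model" over document tags, which in that context is Latent Dirichlet Allocation, whereas the attached method card describes Linear Discriminant Analysis. A minimal scikit-learn sketch of the topic-model reading (toy tag documents and 2 topics instead of the paper's 765 simple tags; all tag strings are hypothetical) might look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is the bag of complex tags extracted for one report (toy data).
tag_docs = [
    "machine_learning cloud_computing data_privacy",
    "cloud_computing kubernetes data_privacy",
    "genomics machine_learning protein_folding",
]
X = CountVectorizer().fit_transform(tag_docs)

# Reduce many complex tags to a small set of topics ("simple tags").
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic mixtures
print(doc_topics.round(2))
```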
274,624 | https://paperswithcode.com/paper/decepticons-corrupted-transformers-breach | 2201.12675 | Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models | A central tenet of Federated Learning (FL), which trains models without centralizing user data, is privacy. However, previous work has shown that the gradient updates used in FL can leak user information. While most industrial uses of FL are for text applications (e.g., keystroke prediction), nearly all attacks on FL privacy have focused on simple image classifiers. We propose a novel attack that reveals private user text by deploying malicious parameter vectors, and which succeeds even with mini-batches, multiple users, and long sequences. Unlike previous attacks on FL, the attack exploits characteristics of both the Transformer architecture and the token embedding, separately extracting tokens and positional embeddings to retrieve high-fidelity text. This work suggests that FL on text, which has historically been resistant to privacy attacks, is far more vulnerable than previously thought. | https://arxiv.org/abs/2201.12675v1 | https://arxiv.org/pdf/2201.12675v1.pdf | null | [
"Liam Fowl",
"Jonas Geiping",
"Steven Reich",
"Yuxin Wen",
"Wojtek Czaja",
"Micah Goldblum",
"Tom Goldstein"
] | [
"Federated Learning"
] | 1,643,414,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
}
] | 71,596 |
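Several cards in the record above describe the Transformer's scaled dot-product attention, $\text{Attention}(Q, K, V) = \text{softmax}(QK^{T}/\sqrt{d_k})V$; a minimal single-head NumPy sketch of that formula (shapes illustrative, no masking or multi-head projections) follows.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in the method card above."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)     # (..., seq_q, seq_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

# One head, sequence length 5, key/value dimension 8 (shapes are illustrative).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)     # (5, 8)
```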
288,526 | https://paperswithcode.com/paper/comparison-analysis-of-traditional-machine | 2204.05983 | Comparison Analysis of Traditional Machine Learning and Deep Learning Techniques for Data and Image Classification | The purpose of the study is to analyse and compare the most common machine learning and deep learning techniques used for computer vision 2D object classification tasks. Firstly, we will present the theoretical background of the Bag of Visual Words model and Deep Convolutional Neural Networks (DCNN). Secondly, we will implement a Bag of Visual Words model, the VGG16 CNN Architecture. Thirdly, we will present our custom and novel DCNN in which we test the aforementioned implementations on a modified version of the Belgium Traffic Sign dataset. Our results showcase the effects of hyperparameters on traditional machine learning and the advantage in terms of accuracy of DCNNs compared to classical machine learning methods. As our tests indicate, our proposed solution can achieve similar - and in some cases better - results than existing DCNN architectures. Finally, the technical merit of this article lies in the presented computationally simpler DCNN architecture, which we believe can pave the way towards using more efficient architectures for basic tasks. | https://arxiv.org/abs/2204.05983v1 | https://arxiv.org/pdf/2204.05983v1.pdf | null | [
"Efstathios Karypidis",
"Stylianos G. Mouslech",
"Kassiani Skoulariki",
"Alexandros Gazis"
] | [
"Image Classification"
] | 1,649,635,200,000 | [
{
"code_snippet_url": "",
"description": "Diffusion-convolutional neural networks (DCNN) is a model for graph-structured data. Through the introduction of a diffusion-convolution operation, diffusion-based representations can be learned from graph structured data and used as an effective basis for node classification.\r\n\r\nDescription and image from: [Diffusion-Convolutional Neural Networks](https://arxiv.org/pdf/1511.02136.pdf)",
"full_name": "Diffusion-Convolutional Neural Networks",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "DCNN",
"source_title": "Diffusion-Convolutional Neural Networks",
"source_url": "http://arxiv.org/abs/1511.02136v6"
}
] | 60,084 |
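The DCNN card above describes the diffusion-convolution operation in prose; a minimal NumPy sketch of one plausible reading (stacking node features diffused over successive hops of the degree-normalized transition matrix, omitting the learnable per-hop weights and nonlinearity of the actual model) is given below. The function name and hop count are illustrative.

```python
import numpy as np

def diffusion_conv_features(A: np.ndarray, X: np.ndarray, hops: int = 3) -> np.ndarray:
    """Stack node features diffused over 0..hops-1 steps of the transition matrix.

    P = D^{-1} A is the degree-normalized transition matrix, and each node is
    represented by [X, PX, P^2 X, ...]. Learnable weights are omitted here.
    """
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1e-12)         # row-stochastic transition matrix
    feats, cur = [], X
    for _ in range(hops):
        feats.append(cur)
        cur = P @ cur                      # one more diffusion step
    return np.concatenate(feats, axis=1)   # (n_nodes, hops * n_features)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy path graph
X = np.eye(3)
print(diffusion_conv_features(A, X, hops=2).shape)  # (3, 6)
```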
47,314 | https://paperswithcode.com/paper/near-optimal-bayesian-active-learning-with-1 | 1010.3091 | Near-Optimal Bayesian Active Learning with Noisy Observations | We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that if the observations are noisy, perhaps surprisingly, GBS can perform very poorly. We develop EC2, a novel, greedy active learning algorithm and prove that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing returns property called adaptive submodularity, generalizing the classical notion of submodular set functions to adaptive policies. Our results hold even if the tests have non-uniform cost and their noise is correlated. We also propose EffECXtive, a particularly fast approximation of EC2, and evaluate it on a Bayesian experimental design problem involving human subjects, intended to tease apart competing economic theories of how people make decisions under uncertainty. | http://arxiv.org/abs/1010.3091v2 | http://arxiv.org/pdf/1010.3091v2.pdf | NeurIPS 2010 12 | [
"Daniel Golovin",
"Andreas Krause",
"Debajyoti Ray"
] | [
"Active Learning",
"Experimental Design"
] | 1,287,100,800,000 | [] | 60,433 |
37,183 | https://paperswithcode.com/paper/clustering-by-deep-nearest-neighbor-descent-d | 1512.02097 | Clustering by Deep Nearest Neighbor Descent (D-NND): A Density-based Parameter-Insensitive Clustering Method | Most density-based clustering methods largely rely on how well the underlying density is estimated. However, density estimation itself is also a challenging problem, especially the determination of the kernel bandwidth. A large bandwidth could lead to over-smoothed density estimation, in which the number of density peaks could be less than the number of true clusters, while a small bandwidth could lead to under-smoothed density estimation, in which spurious density peaks, also called "ripple noise", would be generated. In this paper, we propose a density-based hierarchical clustering method, called Deep Nearest Neighbor Descent (D-NND), which learns the underlying density structure layer by layer and captures the cluster structure at the same time. Over-smoothed density estimation can be largely avoided, and the negative effect of under-estimated cases can also be largely reduced. Overall, D-NND presents not only a strong capability of discovering the underlying cluster structure but also remarkable reliability due to its insensitivity to parameters. | http://arxiv.org/abs/1512.02097v1 | http://arxiv.org/pdf/1512.02097v1.pdf | null | [
"Teng Qiu",
"YongJie Li"
] | [
"Density Estimation"
] | 1,449,446,400,000 | [] | 25,738 |
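The abstract above attributes over-smoothing and "ripple noise" to the kernel bandwidth in density estimation. A short SciPy sketch illustrating that sensitivity on toy 1-D data is shown below; the bandwidth factors 1.0 and 0.05 are arbitrary choices for demonstration, and this is not D-NND itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Two well-separated clusters in 1-D.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
grid = np.linspace(-4, 4, 400)

wide = gaussian_kde(data, bw_method=1.0)(grid)     # over-smoothed: peaks can merge
narrow = gaussian_kde(data, bw_method=0.05)(grid)  # under-smoothed: spurious ripples

# Count local maxima of each estimate (true answer: 2 clusters).
peaks = lambda y: int((np.diff(np.sign(np.diff(y))) < 0).sum())
print(peaks(wide), peaks(narrow))
```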
43,103 | https://paperswithcode.com/paper/on-color-image-quality-assessment-using | 1411.7682 | On color image quality assessment using natural image statistics | Color distortion can introduce significant damage in visual quality perception; however, most existing reduced-reference quality measures are designed for grayscale images. In this paper, we consider a basic extension of well-known image-statistics-based quality assessment measures to color images. In order to evaluate the impact of color information on the measures' efficiency, two color spaces are investigated: RGB and CIELAB. Results of an extensive evaluation using the TID 2013 benchmark demonstrate that significant improvement can be achieved for a great number of distortion types when the CIELAB color representation is used. | http://arxiv.org/abs/1411.7682v1 | http://arxiv.org/pdf/1411.7682v1.pdf | null | [
"Mounir Omari",
"Mohammed El Hassouni",
"Hocine Cherifi",
"Abdelkaher Ait Abdelouahad"
] | [
"Image Quality Assessment"
] | 1,417,046,400,000 | [] | 109,594 |
22,880 | https://paperswithcode.com/paper/learning-image-relations-with-contrast | 1705.05665 | Learning Image Relations with Contrast Association Networks | Inferring the relations between two images is an important class of tasks in computer vision. Examples of such tasks include computing optical flow and stereo disparity. We treat the relation inference tasks as a machine learning problem and tackle it with neural networks. A key to the problem is learning a representation of relations. We propose a new neural network module, contrast association unit (CAU), which explicitly models the relations between two sets of input variables. Due to the non-negativity of the weights in CAU, we adopt a multiplicative update algorithm for learning these weights. Experiments show that neural networks with CAUs are more effective in learning five fundamental image transformations than conventional neural networks. | http://arxiv.org/abs/1705.05665v2 | http://arxiv.org/pdf/1705.05665v2.pdf | null | [
"Yao Lu",
"Zhirong Yang",
"Juho Kannala",
"Samuel Kaski"
] | [
"Optical Flow Estimation"
] | 1,494,892,800,000 | [] | 168,881 |
64,578 | https://paperswithcode.com/paper/talla-at-semeval-2018-task-7-hybrid-loss | null | Talla at SemEval-2018 Task 7: Hybrid Loss Optimization for Relation Classification using Convolutional Neural Networks | This paper describes our approach to SemEval-2018 Task 7 {--} given an entity-tagged text from the ACL Anthology corpus, identify and classify pairs of entities that have one of six possible semantic relationships. Our model consists of a convolutional neural network leveraging pre-trained word embeddings, unlabeled ACL-abstracts, and multiple window sizes to automatically learn useful features from entity-tagged sentences. We also experiment with a hybrid loss function, a combination of cross-entropy loss and ranking loss, to boost the separation in classification scores. Lastly, we include WordNet-based features to further improve the performance of our model. Our best model achieves an F1(macro) score of 74.2 and 84.8 on subtasks 1.1 and 1.2, respectively. | https://aclanthology.org/S18-1139 | https://aclanthology.org/S18-1139.pdf | SEMEVAL 2018 6 | [
"Bhanu Pratap",
"Daniel Shank",
"Oladipo Ositelu",
"Byron Galbraith"
] | [
"Feature Engineering",
"Classification",
"Question Answering",
"Relation Classification",
"Word Embeddings"
] | 1,527,811,200,000 | [] | 129,313 |
52,388 | https://paperswithcode.com/paper/deep-pictorial-gaze-estimation | 1807.10002 | Deep Pictorial Gaze Estimation | Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil- and the eyeball center where the latter is unobservable in 2D images. Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality. | http://arxiv.org/abs/1807.10002v1 | http://arxiv.org/pdf/1807.10002v1.pdf | ECCV 2018 9 | [
"Seonwook Park",
"Adrian Spurr",
"Otmar Hilliges"
] | [
"Gaze Estimation"
] | 1,532,563,200,000 | [] | 13,556 |
32,819 | https://paperswithcode.com/paper/a-probabilistic-generative-grammar-for | 1606.06361 | A Probabilistic Generative Grammar for Semantic Parsing | Domain-general semantic parsing is a long-standing goal in natural language processing, where the semantic parser is capable of robustly parsing sentences from domains outside of which it was trained. Current approaches largely rely on additional supervision from new domains in order to generalize to those domains. We present a generative model of natural language utterances and logical forms and demonstrate its application to semantic parsing. Our approach relies on domain-independent supervision to generalize to new domains. We derive and implement efficient algorithms for training, parsing, and sentence generation. The work relies on a novel application of hierarchical Dirichlet processes (HDPs) for structured prediction, which we also present in this manuscript. This manuscript is an excerpt of chapter 4 from the Ph.D. thesis of Saparov (2022), where the model plays a central role in a larger natural language understanding system. This manuscript provides a new simplified and more complete presentation of the work first introduced in Saparov, Saraswat, and Mitchell (2017). The description and proofs of correctness of the training algorithm, parsing algorithm, and sentence generation algorithm are much simplified in this new presentation. We also describe the novel application of hierarchical Dirichlet processes for structured prediction. In addition, we extend the earlier work with a new model of word morphology, which utilizes the comprehensive morphological data from Wiktionary. | https://arxiv.org/abs/1606.06361v2 | https://arxiv.org/pdf/1606.06361v2.pdf | CONLL 2017 8 | [
"Abulhair Saparov"
] | [
"Natural Language Understanding",
"Semantic Parsing",
"Structured Prediction"
] | 1,466,380,800,000 | [] | 77,564 |
53,780 | https://paperswithcode.com/paper/hybrid-asp-based-approach-to-pattern-mining | 1808.07302 | Hybrid ASP-based Approach to Pattern Mining | Detecting small sets of relevant patterns from a given dataset is a central
challenge in data mining. The relevance of a pattern is based on user-provided
criteria; typically, all patterns that satisfy certain criteria are considered
relevant. Rule-based languages like Answer Set Programming (ASP) seem
well-suited for specifying such criteria in a form of constraints. Although
progress has been made, on the one hand, on solving individual mining problems
and, on the other hand, developing generic mining systems, the existing methods
either focus on scalability or on generality. In this paper we make steps
towards combining local (frequency, size, cost) and global (various condensed
representations like maximal, closed, skyline) constraints in a generic and
efficient way. We present a hybrid approach for itemset, sequence and graph
mining which exploits dedicated highly optimized mining systems to detect
frequent patterns and then filters the results using declarative ASP. To
further demonstrate the generic nature of our hybrid framework we apply it to a
problem of approximately tiling a database. Experiments on real-world datasets
show the effectiveness of the proposed method and computational gains for
itemset, sequence and graph mining, as well as approximate tiling.
Under consideration in Theory and Practice of Logic Programming (TPLP). | http://arxiv.org/abs/1808.07302v1 | http://arxiv.org/pdf/1808.07302v1.pdf | null | [
"Sergey Paramonov",
"Daria Stepanova",
"Pauli Miettinen"
] | [
"Graph Mining"
] | 1,534,896,000,000 | [] | 175,514 |
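
The hybrid pipeline above first runs a dedicated miner for frequent patterns and then filters them declaratively. A toy Python stand-in — naive frequency mining followed by a global "maximal" post-filter, with the ASP layer replaced by plain set logic — is sketched below; transactions are assumed to be Python sets:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Naive miner: enumerate candidate itemsets by size and keep those that
    meet the local frequency constraint. (Real miners are highly optimized;
    this version is exponential in the item count and only illustrative.)"""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for size in range(1, len(items) + 1):
        found_any = False
        for cand in combinations(items, size):
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= min_support:
                frequent[cand] = support
                found_any = True
        if not found_any:  # Apriori property: no larger itemset can be frequent
            break
    return frequent

def maximal(frequent):
    """Global condensed-representation filter: keep itemsets with no
    frequent strict superset."""
    return {s: c for s, c in frequent.items()
            if not any(set(s) < set(o) for o in frequent)}

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(maximal(frequent_itemsets(txns, min_support=2)))
```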
101,552 | https://paperswithcode.com/paper/weakly-supervised-image-classification | null | Weakly Supervised Image Classification Through Noise Regularization | Weakly supervised learning is an essential problem in computer vision tasks, such as image classification, object recognition, etc., because it is expected to work in the scenarios where a large dataset with clean labels is not available. While there are a number of studies on weakly supervised image classification, they are usually limited to either single-label or multi-label scenarios. In this work, we propose an effective approach for weakly supervised image classification utilizing massive noisy labeled data with only a small set of clean labels (e.g., 5%). The proposed approach consists of a clean net and a residual net, which aim to learn a mapping from feature space to clean label space and a residual mapping from feature space to the residual between clean labels and noisy labels, respectively, in a multi-task learning manner. Thus, the residual net works as a regularization term to improve the clean net training. We evaluate the proposed approach on two multi-label datasets (OpenImage and MS COCO2014) and a single-label dataset (Clothing1M). Experimental results show that the proposed approach outperforms the state-of-the-art methods, and generalizes well to both single-label and multi-label scenarios. | http://openaccess.thecvf.com/content_CVPR_2019/html/Hu_Weakly_Supervised_Image_Classification_Through_Noise_Regularization_CVPR_2019_paper.html | http://openaccess.thecvf.com/content_CVPR_2019/papers/Hu_Weakly_Supervised_Image_Classification_Through_Noise_Regularization_CVPR_2019_paper.pdf | CVPR 2019 6 | [
"Mengying Hu",
" Hu Han",
" Shiguang Shan",
" Xilin Chen"
] | [
"Classification",
"Classification",
"Image Classification",
"Multi-Task Learning",
"Object Recognition"
] | 1,559,347,200,000 | [] | 22,961 |
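
The clean-net/residual-net decomposition above can be sketched as a two-head network trained in a multi-task manner. A minimal PyTorch version for the multi-label case follows; the layer sizes and the squared-error fit to noisy labels are illustrative choices, not the authors' exact architecture or losses:

```python
import torch
import torch.nn as nn

class CleanResidualNet(nn.Module):
    """Shared features feed a clean head (feature -> clean label space) and a
    residual head (feature -> gap between clean and noisy labels)."""
    def __init__(self, in_dim, n_classes, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.clean_head = nn.Linear(hidden, n_classes)
        self.residual_head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.clean_head(h), self.residual_head(h)

def multitask_loss(clean_logits, residual, y_clean, y_noisy, has_clean):
    """Supervise the clean head only where clean labels exist (~5% of data);
    everywhere, clean prediction plus residual should reproduce the noisy
    label, so the residual head regularizes the clean head."""
    noisy_fit = nn.functional.mse_loss(
        torch.sigmoid(clean_logits) + residual, y_noisy)
    if has_clean.any():
        clean_fit = nn.functional.binary_cross_entropy_with_logits(
            clean_logits[has_clean], y_clean[has_clean])
        return clean_fit + noisy_fit
    return noisy_fit
```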
245,621 | https://paperswithcode.com/paper/ee-net-exploitation-exploration-neural | 2110.03177 | EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits | In this paper, we propose a novel neural exploration strategy in contextual bandits, EE-Net, distinct from the standard UCB-based and TS-based approaches. Contextual multi-armed bandits have been studied for decades with various applications. To solve the exploitation-exploration tradeoff in bandits, there are three main techniques: epsilon-greedy, Thompson Sampling (TS), and Upper Confidence Bound (UCB). In recent literature, linear contextual bandits have adopted ridge regression to estimate the reward function and combine it with TS or UCB strategies for exploration. However, this line of works explicitly assumes the reward is based on a linear function of arm vectors, which may not be true in real-world datasets. To overcome this challenge, a series of neural bandit algorithms have been proposed, where a neural network is used to learn the underlying reward function and TS or UCB are adapted for exploration. Instead of calculating a large-deviation based statistical bound for exploration like previous methods, we propose "EE-Net", a novel neural-based exploration strategy. In addition to using a neural network (Exploitation network) to learn the reward function, EE-Net uses another neural network (Exploration network) to adaptively learn potential gains compared to the currently estimated reward for exploration. Then, a decision-maker is constructed to combine the outputs from the Exploitation and Exploration networks. We prove that EE-Net can achieve $\mathcal{O}(\sqrt{T\log T})$ regret and show that EE-Net outperforms existing linear and neural contextual bandit baselines on real-world datasets. | https://arxiv.org/abs/2110.03177v8 | https://arxiv.org/pdf/2110.03177v8.pdf | ICLR 2022 4 | [
"Yikun Ban",
"Yuchen Yan",
"Arindam Banerjee",
"Jingrui He"
] | [
"Multi-Armed Bandits"
] | 1,633,564,800,000 | [
{
"code_snippet_url": "https://github.com/mchelali/TemporalStability",
"description": "Spatio-temporal features extraction that measure the stabilty. The proposed method is based on a compression algorithm named Run Length Encoding. The workflow of the method is presented bellow.",
"full_name": "Spatio-temporal stability analysis",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Feature Extractors** for object detection are modules used to construct features that can be used for detecting objects. They address issues such as the need to detect multiple-sized objects in an image (and the need to have representations that are suitable for the different scales).",
"name": "Feature Extractors",
"parent": null
},
"name": "TS",
"source_title": null,
"source_url": null
}
] | 79,977 |
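
For contrast with EE-Net's learned exploration, the three classical strategies the abstract lists can each be written in a few lines for Bernoulli arms (the exploration constant `c` and `eps` below are assumed defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_pick(means, eps=0.1):
    """Explore uniformly at random with probability eps, else exploit."""
    if rng.random() < eps:
        return int(rng.integers(len(means)))
    return int(np.argmax(means))

def ucb_pick(means, counts, t, c=2.0):
    """UCB: empirical mean plus a confidence bonus that shrinks with pulls."""
    bonus = np.sqrt(c * np.log(t + 1) / np.maximum(counts, 1))
    return int(np.argmax(means + bonus))

def thompson_pick(successes, failures):
    """TS: sample a plausible reward per arm from its Beta posterior."""
    return int(np.argmax(rng.beta(successes + 1, failures + 1)))
```

EE-Net replaces the closed-form bonus/posterior with a second neural network trained to predict the potential gain over the exploitation network's estimate; that training loop is omitted here.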
159,465 | https://paperswithcode.com/paper/hey-human-if-your-facial-emotions-are | 2008.07426 | Hey Human, If your Facial Emotions are Uncertain, You Should Use Bayesian Neural Networks! | Facial emotion recognition is the task of classifying human emotions in face images. It is a difficult task due to high aleatoric uncertainty and visual ambiguity. A large part of the literature aims to show progress by increasing accuracy on this task, but this ignores the inherent uncertainty and ambiguity in the task. In this paper we show that Bayesian Neural Networks, as approximated using MC-Dropout, MC-DropConnect, or an Ensemble, are able to model the aleatoric uncertainty in facial emotion recognition, and produce output probabilities that are closer to what a human expects. We also find that calibration metrics exhibit strange behaviors for this task, due to the multiple classes that can be considered correct, which motivates future work. We believe our work will motivate other researchers to move away from Classical and into Bayesian Neural Networks. | https://arxiv.org/abs/2008.07426v1 | https://arxiv.org/pdf/2008.07426v1.pdf | null | [
"Maryam Matin",
"Matias Valdenegro-Toro"
] | [
"Emotion Recognition",
"Facial Emotion Recognition"
] | 1,597,622,400,000 | [] | 26,336 |
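
Of the three approximations the paper evaluates, MC-Dropout is the simplest to reproduce: keep dropout active at test time and average the softmax outputs over stochastic forward passes. A sketch, assuming the model contains dropout layers and no BatchNorm (which `model.train()` would also toggle):

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Mean softmax over n_samples stochastic passes approximates the
    predictive distribution; the per-class std is an uncertainty signal."""
    model.train()  # keeps dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)
```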
250,976 | https://paperswithcode.com/paper/incremental-learning-for-animal-pose | 2110.13598 | Incremental Learning for Animal Pose Estimation using RBF k-DPP | Pose estimation is the task of locating keypoints for an object of interest in an image. Animal Pose estimation is more challenging than estimating human pose due to high inter and intra class variability in animals. Existing works solve this problem for a fixed set of predefined animal categories. Models trained on such sets usually do not work well with new animal categories. Retraining the model on new categories makes the model overfit and leads to catastrophic forgetting. Thus, in this work, we propose a novel problem of "Incremental Learning for Animal Pose Estimation". Our method uses an exemplar memory, sampled using Determinantal Point Processes (DPP) to continually adapt to new animal categories without forgetting the old ones. We further propose a new variant of k-DPP that uses RBF kernel (termed as "RBF k-DPP") which gives more gain in performance over traditional k-DPP. Due to memory constraints, the limited number of exemplars along with new class data can lead to class imbalance. We mitigate it by performing image warping as an augmentation technique. This helps in crafting diverse poses, which reduces overfitting and yields further improvement in performance. The efficacy of our proposed approach is demonstrated via extensive experiments and ablations where we obtain significant improvements over state-of-the-art baseline methods. | https://arxiv.org/abs/2110.13598v1 | https://arxiv.org/pdf/2110.13598v1.pdf | null | [
"Gaurav Kumar Nayak",
"Het Shah",
"Anirban Chakraborty"
] | [
"Animal Pose Estimation",
"Incremental Learning",
"Point Processes",
"Pose Estimation"
] | 1,635,206,400,000 | [] | 64,279 |
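
The RBF kernel that parameterizes the proposed RBF k-DPP is easy to construct; exact k-DPP sampling requires an eigendecomposition of the kernel, so the greedy log-det maximizer below is only a cheap stand-in for drawing a diverse exemplar set (`gamma` and the greedy substitution are assumptions, not the paper's procedure):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """L[i, j] = exp(-gamma * ||x_i - x_j||^2) over row-vector features."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def greedy_diverse_subset(L, k):
    """Greedily grow the subset whose kernel submatrix has maximal log-det,
    i.e. pick exemplars that are jointly dissimilar."""
    selected, remaining = [], list(range(L.shape[0]))
    for _ in range(k):
        gains = {}
        for i in remaining:
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0:
                gains[i] = logdet
        best = max(gains, key=gains.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```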
176,210 | https://paperswithcode.com/paper/unsupervised-domain-adaptation-of-a | 2011.11499 | Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model | Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements over various cross-lingual and low-resource tasks. Through training on one hundred languages and terabytes of texts, cross-lingual language models have proven to be effective in leveraging high-resource languages to enhance low-resource language processing and outperform monolingual models. In this paper, we further investigate the cross-lingual and cross-domain (CLCD) setting when a pretrained cross-lingual language model needs to adapt to new domains. Specifically, we propose a novel unsupervised feature decomposition method that can automatically extract domain-specific features and domain-invariant features from the entangled pretrained cross-lingual representations, given unlabeled raw texts in the source language. Our proposed model leverages mutual information estimation to decompose the representations computed by a cross-lingual model into domain-invariant and domain-specific parts. Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting. The source code of this paper is publicly available at https://github.com/lijuntaopku/UFD. | https://arxiv.org/abs/2011.11499v1 | https://arxiv.org/pdf/2011.11499v1.pdf | null | [
"Juntao Li",
"Ruidan He",
"Hai Ye",
"Hwee Tou Ng",
"Lidong Bing",
"Rui Yan"
] | [
"Domain Adaptation",
"Language Modelling",
"Mutual Information Estimation",
"Unsupervised Domain Adaptation"
] | 1,606,089,600,000 | [] | 46,942 |
132,011 | https://paperswithcode.com/paper/a-framework-for-large-scale-mapping-of-human | 2001.11935 | A framework for large-scale mapping of human settlement extent from Sentinel-2 images via fully convolutional neural networks | Human settlement extent (HSE) information is a valuable indicator of world-wide urbanization as well as the resulting human pressure on the natural environment. Therefore, mapping HSE is critical for various environmental issues at local, regional, and even global scales. This paper presents a deep-learning-based framework to automatically map HSE from multi-spectral Sentinel-2 data using regionally available geo-products as training labels. A straightforward, simple, yet effective fully convolutional network-based architecture, Sen2HSE, is implemented as an example for semantic segmentation within the framework. The framework is validated against both manually labelled checking points distributed evenly over the test areas, and the OpenStreetMap building layer. The HSE mapping results were extensively compared to several baseline products in order to thoroughly evaluate the effectiveness of the proposed HSE mapping framework. The HSE mapping power is consistently demonstrated over 10 representative areas across the world. We also present one regional-scale and one country-wide HSE mapping example from our framework to show the potential for upscaling. The results of this study contribute to the generalization of the applicability of CNN-based approaches for large-scale urban mapping to cases where no up-to-date and accurate ground truth is available, as well as the subsequent monitor of global urbanization. | https://arxiv.org/abs/2001.11935v1 | https://arxiv.org/pdf/2001.11935v1.pdf | null | [
"C. Qiu",
"M. Schmitt",
"C. Geiss",
"T. K. Chen",
"X. X. Zhu"
] | [
"Semantic Segmentation"
] | 1,580,428,800,000 | [] | 168,489 |
8,109 | https://paperswithcode.com/paper/forecasting-economics-and-financial-time | 1803.06386 | Forecasting Economics and Financial Time Series: ARIMA vs. LSTM | Forecasting time series data is an important subject in economics, business,
and finance. Traditionally, there are several techniques to effectively
forecast the next lag of time series data such as univariate Autoregressive
(AR), univariate Moving Average (MA), Simple Exponential Smoothing (SES), and
more notably Autoregressive Integrated Moving Average (ARIMA) with its many
variations. In particular, the ARIMA model has demonstrated superior
precision and accuracy in predicting the next lags of a time series. With the
recent advancement in the computational power of computers and, more importantly,
the development of more advanced machine learning algorithms and approaches such
as deep learning, new algorithms have been developed to forecast time series
data. The research question investigated in this article is whether and how the
newly developed deep learning-based algorithms for forecasting time series
data, such as "Long Short-Term Memory (LSTM)", are superior to the traditional
algorithms. The empirical studies conducted and reported in this article show
that deep learning-based algorithms such as LSTM outperform traditional
algorithms such as the ARIMA model. More specifically, the average reduction in
error rates obtained by LSTM is between 84 and 87 percent when compared to
ARIMA, indicating the superiority of LSTM. Furthermore, it was noticed that
the number of training passes, known as "epochs" in deep learning, has no effect
on the performance of the trained forecast model, which exhibits truly random
behavior. | http://arxiv.org/abs/1803.06386v1 | http://arxiv.org/pdf/1803.06386v1.pdf | null | [
"Sima Siami-Namini",
"Akbar Siami Namin"
] | [
"Time Series"
] | 1,521,158,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] | 49,388 |
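
The ARIMA side of such comparisons is usually scored with one-step-ahead rolling forecasts. A statsmodels sketch follows, with an assumed order of (5, 1, 0) and a 70/30 split — the paper's exact configuration is not restated here. The LSTM counterpart would be trained on lagged windows of the same series and evaluated with the same RMSE:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rolling_arima_rmse(series, order=(5, 1, 0), train_frac=0.7):
    """Refit ARIMA at each step, forecast one step ahead, then reveal the
    true value; RMSE over the test span is the comparison metric."""
    split = int(len(series) * train_frac)
    history = list(series[:split])
    sq_errors = []
    for actual in series[split:]:
        forecast = ARIMA(history, order=order).fit().forecast(steps=1)[0]
        sq_errors.append((forecast - actual) ** 2)
        history.append(actual)  # walk forward with the observed value
    return float(np.sqrt(np.mean(sq_errors)))
```

Refitting at every step is expensive but mirrors the standard walk-forward evaluation protocol for this kind of benchmark.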
220,828 | https://paperswithcode.com/paper/continual-learning-for-real-world-autonomous | 2105.12374 | Continual Learning for Real-World Autonomous Systems: Algorithms, Challenges and Frameworks | Continual learning is essential for all real-world applications, as frozen pre-trained models cannot effectively deal with non-stationary data distributions. The purpose of this study is to review the state-of-the-art methods that allow continuous learning of computational models over time. We primarily focus on the learning algorithms that perform continuous learning in an online fashion from considerably large (or infinite) sequential data and require substantially low computational and memory resources. We critically analyze the key challenges associated with continual learning for autonomous real-world systems and compare current methods in terms of computations, memory, and network/model complexity. We also briefly describe the implementations of continuous learning algorithms under three main autonomous systems, i.e., self-driving vehicles, unmanned aerial vehicles, and urban robots. The learning methods of these autonomous systems and their strengths and limitations are extensively explored in this article. | https://arxiv.org/abs/2105.12374v2 | https://arxiv.org/pdf/2105.12374v2.pdf | null | [
"Khadija Shaheen",
"Muhammad Abdullah Hanif",
"Osman Hasan",
"Muhammad Shafique"
] | [
"Continual Learning"
] | 1,621,987,200,000 | [] | 69,857 |
246,928 | https://paperswithcode.com/paper/scene-transformer-a-unified-architecture-for | null | Scene Transformer: A unified architecture for predicting future trajectories of multiple agents | Predicting the motion of multiple agents is necessary for planning in dynamic environments. This task is challenging for autonomous driving since agents (e.g., vehicles and pedestrians) and their associated behaviors may be diverse and influence one another. Most prior work has focused on predicting independent futures for each agent based on all past motion, and planning against these independent predictions. However, planning against independent predictions can make it challenging to represent the future interaction possibilities between different agents, leading to sub-optimal planning. In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents. Inspired by recent language modeling approaches, we use a masking strategy as the query to our model, enabling one to invoke a single model to predict agent behavior in many ways, such as potentially conditioned on the goal or full future trajectory of the autonomous vehicle or the behavior of other agents in the environment. Our model architecture employs attention to combine features across road elements, agent interactions, and time steps. We evaluate our approach on autonomous driving datasets for both marginal and joint motion prediction, and achieve state-of-the-art performance across two popular datasets. Through combining a scene-centric approach, an agent-permutation-equivariant model, and a sequence masking strategy, we show that our model can unify a variety of motion prediction tasks from joint motion predictions to conditioned prediction. | https://openreview.net/forum?id=Wm3EA5OlHsG | https://openreview.net/pdf?id=Wm3EA5OlHsG | ICLR 2022 4 | [
"Jiquan Ngiam",
"Vijay Vasudevan",
"Benjamin Caine",
"Zhengdong Zhang",
"Hao-Tien Lewis Chiang",
"Jeffrey Ling",
"Rebecca Roelofs",
"Alex Bewley",
"Chenxi Liu",
"Ashish Venugopal",
"David J Weiss",
"Ben Sapp",
"Zhifeng Chen",
"Jonathon Shlens"
] | [
"Autonomous Driving",
"Language Modelling",
"motion prediction"
] | 1,632,873,600,000 | [] | 51,421 |
273,823 | https://paperswithcode.com/paper/interactive-image-inpainting-using-semantic | 2201.10753 | Interactive Image Inpainting Using Semantic Guidance | Image inpainting approaches have achieved significant progress with the help of deep neural networks. However, existing approaches mainly focus on leveraging the prior distribution learned by neural networks to produce a single inpainting result, or on yielding multiple solutions, and controllability is not well studied. This paper develops a novel image inpainting approach that enables users to customize the inpainting result by their own preference or memory. Specifically, our approach is composed of two stages that utilize the neural network's prior and the user's guidance to jointly inpaint corrupted images. In the first stage, an autoencoder based on a novel external spatial attention mechanism is deployed to produce reconstructed features of the corrupted image and a coarse inpainting result that provides a semantic mask as the medium for user interaction. In the second stage, a semantic decoder that takes the reconstructed features as prior is adopted to synthesize a fine inpainting result guided by the user's customized semantic mask, so that the final inpainting result shares the same content as the user's guidance while the textures and colors reconstructed in the first stage are preserved. Extensive experiments demonstrate the superiority of our approach in terms of inpainting quality and controllability. | https://arxiv.org/abs/2201.10753v1 | https://arxiv.org/pdf/2201.10753v1.pdf | null | [
"Wangbo Yu",
"Jinhao Du",
"Ruixin Liu",
"Yixuan Li",
"Yuesheng Zhu"
] | [
"Image Inpainting"
] | 1,643,155,200,000 | [
{
"code_snippet_url": "",
"description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.",
"full_name": "Inpainting",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "Inpainting",
"source_title": "Context Encoders: Feature Learning by Inpainting",
"source_url": "http://arxiv.org/abs/1604.07379v2"
},
{
"code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38",
"description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder).\r\n\r\nImage: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)",
"full_name": "AutoEncoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "AutoEncoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] | 136,368 |
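
The first stage above builds on an autoencoder, described in the method entry as a bottleneck encoder/decoder pair. A minimal PyTorch sketch with illustrative dimensions, not the paper's attention-based architecture:

```python
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress the input to a low-dimensional code, then reconstruct it;
    training minimizes reconstruction error between output and input."""
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code
```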
314,008 | https://paperswithcode.com/paper/dsr-towards-drone-image-super-resolution | 2208.12327 | DSR: Towards Drone Image Super-Resolution | Despite achieving remarkable progress in recent years, single-image super-resolution methods are developed with several limitations. Specifically, they are trained on fixed content domains with certain degradations (whether synthetic or real). The priors they learn are prone to overfitting the training configuration. Therefore, the generalization to novel domains such as drone top view data, and across altitudes, is currently unknown. Nonetheless, pairing drones with proper image super-resolution is of great value. It would enable drones to fly higher covering larger fields of view, while maintaining a high image quality. To answer these questions and pave the way towards drone image super-resolution, we explore this application with particular focus on the single-image case. We propose a novel drone image dataset, with scenes captured at low and high resolutions, and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks witness a significant drop in performance on this different domain. We additionally show that simple fine-tuning, and incorporating altitude awareness into the network's architecture, both improve the reconstruction performance. | https://arxiv.org/abs/2208.12327v1 | https://arxiv.org/pdf/2208.12327v1.pdf | null | [
"Xiaoyu Lin",
"Baran Ozaydin",
"Vidit Vidit",
"Majed El Helou",
"Sabine Süsstrunk"
] | [
"Image Super-Resolution",
"Single Image Super Resolution",
"Super-Resolution"
] | 1,661,385,600,000 | [] | 48,034 |
38,791 | https://paperswithcode.com/paper/better-document-level-sentiment-analysis-from | 1509.01599 | Better Document-level Sentiment Analysis from RST Discourse Parsing | Discourse structure is the hidden link between surface features and
document-level properties, such as sentiment polarity. We show that the
discourse analyses produced by Rhetorical Structure Theory (RST) parsers can
improve document-level sentiment analysis, via composition of local information
up the discourse tree. First, we show that reweighting discourse units
according to their position in a dependency representation of the rhetorical
structure can yield substantial improvements on lexicon-based sentiment
analysis. Next, we present a recursive neural network over the RST structure,
which offers significant improvements over classification-based methods. | http://arxiv.org/abs/1509.01599v2 | http://arxiv.org/pdf/1509.01599v2.pdf | EMNLP 2015 9 | [
"Parminder Bhatia",
"Yangfeng Ji",
"Jacob Eisenstein"
] | [
"Discourse Parsing",
"Classification",
"Sentiment Analysis"
] | 1,441,324,800,000 | [] | 71,575 |
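
The recursive neural network composes representations bottom-up over the RST tree, with the sentiment classifier reading the root vector. A bare-bones numpy sketch, omitting the nuclearity-based reweighting and relation-specific parameters the paper adds:

```python
import numpy as np

def compose(tree, embed, W, b):
    """A leaf is an elementary discourse unit's embedding (shape (d,));
    an internal node is tanh(W [left; right] + b) with W of shape (d, 2d)."""
    if isinstance(tree, str):  # leaf: look up the discourse unit's vector
        return embed[tree]
    left = compose(tree[0], embed, W, b)
    right = compose(tree[1], embed, W, b)
    return np.tanh(W @ np.concatenate([left, right]) + b)
```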
135,677 | https://paperswithcode.com/paper/scaling-up-multiagent-reinforcement-learning | 2003.01040 | Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph | The complexity of multiagent reinforcement learning (MARL) in multiagent systems increases exponentially with respect to the agent number. This scalability issue prevents MARL from being applied in large-scale multiagent systems. However, one critical feature in MARL that is often neglected is that the interactions between agents are quite sparse. Without exploiting this sparsity structure, existing works aggregate information from all of the agents and thus have a high sample complexity. To address this issue, we propose an adaptive sparse attention mechanism by generalizing a sparsity-inducing activation function. Then a sparse communication graph in MARL is learned by graph neural networks based on this new attention mechanism. Through this sparsity structure, the agents can communicate in an effective as well as efficient way via only selectively attending to agents that matter the most and thus the scale of the MARL problem is reduced with little optimality compromised. Comparative results show that our algorithm can learn an interpretable sparse structure and outperforms previous works by a significant margin on applications involving a large-scale multiagent system. | https://arxiv.org/abs/2003.01040v2 | https://arxiv.org/pdf/2003.01040v2.pdf | null | [
"Chuangchuang Sun",
"Macheng Shen",
"Jonathan P. How"
] | [
"reinforcement-learning"
] | 1,583,107,200,000 | [] | 13,985 |
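
The sparsity-inducing activation generalized in the abstract belongs to the sparsemax family; unlike softmax, sparsemax can assign exactly zero attention to agents that do not matter. A numpy implementation of standard sparsemax (Martins & Astudillo, 2016) as a reference point:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex; components
    below the threshold tau come out exactly zero."""
    z_sorted = np.sort(z)[::-1]            # descending
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv      # support-set condition
    k_z = k[support][-1]                   # size of the support
    tau = (cssv[support][-1] - 1.0) / k_z
    return np.maximum(z - tau, 0.0)
```

For example, `sparsemax(np.array([2.0, 1.0, -1.0]))` returns `[1., 0., 0.]`, whereas softmax would spread mass over all three entries.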
146,739 | https://paperswithcode.com/paper/a-french-corpus-for-semantic-similarity | null | A French Corpus for Semantic Similarity | Semantic similarity is an area of Natural Language Processing that is useful for several downstream applications, such as machine translation, natural language generation, information retrieval, or question answering. The task consists in assessing the extent to which two sentences express or do not express the same meaning. To do so, corpora with graded pairs of sentences are required. The grade is positioned on a given scale, usually going from 0 (completely unrelated) to 5 (equivalent semantics). In this work, we introduce such a corpus for French, the first that we know of. It is comprised of 1,010 sentence pairs with grades from five annotators. We describe the annotation process, analyse these data, and perform a few experiments for the automatic grading of semantic similarity. | https://aclanthology.org/2020.lrec-1.851 | https://aclanthology.org/2020.lrec-1.851.pdf | LREC 2020 5 | [
"R{\\'e}mi Cardon",
"Natalia Grabar"
] | [
"Information Retrieval",
"Machine Translation",
"Question Answering",
"Semantic Similarity",
"Semantic Textual Similarity",
"Text Generation"
] | 1,588,291,200,000 | [] | 172,604 |
301,992 | https://paperswithcode.com/paper/generalised-bayesian-inference-for-discrete | 2206.08420 | Generalised Bayesian Inference for Discrete Intractable Likelihood | Discrete state spaces represent a major computational challenge to statistical inference, since the computation of normalisation constants requires summation over large or possibly infinite sets, which can be impractical. This paper addresses this computational challenge through the development of a novel generalised Bayesian inference procedure suitable for discrete intractable likelihood. Inspired by recent methodological advances for continuous data, the main idea is to update beliefs about model parameters using a discrete Fisher divergence, in lieu of the problematic intractable likelihood. The result is a generalised posterior that can be sampled using standard computational tools, such as Markov chain Monte Carlo, circumventing the intractable normalising constant. The statistical properties of the generalised posterior are analysed, with sufficient conditions for posterior consistency and asymptotic normality established. In addition, a novel and general approach to calibration of generalised posteriors is proposed. Applications are presented on lattice models for discrete spatial data and on multivariate models for count data, where in each case the methodology facilitates generalised Bayesian inference at low computational cost. | https://arxiv.org/abs/2206.08420v1 | https://arxiv.org/pdf/2206.08420v1.pdf | null | [
"Takuo Matsubara",
"Jeremias Knoblauch",
"François-Xavier Briol",
"Chris. J. Oates"
] | [
"Bayesian Inference"
] | 1,655,337,600,000 | [] | 17,401 |