uid (int64) | paper_url (string) | arxiv_id (string, nullable) | title (string) | abstract (string) | url_abs (string) | url_pdf (string) | proceeding (string, nullable) | authors (sequence) | tasks (sequence) | date (float64, Unix epoch in ms; nullable) | methods (list) | __index_level_0__ (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
95,392 | https://paperswithcode.com/paper/unsupervised-person-re-identification-by-soft | 1903.06325 | Unsupervised Person Re-identification by Soft Multilabel Learning | Although unsupervised person re-identification (RE-ID) has drawn increasing
research attention due to its potential to address the scalability problem of
supervised RE-ID models, it is very challenging to learn discriminative
information in the absence of pairwise labels across disjoint camera views. To
overcome this problem, we propose a deep model for the soft multilabel learning
for unsupervised RE-ID. The idea is to learn a soft multilabel (real-valued
label likelihood vector) for each unlabeled person by comparing (and
representing) the unlabeled person with a set of known reference persons from
an auxiliary domain. We propose the soft multilabel-guided hard negative mining
to learn a discriminative embedding for the unlabeled target domain by
exploring the similarity consistency of the visual features and the soft
multilabels of unlabeled target pairs. Since most target pairs are cross-view
pairs, we develop the cross-view consistent soft multilabel learning to achieve
the learning goal that the soft multilabels are consistently good across
different camera views. To enable efficient soft multilabel learning, we
introduce the reference agent learning to represent each reference person by a
reference agent in a joint embedding. We evaluate our unified deep model on
Market-1501 and DukeMTMC-reID. Our model outperforms the state-of-the-art
unsupervised RE-ID methods by clear margins. Code is available at
https://github.com/KovenYu/MAR. | http://arxiv.org/abs/1903.06325v2 | http://arxiv.org/pdf/1903.06325v2.pdf | CVPR 2019 6 | [
"Hong-Xing Yu",
"Wei-Shi Zheng",
"An-Cong Wu",
"Xiaowei Guo",
"Shaogang Gong",
"Jian-Huang Lai"
] | [
"Person Re-Identification",
"Unsupervised Person Re-Identification"
] | 1,552,608,000,000 | [] | 164,864 |
124,490 | https://paperswithcode.com/paper/resunet-an-advanced-architecture-for-medical | 1911.07067 | ResUNet++: An Advanced Architecture for Medical Image Segmentation | Accurate computer-aided polyp detection and segmentation during colonoscopy examinations can help endoscopists resect abnormal tissue and thereby decrease chances of polyps growing into cancer. Towards developing a fully automated model for pixel-wise polyp segmentation, we propose ResUNet++, which is an improved ResUNet architecture for colonoscopic image segmentation. Our experimental evaluations show that the suggested architecture produces good segmentation results on publicly available datasets. Furthermore, ResUNet++ significantly outperforms U-Net and ResUNet, two key state-of-the-art deep learning architectures, by achieving high evaluation scores with a dice coefficient of 81.33%, and a mean Intersection over Union (mIoU) of 79.27% for the Kvasir-SEG dataset and a dice coefficient of 79.55%, and a mIoU of 79.62% with CVC-612 dataset. | https://arxiv.org/abs/1911.07067v1 | https://arxiv.org/pdf/1911.07067v1.pdf | null | [
"Debesh Jha",
"Pia H. Smedsrud",
"Michael A. Riegler",
"Dag Johansen",
"Thomas de Lange",
"Pal Halvorsen",
"Havard D. Johansen"
] | [
"Colorectal Polyps Characterization",
"Image Segmentation",
"Medical Image Segmentation",
"Polyp Segmentation",
"Semantic Segmentation"
] | 1,573,862,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] | 137,548 |
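As a minimal sketch of the two segmentation metrics reported in the abstract above (dice coefficient and mean Intersection over Union), assuming binary NumPy masks; this is a generic illustration, not code from the ResUNet++ paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union = |A∩B| / |A∪B| for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# mIoU is then the mean of per-class (or per-image) IoU values.
```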
279,543 | https://paperswithcode.com/paper/exploring-the-unfairness-of-dp-sgd-across | 2202.12058 | Exploring the Unfairness of DP-SGD Across Settings | End users and regulators require private and fair artificial intelligence models, but previous work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the impact of applying the *de facto* standard approach to privacy, DP-SGD, across several fairness metrics. We evaluate three implementations of DP-SGD: for dimensionality reduction (PCA), linear classification (logistic regression), and robust deep learning (Group-DRO). We establish a negative, logarithmic correlation between privacy and fairness in the case of linear classification and robust deep learning. DP-SGD had no significant impact on fairness for PCA, but upon inspection, also did not seem to lead to private representations. | https://arxiv.org/abs/2202.12058v1 | https://arxiv.org/pdf/2202.12058v1.pdf | null | [
"Frederik Noe",
"Rasmus Herskind",
"Anders Søgaard"
] | [
"Classification",
"Dimensionality Reduction",
"Fairness"
] | 1,645,660,800,000 | [
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] | 110,758 |
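A minimal sketch of PCA via SVD, matching the description in the methods entry above (centre the data, take the leading right singular vectors); the function name and return values are illustrative assumptions, not a specific library API.

```python
import numpy as np

def pca(X, n_components):
    """Project rows of X onto the top principal components via SVD.

    X: (n_samples, n_features) data matrix.
    Returns (scores, components, explained_variance).
    """
    Xc = X - X.mean(axis=0)                      # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]               # leading right singular vectors
    scores = Xc @ components.T                   # low-dimensional representation
    explained_variance = (S[:n_components] ** 2) / (X.shape[0] - 1)
    return scores, components, explained_variance
```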
222,913 | https://paperswithcode.com/paper/spectral-temporal-graph-neural-network-for | 2106.02930 | Spectral Temporal Graph Neural Network for Trajectory Prediction | An effective understanding of the contextual environment and accurate motion forecasting of surrounding agents is crucial for the development of autonomous vehicles and social mobile robots. This task is challenging since the behavior of an autonomous agent is not only affected by its own intention, but also by the static environment and surrounding dynamically interacting agents. Previous works focused on utilizing the spatial and temporal information in time domain while not sufficiently taking advantage of the cues in frequency domain. To this end, we propose a Spectral Temporal Graph Neural Network (SpecTGNN), which can capture inter-agent correlations and temporal dependency simultaneously in frequency domain in addition to time domain. SpecTGNN operates on both an agent graph with dynamic state information and an environment graph with the features extracted from context images in two streams. The model integrates graph Fourier transform, spectral graph convolution and temporal gated convolution to encode history information and forecast future trajectories. Moreover, we incorporate a multi-head spatio-temporal attention mechanism to mitigate the effect of error propagation in a long time horizon. We demonstrate the performance of SpecTGNN on two public trajectory prediction benchmark datasets, which achieves state-of-the-art performance in terms of prediction accuracy. | https://arxiv.org/abs/2106.02930v1 | https://arxiv.org/pdf/2106.02930v1.pdf | null | [
"Defu Cao",
"Jiachen Li",
"Hengbo Ma",
"Masayoshi Tomizuka"
] | [
"Autonomous Vehicles",
"Motion Forecasting",
"Trajectory Prediction"
] | 1,622,851,200,000 | [
{
"code_snippet_url": "https://www.healthnutra.org/es/maxup/",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L551",
"description": "A **Gated Linear Unit**, or **GLU** computes:\r\n\r\n$$ \\text{GLU}\\left(a, b\\right) = a\\otimes \\sigma\\left(b\\right) $$\r\n\r\nIt is used in natural language processing architectures, for example the [Gated CNN](https://paperswithcode.com/method/gated-convolution-network), because here $b$ is the gate that control what information from $a$ is passed up to the following layer. Intuitively, for a language modeling task, the gating mechanism allows selection of words or features that are important for predicting the next word. The GLU also has non-linear capabilities, but has a linear path for the gradient so diminishes the vanishing gradient problem.",
"full_name": "Gated Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GLU",
"source_title": "Language Modeling with Gated Convolutional Networks",
"source_url": "http://arxiv.org/abs/1612.08083v3"
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Gated Convolution** is a type of temporal [convolution](https://paperswithcode.com/method/convolution) with a gating mechanism. Zero-padding is used to ensure that future context can not be seen.",
"full_name": "Gated Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Temporal Convolutions",
"parent": null
},
"name": "Gated Convolution",
"source_title": "Language Modeling with Gated Convolutional Networks",
"source_url": "http://arxiv.org/abs/1612.08083v3"
}
] | 123,478 |
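A minimal PyTorch-style sketch combining the GLU and gated (temporal) convolution entries above, i.e. out = a ⊗ σ(b) with left zero-padding so no future context is seen; the module name and hyperparameters are illustrative assumptions, not the SpecTGNN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedTemporalConv(nn.Module):
    """Causal 1-D convolution with a GLU gate: out = conv_a(x) * sigmoid(conv_b(x))."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # One convolution producing 2*channels, split into value and gate halves.
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

    def forward(self, x):                        # x: (batch, channels, time)
        x = F.pad(x, (self.kernel_size - 1, 0))  # left zero-pad: no future leakage
        a, b = self.conv(x).chunk(2, dim=1)      # value and gate halves
        return a * torch.sigmoid(b)              # GLU(a, b) = a ⊗ σ(b)

# Usage: x = torch.randn(8, 16, 50); y = GatedTemporalConv(16)(x)  # y: (8, 16, 50)
```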
19,415 | https://paperswithcode.com/paper/recurrent-neural-network-based-modeling-of | 1509.03221 | Recurrent Neural Network Based Modeling of Gene Regulatory Network Using Bat Algorithm | Correct inference of genetic regulations inside a cell is one of the greatest
challenges in the post-genomic era for biologists and researchers. Several
intelligent techniques and models have already been proposed to identify the
regulatory relations among genes from biological databases such as time series
microarray data. Recurrent Neural Network (RNN) is one of the most popular and
simple approaches to modeling the dynamics as well as inferring correct dependencies
among genes. In this paper, Bat Algorithm (BA) is applied to optimize the model
parameters of an RNN model of a Gene Regulatory Network (GRN). Initially the
proposed method is tested against a small artificial network without any noise,
and its efficiency is observed in terms of the number of iterations, the population
size, and the BA optimization parameters. The model is also validated in the
presence of different levels of random noise for the small artificial network,
which demonstrates its ability to make correct inferences in the presence of
noise, as in real-world datasets. In the next phase of this research, the BA-based RNN
is applied to a real-world benchmark time series microarray dataset of E. coli.
The results show that it is able to identify the maximum number of true
positive regulations, although it also includes some false positive regulations.
Therefore, BA is well suited to identifying biologically plausible GRNs with
the help of the RNN model. | http://arxiv.org/abs/1509.03221v2 | http://arxiv.org/pdf/1509.03221v2.pdf | null | [
"Sudip Mandal",
"Goutam Saha",
"Rajat K. Pal"
] | [
"Time Series"
] | 1,440,115,200,000 | [] | 16,639 |
51,894 | https://paperswithcode.com/paper/planning-and-synthesis-under-assumptions | 1807.06777 | Planning and Synthesis Under Assumptions | In Reasoning about Action and Planning, one synthesizes the agent plan by taking advantage of the assumption on how the environment works (that is, one exploits the environment's effects, its fairness, its trajectory constraints). In this paper we study this form of synthesis in detail. We consider assumptions as constraints on the possible strategies that the environment can have in order to respond to the agent's actions. Such constraints may be given in the form of a planning domain (or action theory), as linear-time formulas over infinite or finite runs, or as a combination of the two. We argue though that not all assumption specifications are meaningful: they need to be consistent, which means that there must exist an environment strategy fulfilling the assumption in spite of the agent actions. For such assumptions, we study how to do synthesis/planning for agent goals, ranging from a classical reachability to goal on traces specified in \LTL and \LTLf/\LDLf, characterizing the problem both mathematically and algorithmically. | https://arxiv.org/abs/1807.06777v2 | https://arxiv.org/pdf/1807.06777v2.pdf | null | [
"Benjamin Aminof",
"Giuseppe De Giacomo",
"Aniello Murano",
"Sasha Rubin"
] | [
"Fairness"
] | 1,531,872,000,000 | [] | 99,829 |
304,165 | https://paperswithcode.com/paper/understanding-instance-level-impact-of | 2206.15437 | Understanding Instance-Level Impact of Fairness Constraints | A variety of fairness constraints have been proposed in the literature to mitigate group-level statistical bias. Their impacts have been largely evaluated for different groups of populations corresponding to a set of sensitive attributes, such as race or gender. Nonetheless, the community has not observed sufficient explorations for how imposing fairness constraints fare at an instance level. Building on the concept of influence function, a measure that characterizes the impact of a training example on the target model and its predictive performance, this work studies the influence of training examples when fairness constraints are imposed. We find out that under certain assumptions, the influence function with respect to fairness constraints can be decomposed into a kernelized combination of training examples. One promising application of the proposed fairness influence function is to identify suspicious training examples that may cause model discrimination by ranking their influence scores. We demonstrate with extensive experiments that training on a subset of weighty data examples leads to lower fairness violations with a trade-off of accuracy. | https://arxiv.org/abs/2206.15437v1 | https://arxiv.org/pdf/2206.15437v1.pdf | null | [
"Jialu Wang",
"Xin Eric Wang",
"Yang Liu"
] | [
"Fairness"
] | 1,656,547,200,000 | [] | 37,998 |
153,848 | https://paperswithcode.com/paper/deep-learning-for-anomaly-detection-a-review | 2007.02500 | Deep Learning for Anomaly Detection: A Review | Anomaly detection, a.k.a. outlier detection or novelty detection, has been a lasting yet active research area in various research communities for several decades. There are still some unique problem complexities and challenges that require advanced approaches. In recent years, deep learning enabled anomaly detection, i.e., deep anomaly detection, has emerged as a critical direction. This paper surveys the research of deep anomaly detection with a comprehensive taxonomy, covering advancements in three high-level categories and 11 fine-grained categories of the methods. We review their key intuitions, objective functions, underlying assumptions, advantages and disadvantages, and discuss how they address the aforementioned challenges. We further discuss a set of possible future opportunities and new perspectives on addressing the challenges. | https://arxiv.org/abs/2007.02500v3 | https://arxiv.org/pdf/2007.02500v3.pdf | null | [
"Guansong Pang",
"Chunhua Shen",
"Longbing Cao",
"Anton Van Den Hengel"
] | [
"Anomaly Detection",
"Outlier Detection"
] | 1,593,993,600,000 | [] | 20,349 |
124,915 | https://paperswithcode.com/paper/demystifying-tasnet-a-dissecting-approach | 1911.08895 | Demystifying TasNet: A Dissecting Approach | In recent years time domain speech separation has excelled over frequency domain separation in single channel scenarios and noise-free environments. In this paper we dissect the gains of the time-domain audio separation network (TasNet) approach by gradually replacing components of an utterance-level permutation invariant training (u-PIT) based separation system in the frequency domain until the TasNet system is reached, thus blending components of frequency domain approaches with those of time domain approaches. Some of the intermediate variants achieve comparable signal-to-distortion ratio (SDR) gains to TasNet, but retain the advantage of frequency domain processing: compatibility with classic signal processing tools such as frequency-domain beamforming and the human interpretability of the masks. Furthermore, we show that the scale invariant signal-to-distortion ratio (si-SDR) criterion used as loss function in TasNet is related to a logarithmic mean square error criterion and that it is this criterion which contributes most reliable to the performance advantage of TasNet. Finally, we critically assess which gains in a noise-free single channel environment generalize to more realistic reverberant conditions. | https://arxiv.org/abs/1911.08895v2 | https://arxiv.org/pdf/1911.08895v2.pdf | null | [
"Jens Heitkaemper",
"Darius Jakobeit",
"Christoph Boeddeker",
"Lukas Drude",
"Reinhold Haeb-Umbach"
] | [
"Speech Separation"
] | 1,574,208,000,000 | [
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "Interpretability",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Interpretability",
"source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression",
"source_url": "http://arxiv.org/abs/1310.1533v2"
}
] | 4,307 |
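A minimal sketch of the scale-invariant signal-to-distortion ratio (si-SDR) discussed in the abstract above, using the commonly cited definition (zero-mean signals, least-squares scaling of the reference); this is a generic illustration, not the authors' code.

```python
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-invariant SDR in dB for two 1-D signals of equal length.

    The reference is rescaled by the optimal (least-squares) gain, so the
    metric ignores the overall scale of the estimate.
    """
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Optimal scaling of the reference onto the estimate.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))
```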
197,638 | https://paperswithcode.com/paper/overview-and-evaluation-of-sound-event | 2009.02792 | Overview and Evaluation of Sound Event Localization and Detection in DCASE 2019 | Sound event localization and detection is a novel area of research that emerged from the combined interest of analyzing the acoustic scene in terms of the spatial and temporal activity of sounds of interest. This paper presents an overview of the first international evaluation on sound event localization and detection, organized as a task of the DCASE 2019 Challenge. A large-scale realistic dataset of spatialized sound events was generated for the challenge, to be used for training of learning-based approaches, and for evaluation of the submissions in an unlabeled subset. The overview presents in detail how the systems were evaluated and ranked and the characteristics of the best-performing systems. Common strategies in terms of input features, model architectures, training approaches, exploitation of prior knowledge, and data augmentation are discussed. Since ranking in the challenge was based on individually evaluating localization and event classification performance, part of the overview focuses on presenting metrics for the joint measurement of the two, together with a reevaluation of submissions using these new metrics. The new analysis reveals submissions that performed better on the joint task of detecting the correct type of event close to its original location than some of the submissions that were ranked higher in the challenge. Consequently, ranking of submissions which performed strongly when evaluated separately on detection or localization, but not jointly on both, was affected negatively. | https://arxiv.org/abs/2009.02792v2 | https://arxiv.org/pdf/2009.02792v2.pdf | null | [
"Archontis Politis",
"Annamaria Mesaros",
"Sharath Adavanne",
"Toni Heittola",
"Tuomas Virtanen"
] | [
"Data Augmentation",
"Sound Event Localization and Detection"
] | 1,599,350,400,000 | [] | 58,217 |
197,716 | https://paperswithcode.com/paper/neural-news-recommendation-with-negative | 2101.04328 | Neural News Recommendation with Negative Feedback | News recommendation is important for online news services. Precise user interest modeling is critical for personalized news recommendation. Existing news recommendation methods usually rely on the implicit feedback of users like news clicks to model user interest. However, news click may not necessarily reflect user interests because users may click a news due to the attraction of its title but feel disappointed at its content. The dwell time of news reading is an important clue for user interest modeling, since short reading dwell time usually indicates low and even negative interest. Thus, incorporating the negative feedback inferred from the dwell time of news reading can improve the quality of user modeling. In this paper, we propose a neural news recommendation approach which can incorporate the implicit negative user feedback. We propose to distinguish positive and negative news clicks according to their reading dwell time, and respectively learn user representations from positive and negative news clicks via a combination of Transformer and additive attention network. In addition, we propose to compute a positive click score and a negative click score based on the relevance between candidate news representations and the user representations learned from the positive and negative news clicks. The final click score is a combination of positive and negative click scores. Besides, we propose an interactive news modeling method to consider the relatedness between title and body in news modeling. Extensive experiments on real-world dataset validate that our approach can achieve more accurate user interest modeling for news recommendation. | https://arxiv.org/abs/2101.04328v1 | https://arxiv.org/pdf/2101.04328v1.pdf | null | [
"Chuhan Wu",
"Fangzhao Wu",
"Yongfeng Huang",
"Xing Xie"
] | [
"News Recommendation"
] | 1,610,409,600,000 | [
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/intelpro/trajectory/blob/967131ca5c16af5f6ab09fe724eae4077e1e4596/trajectron/model/components/additive_attention.py#L6",
"description": "**Additive Attention**, also known as **Bahdanau Attention**, uses a one-hidden layer feed-forward network to calculate the attention alignment score:\r\n\r\n$$f_{att}\\left(\\textbf{h}_{i}, \\textbf{s}\\_{j}\\right) = v\\_{a}^{T}\\tanh\\left(\\textbf{W}\\_{a}\\left[\\textbf{h}\\_{i};\\textbf{s}\\_{j}\\right]\\right)$$\r\n\r\nwhere $\\textbf{v}\\_{a}$ and $\\textbf{W}\\_{a}$ are learned attention parameters. Here $\\textbf{h}$ refers to the hidden states for the encoder, and $\\textbf{s}$ is the hidden states for the decoder. The function above is thus a type of alignment score function. We can use a matrix of alignment scores to show the correlation between source and target words, as the Figure to the right shows.\r\n\r\nWithin a neural network, once we have the alignment scores, we calculate the final scores using a [softmax](https://paperswithcode.com/method/softmax) function of these alignment scores (ensuring it sums to 1).",
"full_name": "Additive Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Additive Attention",
"source_title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"source_url": "http://arxiv.org/abs/1409.0473v7"
}
] | 192,209 |
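Several method entries in the row above (Scaled Dot-Product Attention, Multi-Head Attention, Softmax) revolve around Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of that single formula (illustrative only, not the paper's implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v). Returns (n_q, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (n_q, n_k) alignment scores
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V
```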
289,432 | https://paperswithcode.com/paper/less-is-more-learning-to-refine-dialogue-1 | 2204.08128 | Less is More: Learning to Refine Dialogue History for Personalized Dialogue Generation | Personalized dialogue systems explore the problem of generating responses that are consistent with the user's personality, which has raised much attention in recent years. Existing personalized dialogue systems have tried to extract user profiles from dialogue history to guide personalized response generation. Since the dialogue history is usually long and noisy, most existing methods truncate the dialogue history to model the user's personality. Such methods can generate some personalized responses, but a large part of dialogue history is wasted, leading to sub-optimal performance of personalized response generation. In this work, we propose to refine the user dialogue history on a large scale, based on which we can handle more dialogue history and obtain more abundant and accurate persona information. Specifically, we design an MSP model which consists of three personal information refiners and a personalized response generator. With these multi-level refiners, we can sparsely extract the most valuable information (tokens) from the dialogue history and leverage other similar users' data to enhance personalization. Experimental results on two real-world datasets demonstrate the superiority of our model in generating more informative and personalized responses. | https://arxiv.org/abs/2204.08128v1 | https://arxiv.org/pdf/2204.08128v1.pdf | NAACL 2022 7 | [
"Hanxun Zhong",
"Zhicheng Dou",
"Yutao Zhu",
"Hongjin Qian",
"Ji-Rong Wen"
] | [
"Dialogue Generation",
"Response Generation"
] | 1,650,240,000,000 | [] | 144,139 |
6,346 | https://paperswithcode.com/paper/viewpoint-aware-video-summarization | 1804.02843 | Viewpoint-aware Video Summarization | This paper introduces a novel variant of video summarization, namely building
a summary that depends on the particular aspect of a video the viewer focuses
on. We refer to this as $\textit{viewpoint}$. To infer what the desired
$\textit{viewpoint}$ may be, we assume that several other videos are available,
especially groups of videos, e.g., as folders on a person's phone or laptop.
The semantic similarity between videos in a group vs. the dissimilarity between
groups is used to produce $\textit{viewpoint}$-specific summaries. To
account for similarity while avoiding redundancy, the output summary should be
(A) diverse, (B) representative of videos in the same group, and (C)
discriminative against videos in different groups. To satisfy these
requirements (A)-(C) simultaneously, we propose a novel video summarization
method that operates on multiple groups of videos. Inspired by Fisher's discriminant
criteria, it selects a summary by optimizing a combination of three terms: (a)
inner-summary, (b) inner-group, and (c) between-group variances defined on the
feature representation of the summary, which directly capture (A)-(C).
Moreover, we develop a novel dataset to investigate how well the generated
summary reflects the underlying $\textit{viewpoint}$. Quantitative and
qualitative experiments conducted on the dataset demonstrate the effectiveness
of the proposed method. | http://arxiv.org/abs/1804.02843v2 | http://arxiv.org/pdf/1804.02843v2.pdf | CVPR 2018 6 | [
"Atsushi Kanehira",
"Luc van Gool",
"Yoshitaka Ushiku",
"Tatsuya Harada"
] | [
"Semantic Similarity",
"Semantic Textual Similarity",
"Video Summarization"
] | 1,523,232,000,000 | [] | 57,347 |
315,897 | https://paperswithcode.com/paper/learning-based-and-unrolled-motion | 2209.03671 | Learning-based and unrolled motion-compensated reconstruction for cardiac MR CINE imaging | Motion-compensated MR reconstruction (MCMR) is a powerful concept with considerable potential, consisting of two coupled sub-problems: Motion estimation, assuming a known image, and image reconstruction, assuming known motion. In this work, we propose a learning-based self-supervised framework for MCMR, to efficiently deal with non-rigid motion corruption in cardiac MR imaging. Contrary to conventional MCMR methods in which the motion is estimated prior to reconstruction and remains unchanged during the iterative optimization process, we introduce a dynamic motion estimation process and embed it into the unrolled optimization. We establish a cardiac motion estimation network that leverages temporal information via a group-wise registration approach, and carry out a joint optimization between the motion estimation and reconstruction. Experiments on 40 acquired 2D cardiac MR CINE datasets demonstrate that the proposed unrolled MCMR framework can reconstruct high quality MR images at high acceleration rates where other state-of-the-art methods fail. We also show that the joint optimization mechanism is mutually beneficial for both sub-tasks, i.e., motion estimation and image reconstruction, especially when the MR image is highly undersampled. | https://arxiv.org/abs/2209.03671v1 | https://arxiv.org/pdf/2209.03671v1.pdf | null | [
"Jiazhen Pan",
"Daniel Rueckert",
"Thomas Küstner",
"Kerstin Hammernik"
] | [
"Image Reconstruction",
"Motion Estimation"
] | 1,662,595,200,000 | [] | 171,794 |
43,792 | https://paperswithcode.com/paper/bayesian-cp-factorization-of-incomplete | 1401.6497 | Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination | CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful
technique for tensor completion through explicitly capturing the multilinear
latent factors. The existing CP algorithms require the tensor rank to be
manually specified; however, the determination of tensor rank remains a
challenging problem, especially for CP rank. In addition, existing approaches do
not take into account uncertainty information about latent factors or
missing entries. To address these issues, we formulate CP factorization using a
hierarchical probabilistic model and employ a fully Bayesian treatment by
incorporating a sparsity-inducing prior over multiple latent factors and the
appropriate hyperpriors over all hyperparameters, resulting in automatic rank
determination. To learn the model, we develop an efficient deterministic
Bayesian inference algorithm, which scales linearly with data size. Our method
is characterized as a tuning parameter-free approach, which can effectively
infer underlying multilinear factors with a low-rank constraint, while also
providing predictive distributions over missing entries. Extensive simulations
on synthetic data illustrate the intrinsic capability of our method to recover
the ground-truth of CP rank and prevent the overfitting problem, even when a
large number of entries are missing. Moreover, the results from real-world
applications, including image inpainting and facial image synthesis,
demonstrate that our method outperforms state-of-the-art approaches for both
tensor factorization and tensor completion in terms of predictive performance. | http://arxiv.org/abs/1401.6497v2 | http://arxiv.org/pdf/1401.6497v2.pdf | null | [
"Qibin Zhao",
"Liqing Zhang",
"Andrzej Cichocki"
] | [
"Bayesian Inference",
"Image Generation",
"Image Inpainting"
] | 1,390,608,000,000 | [] | 96,747 |
74,411 | https://paperswithcode.com/paper/execution-guided-neural-program-synthesis | null | Execution-Guided Neural Program Synthesis | Neural program synthesis from input-output examples has attracted an increasing interest from both the machine learning and the programming language community. Most existing neural program synthesis approaches employ an encoder-decoder architecture, which uses an encoder to compute the embedding of the given input-output examples, as well as a decoder to generate the program from the embedding following a given syntax. Although such approaches achieve a reasonable performance on simple tasks such as FlashFill, on more complex tasks such as Karel, the state-of-the-art approach can only achieve an accuracy of around 77%. We observe that the main drawback of existing approaches is that the semantic information is greatly under-utilized. In this work, we propose two simple yet principled techniques to better leverage the semantic information, which are execution-guided synthesis and synthesizer ensemble. These techniques are general enough to be combined with any existing encoder-decoder-style neural program synthesizer. Applying our techniques to the Karel dataset, we can boost the accuracy from around 77% to more than 90%. | https://openreview.net/forum?id=H1gfOiAqYm | https://openreview.net/pdf?id=H1gfOiAqYm | ICLR 2019 5 | [
"Xinyun Chen",
"Chang Liu",
"Dawn Song"
] | [
"Program Synthesis"
] | 1,556,668,800,000 | [] | 151,092 |
279,548 | https://paperswithcode.com/paper/fine-grained-tls-services-classification-with | 2202.11984 | Fine-grained TLS Services Classification with Reject Option | The recent success and proliferation of machine learning and deep learning have provided powerful tools, which are also utilized for encrypted traffic analysis, classification, and threat detection. These methods, neural networks in particular, are often complex and require a huge corpus of training data. Therefore, this paper focuses on collecting a large up-to-date dataset with almost 200 fine-grained service labels and 140 million network flows extended with packet-level metadata. The number of flows is three orders of magnitude higher than in other existing public labeled datasets of encrypted traffic. The number of service labels, which is important to make the problem hard and realistic, is four times higher than in the public dataset with the most class labels. The published dataset is intended as a benchmark for identifying services in encrypted traffic. Service identification can be further extended with the task of "rejecting" unknown services, i.e., the traffic not seen during the training phase. Neural networks offer superior performance for tackling this more challenging problem. To showcase the dataset's usefulness, we implemented a neural network with a multi-modal architecture, which is the state-of-the-art approach, and achieved 97.04% classification accuracy and detected 91.94% of unknown services with 5% false positive rate. | https://arxiv.org/abs/2202.11984v1 | https://arxiv.org/pdf/2202.11984v1.pdf | null | [
"Jan Luxemburk",
"Tomáš Čejka"
] | [
"Classification"
] | 1,645,660,800,000 | [] | 37,225 |
314,701 | https://paperswithcode.com/paper/few-shot-learning-for-clinical-natural | 2208.14923 | Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks | Clinical Natural Language Processing (NLP) has become an emerging technology in healthcare that leverages a large amount of free-text data in electronic health records (EHRs) to improve patient care, support clinical decisions, and facilitate clinical and translational science research. Deep learning has achieved state-of-the-art performance in many clinical NLP tasks. However, training deep learning models usually require large annotated datasets, which are normally not publicly available and can be time-consuming to build in clinical domains. Working with smaller annotated datasets is typical in clinical NLP and therefore, ensuring that deep learning models perform well is crucial for the models to be used in real-world applications. A widely adopted approach is fine-tuning existing Pre-trained Language Models (PLMs), but these attempts fall short when the training dataset contains only a few annotated samples. Few-Shot Learning (FSL) has recently been investigated to tackle this problem. Siamese Neural Network (SNN) has been widely utilized as an FSL approach in computer vision, but has not been studied well in NLP. Furthermore, the literature on its applications in clinical domains is scarce. In this paper, we propose two SNN-based FSL approaches for clinical NLP, including pre-trained SNN (PT-SNN) and SNN with second-order embeddings (SOE-SNN). We evaluated the proposed approaches on two clinical tasks, namely clinical text classification and clinical named entity recognition. We tested three few-shot settings including 4-shot, 8-shot, and 16-shot learning. Both clinical NLP tasks were benchmarked using three PLMs, including BERT, BioBERT, and BioClinicalBERT. The experimental results verified the effectiveness of the proposed SNN-based FSL approaches in both clinical NLP tasks. | https://arxiv.org/abs/2208.14923v1 | https://arxiv.org/pdf/2208.14923v1.pdf | null | [
"David Oniani",
"Sonish Sivarajkumar",
"Yanshan Wang"
] | [
"Few-Shot Learning",
"Named Entity Recognition",
"Named Entity Recognition",
"Text Classification",
"Text Classification"
] | 1,661,904,000,000 | [
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L507",
"description": "**Scaled Exponential Linear Units**, or **SELUs**, are activation functions that induce self-normalizing properties.\r\n\r\nThe SELU activation function is given by \r\n\r\n$$f\\left(x\\right) = \\lambda{x} \\text{ if } x \\geq{0}$$\r\n$$f\\left(x\\right) = \\lambda{\\alpha\\left(\\exp\\left(x\\right) -1 \\right)} \\text{ if } x < 0 $$\r\n\r\nwith $\\alpha \\approx 1.6733$ and $\\lambda \\approx 1.0507$.",
"full_name": "Scaled Exponential Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "SELU",
"source_title": "Self-Normalizing Neural Networks",
"source_url": "http://arxiv.org/abs/1706.02515v5"
},
{
"code_snippet_url": "",
"description": "**Self-normalizing neural networks** (**SNNs**) are a type of neural architecture that aim to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are “scaled exponential linear units” (SELUs), which induce self-normalizing properties. Using the Banach fixed point theorem, it's possible to prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance — even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization schemes, and (3) to make learning highly robust.",
"full_name": "Self-Normalizing Neural Networks",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Consists of tabular data learning approaches that use deep learning architectures for learning on tabular data. According to the taxonomy in [V.Borisov et al. (2021)](https://paperswithcode.com/paper/deep-neural-networks-and-tabular-data-a), deep learning approaches for tabular data can be categorized into:\r\n\r\n- **Regularization models**\r\n- **Transformer-based models**: [TabNet](/method/tabnet), [TabTransformer](/method/tabtransformer), [SAINT](/method/saint), [ARM-Net](/method/arm-net),...\r\n- **Hybrid models** (fully differentiable and partly differentiable): [Wide&Deep](/method/wide-deep), [TabNN](/method/tabnn), [NON](/method/non), [Boost-GNN](/method/boost-gnn), [NODE](/method/node),...\r\n- **Data encoding methods** (single-dimensional encoding and multi-dimensional encoding): [VIME](/method/vime), [SCARF](/method/scarf),...",
"name": "Deep Tabular Learning",
"parent": null
},
"name": "SNN",
"source_title": "Self-Normalizing Neural Networks",
"source_url": "http://arxiv.org/abs/1706.02515v5"
}
] | 157,486 |
289,297 | https://paperswithcode.com/paper/joint-multi-view-unsupervised-feature | 2204.08247 | Joint Multi-view Unsupervised Feature Selection and Graph Learning | Despite the recent progress, the existing multi-view unsupervised feature selection methods mostly suffer from two limitations. First, they generally utilize either cluster structure or similarity structure to guide the feature selection, neglecting the possibility of a joint formulation with mutual benefits. Second, they often learn the similarity structure by either global structure learning or local structure learning, lacking the capability of graph learning with both global and local structural awareness. In light of this, this paper presents a joint multi-view unsupervised feature selection and graph learning (JMVFG) approach. Particularly, we formulate the multi-view feature selection with orthogonal decomposition, where each target matrix is decomposed into a view-specific basis matrix and a view-consistent cluster indicator. Cross-space locality preservation is incorporated to bridge the cluster structure learning in the projected space and the similarity learning (i.e., graph learning) in the original space. Further, a unified objective function is presented to enable the simultaneous learning of the cluster structure, the global and local similarity structures, and the multi-view consistency and inconsistency, upon which an alternating optimization algorithm is developed with theoretically proved convergence. Extensive experiments demonstrate the superiority of our approach for both multi-view feature selection and graph learning tasks. | https://arxiv.org/abs/2204.08247v1 | https://arxiv.org/pdf/2204.08247v1.pdf | null | [
"Si-Guo Fang",
"Dong Huang",
"Chang-Dong Wang",
"Yong Tang"
] | [
"Graph Learning"
] | 1,650,240,000,000 | [] | 7,643 |
12,445 | https://paperswithcode.com/paper/improved-inception-residual-convolutional | 1712.09888 | Improved Inception-Residual Convolutional Neural Network for Object Recognition | Machine learning and computer vision have driven many of the greatest
advances in the modeling of Deep Convolutional Neural Networks (DCNNs).
Nowadays, most of the research has been focused on improving recognition
accuracy with better DCNN models and learning approaches. The recurrent
convolutional approach is not applied very much, other than in a few DCNN
architectures. On the other hand, Inception-v4 and Residual networks have
promptly become popular among the computer vision community. In this paper, we
introduce a new DCNN model called the Inception Recurrent Residual
Convolutional Neural Network (IRRCNN), which utilizes the power of the
Recurrent Convolutional Neural Network (RCNN), the Inception network, and the
Residual network. This approach improves the recognition accuracy of the
Inception-residual network with the same number of network parameters. In addition,
this proposed architecture generalizes the Inception network, the RCNN, and the
Residual network with significantly improved training accuracy. We have
empirically evaluated the performance of the IRRCNN model on different
benchmarks including CIFAR-10, CIFAR-100, TinyImageNet-200, and CU3D-100. The
experimental results show higher recognition accuracy compared with most of the
popular DCNN models including the RCNN. We have also investigated the
performance of the IRRCNN approach against the Equivalent Inception Network
(EIN) and the Equivalent Inception Residual Network (EIRN) counterpart on the
CIFAR-100 dataset. We report around 4.53%, 4.49% and 3.56% improvement in
classification accuracy compared with the RCNN, EIN, and EIRN on the CIFAR-100
dataset, respectively. Furthermore, experiments have been conducted on the
TinyImageNet-200 and CU3D-100 datasets where the IRRCNN provides better testing
accuracy compared to the Inception Recurrent CNN (IRCNN), the EIN, and the
EIRN. | http://arxiv.org/abs/1712.09888v1 | http://arxiv.org/pdf/1712.09888v1.pdf | null | [
"Md Zahangir Alom",
"Mahmudul Hasan",
"Chris Yakopcic",
"Tarek M. Taha",
"Vijayan K. Asari"
] | [
"Object Recognition"
] | 1,514,419,200,000 | [
{
"code_snippet_url": "",
"description": "Diffusion-convolutional neural networks (DCNN) is a model for graph-structured data. Through the introduction of a diffusion-convolution operation, diffusion-based representations can be learned from graph structured data and used as an effective basis for node classification.\r\n\r\nDescription and image from: [Diffusion-Convolutional Neural Networks](https://arxiv.org/pdf/1511.02136.pdf)",
"full_name": "Diffusion-Convolutional Neural Networks",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "DCNN",
"source_title": "Diffusion-Convolutional Neural Networks",
"source_url": "http://arxiv.org/abs/1511.02136v6"
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://www.healthnutra.org/es/maxup/",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/kentsommer/keras-inceptionV4/blob/ef1db6f09b6511779c05fab47d374741bc89b5ee/inception_v4.py#L156",
"description": "**Inception-C** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.",
"full_name": "Inception-C",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Inception-C",
"source_title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"source_url": "http://arxiv.org/abs/1602.07261v2"
},
{
"code_snippet_url": "https://github.com/kentsommer/keras-inceptionV4/blob/ef1db6f09b6511779c05fab47d374741bc89b5ee/inception_v4.py#L111",
"description": "**Inception-B** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.",
"full_name": "Inception-B",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Inception-B",
"source_title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"source_url": "http://arxiv.org/abs/1602.07261v2"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/kentsommer/keras-inceptionV4/blob/ef1db6f09b6511779c05fab47d374741bc89b5ee/inception_v4.py#L71",
"description": "**Inception-A** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.",
"full_name": "Inception-A",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Inception-A",
"source_title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"source_url": "http://arxiv.org/abs/1602.07261v2"
},
{
"code_snippet_url": "https://github.com/Cadene/pretrained-models.pytorch/blob/8aae3d8f1135b6b13fed79c1d431e3449fdbf6e0/pretrainedmodels/models/inceptionresnetv2.py#L120",
"description": "**Reduction-A** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.",
"full_name": "Reduction-A",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Reduction-A",
"source_title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"source_url": "http://arxiv.org/abs/1602.07261v2"
},
{
"code_snippet_url": "https://github.com/kentsommer/keras-inceptionV4/blob/ef1db6f09b6511779c05fab47d374741bc89b5ee/inception_v4.py#L136",
"description": "**Reduction-B** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.",
"full_name": "Reduction-B",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Reduction-B",
"source_title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"source_url": "http://arxiv.org/abs/1602.07261v2"
},
{
"code_snippet_url": "https://github.com/kentsommer/keras-inceptionV4/blob/ef1db6f09b6511779c05fab47d374741bc89b5ee/inception_v4.py#L242",
"description": "**Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3).",
"full_name": "Inception-v4",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Inception-v4",
"source_title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"source_url": "http://arxiv.org/abs/1602.07261v2"
}
] | 26,415 |
239,765 | https://paperswithcode.com/paper/dro-a-data-scarce-mechanism-to-revolutionize | 2109.05470 | DRo: A data-scarce mechanism to revolutionize the performance of Deep Learning based Security Systems | Supervised Deep Learning requires plenty of labeled data to converge, and hence perform optimally for task-specific learning. Therefore, we propose a novel mechanism named DRo (for Deep Routing) for data-scarce domains like security. The DRo approach builds upon some of the recent developments in Deep-Clustering. In particular, it exploits the self-augmented training mechanism using synthetically generated local perturbations. DRo not only allays the challenges with sparse-labeled data but also offers many unique advantages. We also developed a system named DRoID that uses the DRo mechanism for enhancing the performance of an existing Malware Detection System that uses (low information features like the) Android implicit Intent(s) as the only features. We conduct experiments on DRoID using a popular and standardized Android malware dataset and found that the DRo mechanism could successfully reduce the false-alarms generated by the downstream classifier by 67.9%, and also simultaneously boosts its accuracy by 11.3%. This is significant not only because the gains achieved are unparalleled but also because the features used were never considered rich enough to train a classifier on; and hence no decent performance could ever be reported by any malware classification system till-date using these features in isolation. Owing to the results achieved, the DRo mechanism claims a dominant position amongst all known systems that aims to enhance the classification performance of deep learning models with sparse-labeled data. | https://arxiv.org/abs/2109.05470v1 | https://arxiv.org/pdf/2109.05470v1.pdf | null | [
"Mohit Sewak",
"Sanjay K. Sahay",
"Hemant Rathore"
] | [
"Deep Clustering",
"Malware Classification",
"Malware Detection"
] | 1,631,404,800,000 | [] | 154,614 |
281,336 | https://paperswithcode.com/paper/highly-accurate-dichotomous-image | 2203.03041 | Highly Accurate Dichotomous Image Segmentation | We present a systematic study on a new task called dichotomous image segmentation (DIS) , which aims to segment highly accurate objects from natural images. To this end, we collected the first large-scale DIS dataset, called DIS5K, which contains 5,470 high-resolution (e.g., 2K, 4K or larger) images covering camouflaged, salient, or meticulous objects in various backgrounds. DIS is annotated with extremely fine-grained labels. Besides, we introduce a simple intermediate supervision baseline (IS-Net) using both feature-level and mask-level guidance for DIS model training. IS-Net outperforms various cutting-edge baselines on the proposed DIS5K, making it a general self-learned supervision network that can facilitate future research in DIS. Further, we design a new metric called human correction efforts (HCE) which approximates the number of mouse clicking operations required to correct the false positives and false negatives. HCE is utilized to measure the gap between models and real-world applications and thus can complement existing metrics. Finally, we conduct the largest-scale benchmark, evaluating 16 representative segmentation models, providing a more insightful discussion regarding object complexities, and showing several potential applications (e.g., background removal, art design, 3D reconstruction). Hoping these efforts can open up promising directions for both academic and industries. Project page: https://xuebinqin.github.io/dis/index.html. | https://arxiv.org/abs/2203.03041v4 | https://arxiv.org/pdf/2203.03041v4.pdf | null | [
"Xuebin Qin",
"Hang Dai",
"Xiaobin Hu",
"Deng-Ping Fan",
"Ling Shao",
"and Luc Van Gool"
] | [
"3D Reconstruction",
"Image Segmentation",
"Semantic Segmentation"
] | 1,646,524,800,000 | [] | 189,432 |
101,972 | https://paperswithcode.com/paper/a-research-and-strategy-of-remote-sensing | 1905.10236 | A Research and Strategy of Remote Sensing Image Denoising Algorithms | Most raw data downloaded from satellites are useless, resulting in transmission waste; one solution is to process data directly on satellites and then transmit only the processed results to the ground. Image processing is the main data processing task on satellites; in this paper, we focus on image denoising, which is a basic image processing operation. There are many high-performance denoising approaches at present; however, most of them rely on advanced computing resources or rich images on the ground. Considering the limited computing resources of satellites and the characteristics of remote sensing images, we study these high-performance ground image denoising approaches and compare them in simulation experiments to analyze whether they are suitable for satellites. According to the analysis results, we propose two feasible image denoising strategies for satellites, based on the satellite TianZhi-1. | https://arxiv.org/abs/1905.10236v1 | https://arxiv.org/pdf/1905.10236v1.pdf | null | [
"Ling Li",
"Junxing Hu",
"Fengge Wu",
"Junsuo Zhao"
] | [
"Denoising",
"Image Denoising"
] | 1,558,656,000,000 | [] | 29,110 |
211,371 | https://paperswithcode.com/paper/a-task-motion-planning-framework-using | 2104.01549 | A Task-Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks | We present an approach for Task-Motion Planning (TMP) using Iterative Deepened AND/OR Graph Networks (TMP-IDAN) that uses an AND/OR graph network based novel abstraction for compactly representing the task-level states and actions. While retrieving a target object from clutter, the number of object re-arrangements required to grasp the target is not known ahead of time. To address this challenge, in contrast to traditional AND/OR graph-based planners, we grow the AND/OR graph online until the target grasp is feasible and thereby obtain a network of AND/OR graphs. The AND/OR graph network allows faster computations than traditional task planners. We validate our approach and evaluate its capabilities using a Baxter robot and a state-of-the-art robotics simulator in several challenging non-trivial cluttered table-top scenarios. The experiments show that our approach is readily scalable to increasing number of objects and different degrees of clutter. | https://arxiv.org/abs/2104.01549v1 | https://arxiv.org/pdf/2104.01549v1.pdf | null | [
"Hossein Karami",
"Antony Thomas",
"Fulvio Mastrogiovanni"
] | [
"Motion Planning"
] | 1,617,494,400,000 | [] | 181,388 |
106,814 | https://paperswithcode.com/paper/3d-geometric-salient-patterns-analysis-on-3d | 1906.07645 | 3D Geometric salient patterns analysis on 3D meshes | Pattern analysis is a broad domain with wide applicability in many fields. In fact, texture analysis is one of those fields, since a texture is defined as a set of repetitive or quasi-repetitive patterns. Despite its importance in analyzing 3D meshes, geometric texture analysis is less studied by the geometry processing community. This paper presents a new efficient approach for geometric texture analysis on 3D triangular meshes. The proposed method is a scale-aware approach that takes as input a 3D mesh and a user-scale. It provides, as a result, a similarity-based clustering of texels into meaningful classes. Experimental results of the proposed algorithm are presented for both real-world and synthetic meshes with various textures. Furthermore, the efficiency of the proposed approach was experimentally demonstrated under mesh simplification and noise addition on the mesh surface. In this paper, we present a practical application for semantic annotation of 3D geometric salient texels. | https://arxiv.org/abs/1906.07645v1 | https://arxiv.org/pdf/1906.07645v1.pdf | null | [
"Alice Othmani",
"Fakhri Torkhani",
"Jean-Marie Favreau"
] | [
"Texture Classification"
] | 1,560,816,000,000 | [] | 64,579 |
2,946 | https://paperswithcode.com/paper/sparse-binary-compression-towards-distributed | 1805.08768 | Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication | Currently, progressively larger deep neural networks are trained on ever
growing data corpora. As this trend is only going to increase in the future,
distributed training schemes are becoming increasingly relevant. A major issue
in distributed training is the limited communication bandwidth between
contributing nodes or prohibitive communication cost in general. These
challenges become even more pressing, as the number of computation nodes
increases. To counteract this development, we propose sparse binary compression
(SBC), a compression framework that allows for a drastic reduction of
communication cost for distributed training. SBC combines existing techniques
of communication delay and gradient sparsification with a novel binarization
method and optimal weight update encoding to push compression gains to new
limits. By doing so, our method also allows us to smoothly trade-off gradient
sparsity and temporal sparsity to adapt to the requirements of the learning
task. Our experiments show that SBC can reduce the upstream communication on a
variety of convolutional and recurrent neural network architectures by more
than four orders of magnitude without significantly harming the convergence
speed in terms of forward-backward passes. For instance, we can train ResNet50
on ImageNet to the baseline accuracy in the same number of iterations, using
$\times 3531$ fewer bits, or train it to a $1\%$ lower accuracy using $\times
37208$ fewer bits. In the latter case, the total upstream communication required
is cut from 125 terabytes to 3.35 gigabytes for every participating client. | http://arxiv.org/abs/1805.08768v1 | http://arxiv.org/pdf/1805.08768v1.pdf | null | [
"Felix Sattler",
"Simon Wiedemann",
"Klaus-Robert Müller",
"Wojciech Samek"
] | [
"Binarization"
] | 1,526,947,200,000 | [
{
"code_snippet_url": "",
"description": "**Gradient Sparsification** is a technique for distributed training that sparsifies stochastic gradients to reduce the communication cost, with minor increase in the number of iterations. The key idea behind our sparsification technique is to drop some coordinates of the stochastic gradient and appropriately amplify the remaining coordinates to ensure the unbiasedness of the sparsified stochastic gradient. The sparsification approach can significantly reduce the coding length of the stochastic gradient and only slightly increase the variance of the stochastic gradient.",
"full_name": "Gradient Sparsification",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "This section contains a compilation of distributed methods for scaling deep learning to very large models. There are many different strategies for scaling training across multiple devices, including:\r\n\r\n - [Data Parallel](https://paperswithcode.com/methods/category/data-parallel-methods) : for each node we use the same model parameters to do forward propagation, but we send a small batch of different data to each node, compute the gradient normally, and send it back to the main node. Once we have all the gradients, we calculate the weighted average and use this to update the model parameters.\r\n\r\n - [Model Parallel](https://paperswithcode.com/methods/category/model-parallel-methods) : for each node we assign different layers to it. During forward propagation, we start in the node with the first layers, then move onto the next, and so on. Once forward propagation is done we calculate gradients for the last node, and update model parameters for that node. Then we backpropagate onto the penultimate node, update the parameters, and so on.\r\n\r\n - Additional methods including [Hybrid Parallel](https://paperswithcode.com/methods/category/hybrid-parallel-methods), [Auto Parallel](https://paperswithcode.com/methods/category/auto-parallel-methods), and [Distributed Communication](https://paperswithcode.com/methods/category/distributed-communication).\r\n\r\nImage credit: [Jordi Torres](https://towardsdatascience.com/scalable-deep-learning-on-parallel-and-distributed-infrastructures-e5fb4a956bef).",
"name": "Distributed Methods",
"parent": null
},
"name": "Gradient Sparsification",
"source_title": "Gradient Sparsification for Communication-Efficient Distributed Optimization",
"source_url": "http://arxiv.org/abs/1710.09854v1"
}
] | 80,635 |
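A minimal sketch of the top-$k$ gradient sparsification idea that SBC builds on, added here for illustration. It keeps only the largest-magnitude entries and returns the dropped residual (which communication-efficient schemes typically accumulate locally and add back before the next step); it does not implement SBC's binarization, communication delay, or optimal weight-update encoding, and the sparsity fraction is an arbitrary example value.

```python
import numpy as np

def topk_sparsify(grad, fraction=0.01):
    """Keep only the largest-magnitude `fraction` of gradient entries.

    Returns the sparse gradient and the dropped residual.
    """
    flat = grad.ravel()
    k = max(1, int(fraction * flat.size))
    threshold = np.partition(np.abs(flat), -k)[-k]   # k-th largest magnitude
    mask = np.abs(grad) >= threshold
    sparse = np.where(mask, grad, 0.0)
    residual = grad - sparse                         # kept locally, not transmitted
    return sparse, residual

rng = np.random.default_rng(0)
g = rng.normal(size=(1000,))
sparse_g, residual = topk_sparsify(g, fraction=0.01)
print("non-zeros sent:", np.count_nonzero(sparse_g), "out of", g.size)
```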
307,286 | https://paperswithcode.com/paper/every-preference-changes-differently-neural | 2207.06652 | Every Preference Changes Differently: Neural Multi-Interest Preference Model with Temporal Dynamics for Recommendation | User embeddings (vectorized representations of a user) are essential in recommendation systems. Numerous approaches have been proposed to construct a representation for the user in order to find similar items for retrieval tasks, and they have been proven effective in industrial recommendation systems as well. Recently people have discovered the power of using multiple embeddings to represent a user, with the hope that each embedding represents the user's interest in a certain topic. With multi-interest representation, it's important to model the user's preference over the different topics and how the preference change with time. However, existing approaches either fail to estimate the user's affinity to each interest or unreasonably assume every interest of every user fades with an equal rate with time, thus hurting the recall of candidate retrieval. In this paper, we propose the Multi-Interest Preference (MIP) model, an approach that not only produces multi-interest for users by using the user's sequential engagement more effectively but also automatically learns a set of weights to represent the preference over each embedding so that the candidates can be retrieved from each interest proportionally. Extensive experiments have been done on various industrial-scale datasets to demonstrate the effectiveness of our approach. | https://arxiv.org/abs/2207.06652v2 | https://arxiv.org/pdf/2207.06652v2.pdf | null | [
"Hui Shi",
"Yupeng Gu",
"Yitong Zhou",
"Bo Zhao",
"Sicun Gao",
"Jishen Zhao"
] | [
"Recommendation Systems"
] | 1,657,756,800,000 | [] | 3,386 |
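An illustrative sketch, not the MIP model itself: scoring candidate items against several user interest embeddings, with a preference weight per interest deciding how much each interest contributes to retrieval. The shapes, example weights, and the max-over-interests scoring rule are assumptions chosen for clarity.

```python
import numpy as np

def multi_interest_scores(interests, weights, items):
    """Score candidate items against K user interest embeddings.

    interests: (K, d) interest embeddings; weights: (K,) preference weights;
    items: (N, d) candidate embeddings. Each item is scored by its
    best-matching interest, scaled by the user's preference for that interest.
    """
    sims = items @ interests.T                 # (N, K) dot-product similarities
    weighted = sims * weights[None, :]         # scale by per-interest preference
    best_interest = weighted.argmax(axis=1)    # which interest retrieves the item
    return weighted.max(axis=1), best_interest

rng = np.random.default_rng(1)
K, d, N = 3, 16, 5
interests = rng.normal(size=(K, d))
weights = np.array([0.6, 0.3, 0.1])            # e.g. preference over 3 topics
items = rng.normal(size=(N, d))
scores, sources = multi_interest_scores(interests, weights, items)
print(scores.round(2), sources)
```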
213,595 | https://paperswithcode.com/paper/partition-of-unity-methods-for-signal | 2012.10636 | Partition of Unity Methods for Signal Processing on Graphs | Partition of unity methods (PUMs) on graphs are simple and highly adaptive auxiliary tools for graph signal processing. Based on a greedy-type metric clustering and augmentation scheme, we show how a partition of unity can be generated in an efficient way on graphs. We investigate how PUMs can be combined with a local graph basis function (GBF) approximation method in order to obtain low-cost global interpolation or classification schemes. From a theoretical point of view, we study necessary prerequisites for the partition of unity such that global error estimates of the PUM follow from corresponding local ones. Finally, properties of the PUM as cost-efficiency and approximation accuracy are investigated numerically. | https://arxiv.org/abs/2012.10636v1 | https://arxiv.org/pdf/2012.10636v1.pdf | null | [
"Roberto Cavoretto",
"Alessandra De Rossi",
"Wolfgang Erb"
] | [
"Unity"
] | 1,608,336,000,000 | [] | 125,959 |
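A minimal sketch of a partition of unity built by normalizing compactly supported bump weights around cluster centers, assuming Euclidean points rather than the graph setting of the paper; the Wendland-type bump, the centers, and the radius are illustrative choices.

```python
import numpy as np

def partition_of_unity(points, centers, radius):
    """Normalized compactly supported bump weights.

    Each row sums to 1 wherever at least one center covers the point,
    which is exactly the partition-of-unity property.
    """
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    r = np.clip(d / radius, 0.0, 1.0)
    phi = (1.0 - r) ** 4 * (4.0 * r + 1.0)         # Wendland-type bump, zero outside radius
    total = phi.sum(axis=1, keepdims=True)
    return phi / np.where(total > 0, total, 1.0)   # avoid division by zero off-support

points = np.random.default_rng(2).uniform(0, 1, size=(200, 2))
centers = np.array([[0.25, 0.25], [0.75, 0.25], [0.5, 0.75]])
W = partition_of_unity(points, centers, radius=0.6)
covered = W.sum(axis=1) > 0
print("covered rows sum to 1:", np.allclose(W[covered].sum(axis=1), 1.0))
```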
273,466 | https://paperswithcode.com/paper/are-your-sensitive-attributes-private-novel | 2201.09370 | Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models | Increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnoses, lifestyle predictions, and business decisions highlights the need to better understand if these ML technologies are introducing leakage of sensitive and proprietary training data. In this paper, we focus on model inversion attacks where the adversary knows non-sensitive attributes about records in the training data and aims to infer the value of a sensitive attribute unknown to the adversary, using only black-box access to the target classification model. We first devise a novel confidence score-based model inversion attribute inference attack that significantly outperforms the state-of-the-art. We then introduce a label-only model inversion attack that relies only on the model's predicted labels but still matches our confidence score-based attack in terms of attack effectiveness. We also extend our attacks to the scenario where some of the other (non-sensitive) attributes of a target record are unknown to the adversary. We evaluate our attacks on two types of machine learning models, decision tree and deep neural network, trained on three real datasets. Moreover, we empirically demonstrate the disparate vulnerability of model inversion attacks, i.e., specific groups in the training dataset (grouped by gender, race, etc.) could be more vulnerable to model inversion attacks. | https://arxiv.org/abs/2201.09370v1 | https://arxiv.org/pdf/2201.09370v1.pdf | null | [
"Shagufta Mehnaz",
"Sayanton V. Dibbo",
"Ehsanul Kabir",
"Ninghui Li",
"Elisa Bertino"
] | [
"Inference Attack"
] | 1,642,896,000,000 | [] | 96,219 |
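An editorial toy version of a confidence-score attribute inference loop against a black-box classifier: try each candidate value of the sensitive attribute and keep the one that makes the model most confident in the record's known label. The stand-in model, field names, and candidate values are made up for illustration and do not reproduce the paper's attacks or its label-only variant.

```python
# Sketch of a confidence-score attribute inference attack: try every candidate
# value of the sensitive attribute, query the black-box model, and pick the
# value that gives the target record's known label the highest confidence.
def infer_sensitive_attribute(predict_proba, record, sensitive_key,
                              candidate_values, known_label):
    best_value, best_confidence = None, -1.0
    for value in candidate_values:
        probe = dict(record, **{sensitive_key: value})  # fill in a guess
        confidence = predict_proba(probe)[known_label]
        if confidence > best_confidence:
            best_value, best_confidence = value, confidence
    return best_value, best_confidence

# Stand-in black box: a toy model whose output depends on the hidden attribute.
def toy_predict_proba(record):
    p_positive = 0.9 if record["marital"] == "married" else 0.3
    return {0: 1.0 - p_positive, 1: p_positive}

record = {"age": 40, "marital": None}             # sensitive field unknown to the adversary
guess, conf = infer_sensitive_attribute(
    toy_predict_proba, record, "marital",
    candidate_values=["single", "married", "divorced"], known_label=1)
print(guess, round(conf, 2))                      # -> married 0.9
```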
132,908 | https://paperswithcode.com/paper/rdfframes-knowledge-graph-access-for-machine | 2002.03614 | RDFFrames: Knowledge Graph Access for Machine Learning Tools | Knowledge graphs represented as RDF datasets are integral to many machine learning applications. RDF is supported by a rich ecosystem of data management systems and tools, most notably RDF database systems that provide a SPARQL query interface. Surprisingly, machine learning tools for knowledge graphs do not use SPARQL, despite the obvious advantages of using a database system. This is due to the mismatch between SPARQL and machine learning tools in terms of data model and programming style. Machine learning tools work on data in tabular format and process it using an imperative programming style, while SPARQL is declarative and has as its basic operation matching graph patterns to RDF triples. We posit that a good interface to knowledge graphs from a machine learning software stack should use an imperative, navigational programming paradigm based on graph traversal rather than the SPARQL query paradigm based on graph patterns. In this paper, we present RDFFrames, a framework that provides such an interface. RDFFrames provides an imperative Python API that gets internally translated to SPARQL, and it is integrated with the PyData machine learning software stack. RDFFrames enables the user to make a sequence of Python calls to define the data to be extracted from a knowledge graph stored in an RDF database system, and it translates these calls into a compact SPQARL query, executes it on the database system, and returns the results in a standard tabular format. Thus, RDFFrames is a useful tool for data preparation that combines the usability of PyData with the flexibility and performance of RDF database systems. | https://arxiv.org/abs/2002.03614v4 | https://arxiv.org/pdf/2002.03614v4.pdf | null | [
"Aisha Mohamed",
"Ghadeer Abuoda",
"Abdurrahman Ghanem",
"Zoi Kaoudi",
"Ashraf Aboulnaga"
] | [
"Knowledge Graphs"
] | 1,581,292,800,000 | [] | 185,838 |
29,111 | https://paperswithcode.com/paper/flood-filling-networks | 1611.00421 | Flood-Filling Networks | State-of-the-art image segmentation algorithms generally consist of at least
two successive and distinct computations: a boundary detection process that
uses local image information to classify image locations as boundaries between
objects, followed by a pixel grouping step such as watershed or connected
components that clusters pixels into segments. Prior work has varied the
complexity and approach employed in these two steps, including the
incorporation of multi-layer neural networks to perform boundary prediction,
and the use of global optimizations during pixel clustering. We propose a
unified and end-to-end trainable machine learning approach, flood-filling
networks, in which a recurrent 3d convolutional network directly produces
individual segments from a raw image. The proposed approach robustly segments
images with an unknown and variable number of objects as well as highly
variable object sizes. We demonstrate the approach on a challenging 3d image
segmentation task, connectomic reconstruction from volume electron microscopy
data, on which flood-filling neural networks substantially improve accuracy
over other state-of-the-art methods. The proposed approach can replace complex
multi-step segmentation pipelines with a single neural network that is learned
end-to-end. | http://arxiv.org/abs/1611.00421v1 | http://arxiv.org/pdf/1611.00421v1.pdf | null | [
"Michał Januszewski",
"Jeremy Maitin-Shepard",
"Peter Li",
"Jörgen Kornfeld",
"Winfried Denk",
"Viren Jain"
] | [
"Boundary Detection",
"Image Segmentation",
"Semantic Segmentation"
] | 1,477,958,400,000 | [] | 168,532 |
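For context on the name: the classical flood fill that the method alludes to is a breadth-first growth of one segment from a seed pixel. The sketch below is that classical routine on a toy binary mask, not the recurrent 3D convolutional network proposed in the paper.

```python
from collections import deque

def flood_fill(mask, seed):
    """Grow a single segment from `seed` over 4-connected foreground pixels.

    `mask` is a 2D grid (list of lists) with truthy foreground values; returns
    the set of (row, col) pixels in the seed's connected component.
    """
    h, w = len(mask), len(mask[0])
    segment, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in segment:
                segment.add((ny, nx))
                frontier.append((ny, nx))
    return segment

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
print(sorted(flood_fill(mask, (0, 0))))   # left component: 4 pixels
```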
240,693 | https://paperswithcode.com/paper/breaking-the-corpus-bottleneck-for-context | null | Breaking the Corpus Bottleneck for Context-Aware Neural Machine Translation with Cross-Task Pre-training | Context-aware neural machine translation (NMT) remains challenging due to the lack of large-scale document-level parallel corpora. To break the corpus bottleneck, in this paper we aim to improve context-aware NMT by taking the advantage of the availability of both large-scale sentence-level parallel dataset and source-side monolingual documents. To this end, we propose two pre-training tasks. One learns to translate a sentence from source language to target language on the sentence-level parallel dataset while the other learns to translate a document from deliberately noised to original on the monolingual documents. Importantly, the two pre-training tasks are jointly and simultaneously learned via the same model, thereafter fine-tuned on scale-limited parallel documents from both sentence-level and document-level perspectives. Experimental results on four translation tasks show that our approach significantly improves translation performance. One nice property of our approach is that the fine-tuned model can be used to translate both sentences and documents. | https://aclanthology.org/2021.acl-long.222 | https://aclanthology.org/2021.acl-long.222.pdf | ACL 2021 5 | [
"Linqing Chen",
"Junhui Li",
"ZhengXian Gong",
"Boxing Chen",
"Weihua Luo",
"Min Zhang",
"Guodong Zhou"
] | [
"Machine Translation"
] | 1,627,776,000,000 | [] | 25,239 |
157,032 | https://paperswithcode.com/paper/sadet-learning-an-efficient-and-accurate | 2007.13119 | SADet: Learning An Efficient and Accurate Pedestrian Detector | Although the anchor-based detectors have taken a big step forward in pedestrian detection, the overall performance of algorithm still needs further improvement for practical applications, \emph{e.g.}, a good trade-off between the accuracy and efficiency. To this end, this paper proposes a series of systematic optimization strategies for the detection pipeline of one-stage detector, forming a single shot anchor-based detector (SADet) for efficient and accurate pedestrian detection, which includes three main improvements. Firstly, we optimize the sample generation process by assigning soft tags to the outlier samples to generate semi-positive samples with continuous tag value between $0$ and $1$, which not only produces more valid samples, but also strengthens the robustness of the model. Secondly, a novel Center-$IoU$ loss is applied as a new regression loss for bounding box regression, which not only retains the good characteristics of IoU loss, but also solves some defects of it. Thirdly, we also design Cosine-NMS for the postprocess of predicted bounding boxes, and further propose adaptive anchor matching to enable the model to adaptively match the anchor boxes to full or visible bounding boxes according to the degree of occlusion, making the NMS and anchor matching algorithms more suitable for occluded pedestrian detection. Though structurally simple, it presents state-of-the-art result and real-time speed of $20$ FPS for VGA-resolution images ($640 \times 480$) on challenging pedestrian detection benchmarks, i.e., CityPersons, Caltech, and human detection benchmark CrowdHuman, leading to a new attractive pedestrian detector. | https://arxiv.org/abs/2007.13119v1 | https://arxiv.org/pdf/2007.13119v1.pdf | null | [
"Chubin Zhuang",
"Zhen Lei",
"Stan Z. Li"
] | [
"Human Detection",
"Pedestrian Detection",
"TAG"
] | 1,595,721,600,000 | [] | 156,944 |
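A reference sketch of the standard IoU computation and greedy NMS that the paper's Center-IoU loss and Cosine-NMS modify; the modifications themselves are not reproduced here, and the boxes, scores, and threshold are toy values.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, boxes given as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 20], [1, 1, 11, 21], [30, 30, 40, 50]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # the two overlapping boxes collapse to one detection
```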
139,415 | https://paperswithcode.com/paper/sindy-pi-a-robust-algorithm-for-parallel | 2004.02322 | SINDy-PI: A Robust Algorithm for Parallel Implicit Sparse Identification of Nonlinear Dynamics | Accurately modeling the nonlinear dynamics of a system from measurement data is a challenging yet vital topic. The sparse identification of nonlinear dynamics (SINDy) algorithm is one approach to discover dynamical systems models from data. Although extensions have been developed to identify implicit dynamics, or dynamics described by rational functions, these extensions are extremely sensitive to noise. In this work, we develop SINDy-PI (parallel, implicit), a robust variant of the SINDy algorithm to identify implicit dynamics and rational nonlinearities. The SINDy-PI framework includes multiple optimization algorithms and a principled approach to model selection. We demonstrate the ability of this algorithm to learn implicit ordinary and partial differential equations and conservation laws from limited and noisy data. In particular, we show that the proposed approach is several orders of magnitude more noise robust than previous approaches, and may be used to identify a class of complex ODE and PDE dynamics that were previously unattainable with SINDy, including for the double pendulum dynamics and the Belousov Zhabotinsky (BZ) reaction. | https://arxiv.org/abs/2004.02322v2 | https://arxiv.org/pdf/2004.02322v2.pdf | null | [
"Kadierdan Kaheman",
"J. Nathan Kutz",
"Steven L. Brunton"
] | [
"Model Selection"
] | 1,586,044,800,000 | [] | 33,297 |
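An illustrative numpy sketch of the sequentially thresholded least-squares step at the core of the original (explicit) SINDy, which SINDy-PI extends to implicit and rational dynamics. It assumes time derivatives are available and noise-free, and the toy linear system, candidate library, and threshold are arbitrary choices.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iterations=10):
    """Sequentially thresholded least squares: sparse Xi with Theta @ Xi ~ dX/dt."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iterations):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):             # refit the surviving terms per state
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# Toy system: dx/dt = -2x + y, dy/dt = -0.5y, sampled at random states.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
dX = np.column_stack([-2 * X[:, 0] + X[:, 1], -0.5 * X[:, 1]])
theta = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])  # candidate library
print(stlsq(theta, dX, threshold=0.1).round(3))    # recovers the sparse coefficients
```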
268,239 | https://paperswithcode.com/paper/formal-estimation-of-collision-risks-for | 2112.07187 | Formal Estimation of Collision Risks for Autonomous Vehicles: A Compositional Data-Driven Approach | In this work, we propose a compositional data-driven approach for the formal estimation of collision risks for autonomous vehicles (AVs) while acting in a stochastic multi-agent framework. The proposed approach is based on the construction of sub-barrier certificates for each stochastic agent via a set of data collected from its trajectories while providing an a-priori guaranteed confidence on the data-driven estimation. In our proposed setting, we first cast the original collision risk problem for each agent as a robust optimization program (ROP). Solving the acquired ROP is not tractable due to an unknown model that appears in one of its constraints. To tackle this difficulty, we collect finite numbers of data from trajectories of each agent and provide a scenario optimization program (SOP) corresponding to the original ROP. We then establish a probabilistic bridge between the optimal value of SOP and that of ROP, and accordingly, we formally construct the sub-barrier certificate for each unknown agent based on the number of data and a required level of confidence. We then propose a compositional technique based on small-gain reasoning to quantify the collision risk for multi-agent AVs with some desirable confidence based on sub-barrier certificates of individual agents constructed from data. For the case that the proposed compositionality conditions are not satisfied, we provide a relaxed version of compositional results without requiring any compositionality conditions but at the cost of providing a potentially conservative collision risk. Eventually, we also present our approaches for non-stochastic multi-agent AVs. We demonstrate the effectiveness of our proposed results by applying them to a vehicle platooning consisting of 100 vehicles with 1 leader and 99 followers. We formally estimate the collision risk by collecting data from trajectories of each agent. | https://arxiv.org/abs/2112.07187v2 | https://arxiv.org/pdf/2112.07187v2.pdf | null | [
"Abolfazl Lavaei",
"Luigi Di Lillo",
"Andrea Censi",
"Emilio Frazzoli"
] | [
"Autonomous Vehicles"
] | 1,639,440,000,000 | [] | 3,106 |
125,893 | https://paperswithcode.com/paper/indirect-local-attacks-for-context-aware | 1911.13038 | Indirect Local Attacks for Context-aware Semantic Segmentation Networks | Recently, deep networks have achieved impressive semantic segmentation performance, in particular thanks to their use of larger contextual information. In this paper, we show that the resulting networks are sensitive not only to global attacks, where perturbations affect the entire input image, but also to indirect local attacks where perturbations are confined to a small image region that does not overlap with the area that we aim to fool. To this end, we introduce several indirect attack strategies, including adaptive local attacks, aiming to find the best image location to perturb, and universal local attacks. Furthermore, we propose attack detection techniques both for the global image level and to obtain a pixel-wise localization of the fooled regions. Our results are unsettling: Because they exploit a larger context, more accurate semantic segmentation networks are more sensitive to indirect local attacks. | https://arxiv.org/abs/1911.13038v2 | https://arxiv.org/pdf/1911.13038v2.pdf | ECCV 2020 8 | [
"Krishna Kanth Nakka",
"Mathieu Salzmann"
] | [
"Semantic Segmentation"
] | 1,574,985,600,000 | [] | 46,835 |
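A toy illustration of the notion of a *local* (mask-confined) adversarial perturbation, using a hand-written logistic model so the gradient is available in closed form; this is a single FGSM-style step, not the adaptive or universal local attacks on segmentation networks studied in the paper, and every dimension, mask, and epsilon value is an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_fgsm(x, y, w, b, mask, eps=0.3):
    """One FGSM step confined to `mask`: only entries inside the mask move.

    Model: p(y=1|x) = sigmoid(w.x + b); the gradient of the cross-entropy loss
    w.r.t. x is (p - y) * w, so the step is eps * sign((p - y) * w) * mask.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x) * mask

rng = np.random.default_rng(0)
d = 100
w, b = rng.normal(size=d), 0.0
x = rng.normal(size=d)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0       # treat the current prediction as the label
mask = np.zeros(d)
mask[:20] = 1.0                                    # perturb only a "local" region
x_adv = local_fgsm(x, y, w, b, mask, eps=0.5)
print("clean score:", round(float(sigmoid(w @ x + b)), 3),
      "attacked score:", round(float(sigmoid(w @ x_adv + b)), 3))
```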
77,697 | https://paperswithcode.com/paper/articulated-and-restricted-motion-subspaces | null | Articulated and Restricted Motion Subspaces and Their Signatures | Articulated objects represent an important class of objects in our everyday environment. Automatic detection of the type of articulated or otherwise restricted motion and extraction of the corresponding motion parameters are therefore of high value, e.g. in order to augment an otherwise static 3D reconstruction with dynamic semantics, such as rotation axes and allowable translation directions for certain rigid parts or objects. Hence, in this paper, a novel theory to analyse relative transformations between two motion-restricted parts will be presented. The analysis is based on linear subspaces spanned by relative transformations. Moreover, a signature for relative transformations will be introduced which uniquely specifies the type of restricted motion encoded in these relative transformations. This theoretic framework enables the derivation of novel algebraic constraints, such as low-rank constraints for subsequent rotations around two fixed axes for example. Lastly, given the type of restricted motion as predicted by the signature, the paper shows how to extract all the motion parameters with matrix manipulations from linear algebra. Our theory is verified on several real data sets, such as a rotating blackboard or a wheel rolling on the floor amongst others. | http://openaccess.thecvf.com/content_cvpr_2013/html/Jacquet_Articulated_and_Restricted_2013_CVPR_paper.html | http://openaccess.thecvf.com/content_cvpr_2013/papers/Jacquet_Articulated_and_Restricted_2013_CVPR_paper.pdf | CVPR 2013 6 | [
"Bastien Jacquet",
"Roland Angst",
"Marc Pollefeys"
] | [
"3D Reconstruction"
] | 1,370,044,800,000 | [] | 9,825 |
286,044 | https://paperswithcode.com/paper/sit-a-bionic-and-non-linear-neuron-for | 2203.16117 | SIT: A Bionic and Non-Linear Neuron for Spiking Neural Network | Spiking Neural Networks (SNNs) have piqued researchers' interest because of their capacity to process temporal information and low power consumption. However, current state-of-the-art methods limited their biological plausibility and performance because their neurons are generally built on the simple Leaky-Integrate-and-Fire (LIF) model. Due to the high level of dynamic complexity, modern neuron models have seldom been implemented in SNN practice. In this study, we adopt the Phase Plane Analysis (PPA) technique, a technique often utilized in neurodynamics field, to integrate a recent neuron model, namely, the Izhikevich neuron. Based on the findings in the advancement of neuroscience, the Izhikevich neuron model can be biologically plausible while maintaining comparable computational cost with LIF neurons. By utilizing the adopted PPA, we have accomplished putting neurons built with the modified Izhikevich model into SNN practice, dubbed as the Standardized Izhikevich Tonic (SIT) neuron. For performance, we evaluate the suggested technique for image classification tasks in self-built LIF-and-SIT-consisted SNNs, named Hybrid Neural Network (HNN) on static MNIST, Fashion-MNIST, CIFAR-10 datasets and neuromorphic N-MNIST, CIFAR10-DVS, and DVS128 Gesture datasets. The experimental results indicate that the suggested method achieves comparable accuracy while exhibiting more biologically realistic behaviors on nearly all test datasets, demonstrating the efficiency of this novel strategy in bridging the gap between neurodynamics and SNN practice. | https://arxiv.org/abs/2203.16117v2 | https://arxiv.org/pdf/2203.16117v2.pdf | null | [
"Cheng Jin",
"Rui-Jie Zhu",
"Xiao Wu",
"Liang-Jian Deng"
] | [
"Image Classification"
] | 1,648,598,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L507",
"description": "**Scaled Exponential Linear Units**, or **SELUs**, are activation functions that induce self-normalizing properties.\r\n\r\nThe SELU activation function is given by \r\n\r\n$$f\\left(x\\right) = \\lambda{x} \\text{ if } x \\geq{0}$$\r\n$$f\\left(x\\right) = \\lambda{\\alpha\\left(\\exp\\left(x\\right) -1 \\right)} \\text{ if } x < 0 $$\r\n\r\nwith $\\alpha \\approx 1.6733$ and $\\lambda \\approx 1.0507$.",
"full_name": "Scaled Exponential Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "SELU",
"source_title": "Self-Normalizing Neural Networks",
"source_url": "http://arxiv.org/abs/1706.02515v5"
},
{
"code_snippet_url": "",
"description": "**Self-normalizing neural networks** (**SNNs**) are a type of neural architecture that aim to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are “scaled exponential linear units” (SELUs), which induce self-normalizing properties. Using the Banach fixed point theorem, it's possible to prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance — even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization schemes, and (3) to make learning highly robust.",
"full_name": "Self-Normalizing Neural Networks",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Consists of tabular data learning approaches that use deep learning architectures for learning on tabular data. According to the taxonomy in [V.Borisov et al. (2021)](https://paperswithcode.com/paper/deep-neural-networks-and-tabular-data-a), deep learning approaches for tabular data can be categorized into:\r\n\r\n- **Regularization models**\r\n- **Transformer-based models**: [TabNet](/method/tabnet), [TabTransformer](/method/tabtransformer), [SAINT](/method/saint), [ARM-Net](/method/arm-net),...\r\n- **Hybrid models** (fully differentiable and partly differentiable): [Wide&Deep](/method/wide-deep), [TabNN](/method/tabnn), [NON](/method/non), [Boost-GNN](/method/boost-gnn), [NODE](/method/node),...\r\n- **Data encoding methods** (single-dimensional encoding and multi-dimensional encoding): [VIME](/method/vime), [SCARF](/method/scarf),...",
"name": "Deep Tabular Learning",
"parent": null
},
"name": "SNN",
"source_title": "Self-Normalizing Neural Networks",
"source_url": "http://arxiv.org/abs/1706.02515v5"
}
] | 116,656 |
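For reference alongside the record above, a plain Euler simulation of the textbook Izhikevich neuron with regular-spiking parameters (a=0.02, b=0.2, c=-65, d=8); the paper's SIT modifications via phase-plane analysis are not reproduced, and the step size and input current are illustrative.

```python
import numpy as np

def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler simulation of the standard Izhikevich neuron (regular spiking).

    I is the input current per time step; returns the membrane-potential trace
    and the step indices at which the neuron spiked (v crossed 30 mV and was reset).
    """
    v, u = -65.0, b * -65.0
    trace, spikes = [], []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike: record and reset
            spikes.append(t)
            v, u = c, u + d
        trace.append(v)
    return np.array(trace), spikes

current = np.concatenate([np.zeros(100), 10.0 * np.ones(400)])  # step input
trace, spikes = izhikevich(current)
print(f"{len(spikes)} spikes in response to the current step")
```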
307,647 | https://paperswithcode.com/paper/alexu-aic-at-arabic-hate-speech-2022-contrast | 2207.08557 | AlexU-AIC at Arabic Hate Speech 2022: Contrast to Classify | Online presence on social media platforms such as Facebook and Twitter has become a daily habit for internet users. Despite the vast amount of services the platforms offer for their users, users suffer from cyber-bullying, which further leads to mental abuse and may escalate to cause physical harm to individuals or targeted groups. In this paper, we present our submission to the Arabic Hate Speech 2022 Shared Task Workshop (OSACT5 2022) using the associated Arabic Twitter dataset. The shared task consists of 3 sub-tasks, sub-task A focuses on detecting whether the tweet is offensive or not. Then, For offensive Tweets, sub-task B focuses on detecting whether the tweet is hate speech or not. Finally, For hate speech Tweets, sub-task C focuses on detecting the fine-grained type of hate speech among six different classes. Transformer models proved their efficiency in classification tasks, but with the problem of over-fitting when fine-tuned on a small or an imbalanced dataset. We overcome this limitation by investigating multiple training paradigms such as Contrastive learning and Multi-task learning along with Classification fine-tuning and an ensemble of our top 5 performers. Our proposed solution achieved 0.841, 0.817, and 0.476 macro F1-average in sub-tasks A, B, and C respectively. | https://arxiv.org/abs/2207.08557v1 | https://arxiv.org/pdf/2207.08557v1.pdf | null | [
"Ahmad Shapiro",
"Ayman Khalafallah",
"Marwan Torki"
] | [
"Contrastive Learning",
"Multi-Task Learning"
] | 1,658,102,400,000 | [
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] | 33,791 |
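Since several of the method notes attached to this record describe scaled dot-product attention, here is a minimal single-head numpy sketch of that formula, $\text{Attention}(Q,K,V)=\text{softmax}(QK^{T}/\sqrt{d_k})V$; the sequence length and dimensions are arbitrary, and no multi-head projection matrices are included.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)     # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_k, d_v = 5, 16, 8
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_v))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.sum(axis=-1).round(3))    # (5, 8); attention rows sum to 1
```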
100,961 | https://paperswithcode.com/paper/comparison-of-machine-learning-models-in-food | 1905.07302 | Comparison of Machine Learning Models in Food Authentication Studies | The underlying objective of food authentication studies is to determine whether unknown food samples have been correctly labelled. In this paper we study three near infrared (NIR) spectroscopic datasets from food samples of different types: meat samples (labelled by species), olive oil samples (labelled by their geographic origin) and honey samples (labelled as pure or adulterated by different adulterants). We apply and compare a large number of classification, dimension reduction and variable selection approaches to these datasets. NIR data pose specific challenges to classification and variable selection: the datasets are high - dimensional where the number of cases ($n$) $<<$ number of features ($p$) and the recorded features are highly serially correlated. In this paper we carry out comparative analysis of different approaches and find that partial least squares, a classic tool employed for these types of data, outperforms all the other approaches considered. | https://arxiv.org/abs/1905.07302v1 | https://arxiv.org/pdf/1905.07302v1.pdf | null | [
"Manokamna Singh",
"Katarina Domijan"
] | [
"Dimensionality Reduction",
"Classification",
"Variable Selection"
] | 1,558,051,200,000 | [] | 74,256 |
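An editorial sketch of partial least squares used for classification (PLS-DA), the family of models reported as the best performer above, on a synthetic stand-in for NIR spectra with many more features than samples. The data generator, number of components, and scikit-learn usage are illustrative assumptions, not the paper's datasets or protocol.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR spectra: many more (smooth, correlated) features
# than samples, with a class-dependent spectral shape per "species".
rng = np.random.default_rng(0)
n, p = 120, 600
labels = rng.integers(0, 3, size=n)
grid = np.linspace(0, 4 * np.pi, p)
templates = np.stack([np.sin(grid + shift) for shift in (0.0, 0.7, 1.4)])
X = templates[labels] + rng.normal(scale=0.5, size=(n, p))
Y = np.eye(3)[labels]                                   # one-hot targets for PLS-DA

X_tr, X_te, y_tr, y_te, Y_tr, Y_te = train_test_split(
    X, labels, Y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)                 # predicted class = largest response
print("PLS-DA test accuracy:", round(float((pred == y_te).mean()), 2))
```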
191,958 | https://paperswithcode.com/paper/mimo-ilc-for-precision-sea-robots-using-input | 2010.04487 | MIMO ILC for Precision SEA robots using Input-weighted Complex-Kernel Regression | This work improves the positioning precision of lightweight robots with
series elastic actuators (SEAs). Lightweight SEA robots, along with
low-impedance control, can maneuver without causing damage in uncertain,
confined spaces such as inside an aircraft wing during aircraft assembly.
Nevertheless, substantial modeling uncertainties in SEA robots reduce the
precision achieved by model-based approaches such as inversion-based
feedforward. Therefore, this article improves the precision of SEA robots
around specified operating points, through a multi-input multi-output (MIMO),
iterative learning control (ILC) approach. The main contributions of this
article are to (i) introduce an input-weighted complex kernel to estimate local
MIMO models using complex Gaussian process regression (c-GPR); (ii) develop
Ger\v{s}gorin-theorem-based conditions on the iteration gains for ensuring ILC
convergence to precision within noise-related limits, even with errors in the
estimated model; and (iii) demonstrate precision positioning with an
experimental SEA robot. Comparative experimental results, with and without ILC,
show around 90% improvement in the positioning precision (close to the
repeatability limit of the robot) and a 10-times increase in the SEA robot's
operating speed with the use of the MIMO ILC. | http://arxiv.org/abs/2010.04487v2 | http://arxiv.org/pdf/2010.04487v2.pdf | null | [] | [
"GPR"
] | 1,603,411,200,000 | [
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] | 16,733 |
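A minimal numpy sketch of standard Gaussian process regression with an RBF kernel, the real-valued analogue of the complex-kernel c-GPR described in the record and method note above; the kernel, noise level, and data are illustrative, and the input-weighted complex kernel itself is not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 l^2))."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2.0 * length_scale ** 2))

def gp_predict(X_train, y_train, X_test, noise=1e-2, length_scale=1.0):
    """GP posterior mean and variance for noisy observations y = f(x) + eps."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.clip(np.diag(cov), 0.0, None)   # clip tiny negative variances

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(25, 1))
y_train = np.sin(X_train[:, 0]) + 0.05 * rng.normal(size=25)
X_test = np.linspace(-3, 3, 7)[:, None]
mean, var = gp_predict(X_train, y_train, X_test)
print(np.round(mean, 2), np.round(np.sqrt(var), 3))
```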
258,466 | https://paperswithcode.com/paper/impact-of-benign-modifications-on | 2111.07468 | Impact of Benign Modifications on Discriminative Performance of Deepfake Detectors | Deepfakes are becoming increasingly popular in both good faith applications such as in entertainment and maliciously intended manipulations such as in image and video forgery. Primarily motivated by the latter, a large number of deepfake detectors have been proposed recently in order to identify such content. While the performance of such detectors still need further improvements, they are often assessed in simple if not trivial scenarios. In particular, the impact of benign processing operations such as transcoding, denoising, resizing and enhancement are not sufficiently studied. This paper proposes a more rigorous and systematic framework to assess the performance of deepfake detectors in more realistic situations. It quantitatively measures how and to which extent each benign processing approach impacts a state-of-the-art deepfake detection method. By illustrating it in a popular deepfake detector, our benchmark proposes a framework to assess robustness of detectors and provides valuable insights to design more efficient deepfake detectors. | https://arxiv.org/abs/2111.07468v1 | https://arxiv.org/pdf/2111.07468v1.pdf | null | [
"Yuhang Lu",
"Evgeniy Upenik",
"Touradj Ebrahimi"
] | [
"DeepFake Detection",
"Denoising",
"Face Swapping"
] | 1,636,848,000,000 | [] | 181,932 |
314,463 | https://paperswithcode.com/paper/learning-6d-pose-estimation-from-synthetic | 2208.14288 | Learning 6D Pose Estimation from Synthetic RGBD Images for Robotic Applications | In this work, we propose a data generation pipeline by leveraging the 3D suite Blender to produce synthetic RGBD image datasets with 6D poses for robotic picking. The proposed pipeline can efficiently generate large amounts of photo-realistic RGBD images for the object of interest. In addition, a collection of domain randomization techniques is introduced to bridge the gap between real and synthetic data. Furthermore, we develop a real-time two-stage 6D pose estimation approach by integrating the object detector YOLO-V4-tiny and the 6D pose estimation algorithm PVN3D for time sensitive robotics applications. With the proposed data generation pipeline, our pose estimation approach can be trained from scratch using only synthetic data without any pre-trained models. The resulting network shows competitive performance compared to state-of-the-art methods when evaluated on LineMod dataset. We also demonstrate the proposed approach in a robotic experiment, grasping a household object from cluttered background under different lighting conditions. | https://arxiv.org/abs/2208.14288v1 | https://arxiv.org/pdf/2208.14288v1.pdf | null | [
"Hongpeng Cao",
"Lukas Dirnberger",
"Daniele Bernardini",
"Cristina Piazza",
"Marco Caccamo"
] | [
"6D Pose Estimation",
"Pose Estimation"
] | 1,661,817,600,000 | [
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": "https://github.com/facebookresearch/detectron2/blob/bb9f5d8e613358519c9865609ab3fe7b6571f2ba/detectron2/layers/roi_align.py#L51",
"description": "**Region of Interest Align**, or **RoIAlign**, is an operation for extracting a small feature map from each RoI in detection and segmentation based tasks. It removes the harsh quantization of [RoI Pool](https://paperswithcode.com/method/roi-pooling), properly *aligning* the extracted features with the input. To avoid any quantization of the RoI boundaries or bins (using $x/16$ instead of $[x/16]$), RoIAlign uses bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and the result is then aggregated (using max or average).",
"full_name": "RoIAlign",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIAlign",
"source_title": "Mask R-CNN",
"source_url": "http://arxiv.org/abs/1703.06870v3"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Blender** is a proposal-based instance mask generation module which incorporates rich instance-level information with accurate dense pixel features. A single [convolution](https://paperswithcode.com/method/convolution) layer is added on top of the detection towers to produce attention masks along with each bounding box prediction. For each predicted instance, the blender crops predicted bases with its bounding box and linearly combines them according the learned attention maps.\r\n\r\nThe inputs of the blender module are bottom-level bases $\\mathbf{B}$, the selected top-level attentions $A$ and bounding box proposals $P$. First [RoIPool](https://paperswithcode.com/method/roi-pooling) of Mask R-CNN to crop bases with each proposal $\\mathbf{p}\\_{d}$ and then resize the region to a fixed size $R \\times R$ feature map $\\mathbf{r}\\_{d}$\r\n\r\n$$\r\n\\mathbf{r}\\_{d}=\\operatorname{RoIPool}_{R \\times R}\\left(\\mathbf{B}, \\mathbf{p}\\_{d}\\right), \\quad \\forall d \\in\\{1 \\ldots D\\}\r\n$$\r\n\r\nMore specifically, asampling ratio 1 is used for [RoIAlign](https://paperswithcode.com/method/roi-align), i.e. one bin for each sampling point. During training, ground truth boxes are used as the proposals. During inference, [FCOS](https://paperswithcode.com/method/fcos) prediction results are used.\r\n\r\nThe attention size $M$ is smaller than $R$. We interpolate $\\mathbf{a}\\_{d}$ from $M \\times M$ to $R \\times R$, into the shapes of $R=\\left\\(\\mathbf{r}\\_{d} \\mid d=1 \\ldots D\\right)$\r\n\r\n$$\r\n\\mathbf{a}\\_{d}^{\\prime}=\\text { interpolate }\\_{M \\times M \\rightarrow R \\times R}\\left(\\mathbf{a}\\_{d}\\right), \\quad \\forall d \\in\\{1 \\ldots D\\}\r\n$$\r\n\r\nThen $\\mathbf{a}\\_{d}^{\\prime}$ is normalized with a softmax function along the $K$ dimension to make it a set of score maps $\\mathbf{s}\\_{d}$.\r\n\r\n$$\r\n\\mathbf{s}\\_{d}=\\operatorname{softmax}\\left(\\mathbf{a}\\_{d}^{\\prime}\\right), \\quad \\forall d \\in\\{1 \\ldots D\\}\r\n$$\r\n\r\nThen we apply element-wise product between each entity $\\mathbf{r}\\_{d}, \\mathbf{s}\\_{d}$ of the regions $R$ and scores $S$, and sum along the $K$ dimension to get our mask logit $\\mathbf{m}\\_{d}:$\r\n\r\n$$\r\n\\mathbf{m}\\_{d}=\\sum\\_{k=1}^{K} \\mathbf{s}\\_{d}^{k} \\circ \\mathbf{r}\\_{d}^{k}, \\quad \\forall d \\in\\{1 \\ldots D\\}\r\n$$\r\n\r\nwhere $k$ is the index of the basis. The mask blending process with $K=4$ is visualized in the Figure.",
"full_name": "Blender",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Blender",
"source_title": "BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation",
"source_url": "https://arxiv.org/abs/2001.00309v3"
}
] | 94,421 |
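The RoIPool and RoIAlign entries above describe cropping fixed-size features from regions of interest; the code_snippet_url fields point to the torchvision implementations. Below is a minimal sketch calling those existing operators on a random feature map and a single box, just to make the shapes concrete; the feature map, box coordinates and hyper-parameters are placeholders, not values from the paper.

```python
# Minimal sketch of RoIPool vs RoIAlign using the torchvision operators
# referenced in the method entries above.
import torch
from torchvision.ops import roi_pool, roi_align

features = torch.randn(1, 256, 50, 50)           # (N, C, H, W) feature map
# One RoI in (batch_index, x1, y1, x2, y2) format, in feature-map coordinates.
rois = torch.tensor([[0.0, 4.3, 10.7, 30.2, 42.9]])

pooled  = roi_pool(features, rois, output_size=(7, 7), spatial_scale=1.0)
aligned = roi_align(features, rois, output_size=(7, 7), spatial_scale=1.0,
                    sampling_ratio=2, aligned=True)

print(pooled.shape, aligned.shape)   # both: torch.Size([1, 256, 7, 7])
```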
206,709 | https://paperswithcode.com/paper/z-distance-function-for-knn-classification | 2103.09704 | Z Distance Function for KNN Classification | This paper proposes a new distance metric function, called Z distance, for KNN classification. The Z distance function is not a geometric direct-line distance between two data points. It gives a consideration to the class attribute of a training dataset when measuring the affinity between data points. Concretely speaking, the Z distance of two data points includes their class center distance and real distance. And its shape looks like "Z". In this way, the affinity of two data points in the same class is always stronger than that in different classes. Or, the intraclass data points are always closer than those interclass data points. We evaluated the Z distance with experiments, and demonstrated that the proposed distance function achieved better performance in KNN classification. | https://arxiv.org/abs/2103.09704v1 | https://arxiv.org/pdf/2103.09704v1.pdf | null | [
"Shichao Zhang",
"Jiaye Li"
] | [
"Classification",
"Classification"
] | 1,615,939,200,000 | [] | 25,232 |
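The abstract above describes the Z distance as combining a class-centre distance with the direct distance between points. The sketch below is one illustrative reading of that idea inside a KNN classifier, not the authors' exact formulation; `alpha` is a hypothetical mixing weight and labels are assumed to be integers 0..C-1.

```python
# Sketch of a KNN classifier whose affinity mixes point-to-point distance with
# distance to the class centre of each training point (an interpretation of the
# "Z distance" idea described in the abstract above).
import numpy as np

def z_distance(x, x_train, y_train, centers, alpha=1.0):
    """Distance from query x to every training point, biased by class centres."""
    real = np.linalg.norm(x_train - x, axis=1)              # direct distance
    center = np.linalg.norm(centers[y_train] - x, axis=1)   # class-centre distance
    return real + alpha * center

def knn_predict(x, x_train, y_train, k=5, alpha=1.0):
    classes = np.unique(y_train)                            # assumed 0..C-1
    centers = np.stack([x_train[y_train == c].mean(axis=0) for c in classes])
    d = z_distance(x, x_train, y_train, centers, alpha)
    votes = y_train[np.argsort(d)[:k]]                      # k nearest by Z distance
    return np.bincount(votes).argmax()
```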
239,307 | https://paperswithcode.com/paper/lexico-semantic-and-affective-modelling-of | 2109.04152 | Lexico-semantic and affective modelling of Spanish poetry: A semi-supervised learning approach | Text classification tasks have improved substantially during the last years by the usage of transformers. However, the majority of researches focus on prose texts, with poetry receiving less attention, specially for Spanish language. In this paper, we propose a semi-supervised learning approach for inferring 21 psychological categories evoked by a corpus of 4572 sonnets, along with 10 affective and lexico-semantic multiclass ones. The subset of poems used for training an evaluation includes 270 sonnets. With our approach, we achieve an AUC beyond 0.7 for 76% of the psychological categories, and an AUC over 0.65 for 60% on the multiclass ones. The sonnets are modelled using transformers, through sentence embeddings, along with lexico-semantic and affective features, obtained by using external lexicons. Consequently, we see that this approach provides an AUC increase of up to 0.12, as opposed to using transformers alone. | https://arxiv.org/abs/2109.04152v2 | https://arxiv.org/pdf/2109.04152v2.pdf | null | [
"Alberto Barbado",
"María Dolores González",
"Débora Carrera"
] | [
"Sentence Embedding",
"Text Classification",
"Text Classification"
] | 1,631,145,600,000 | [] | 167,757 |
52,060 | https://paperswithcode.com/paper/unsupervised-metric-learning-in-presence-of | 1807.07610 | Unsupervised Metric Learning in Presence of Missing Data | For many machine learning tasks, the input data lie on a low-dimensional
manifold embedded in a high dimensional space and, because of this
high-dimensional structure, most algorithms are inefficient. The typical
solution is to reduce the dimension of the input data using standard dimension
reduction algorithms such as ISOMAP, LAPLACIAN EIGENMAPS or LLES. This
approach, however, does not always work in practice as these algorithms require
that we have somewhat ideal data. Unfortunately, most data sets either have
missing entries or unacceptably noisy values. That is, real data are far from
ideal and we cannot use these algorithms directly. In this paper, we focus on
the case when we have missing data. Some techniques, such as matrix completion,
can be used to fill in missing data but these methods do not capture the
non-linear structure of the manifold. Here, we present a new algorithm
MR-MISSING that extends these previous algorithms and can be used to compute
low dimensional representation on data sets with missing entries. We
demonstrate the effectiveness of our algorithm by running three different
experiments. We visually verify the effectiveness of our algorithm on synthetic
manifolds, we numerically compare our projections against those computed by
first filling in data using nlPCA and mDRUR on the MNIST data set, and we also
show that we can do classification on MNIST with missing data. We also provide
a theoretical guarantee for MR-MISSING under some simplifying assumptions. | http://arxiv.org/abs/1807.07610v3 | http://arxiv.org/pdf/1807.07610v3.pdf | null | [
"Anna C. Gilbert",
"Rishi Sonthalia"
] | [
"Dimensionality Reduction",
"Matrix Completion",
"Metric Learning"
] | 1,531,958,400,000 | [] | 168,831 |
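The abstract above contrasts MR-MISSING with the simpler strategy of first filling in missing entries and then running a manifold method. The sketch below shows only that simpler baseline (mean imputation followed by ISOMAP) using scikit-learn; it is not the MR-MISSING algorithm, and the data and missingness rate are synthetic placeholders.

```python
# Baseline only: impute missing entries, then embed with ISOMAP.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.manifold import Isomap

def impute_then_embed(X_missing, n_components=2, n_neighbors=10):
    """X_missing: array with np.nan marking missing entries."""
    X_filled = SimpleImputer(strategy="mean").fit_transform(X_missing)
    return Isomap(n_neighbors=n_neighbors,
                  n_components=n_components).fit_transform(X_filled)

# Example on synthetic data with roughly 20% of entries removed:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[rng.random(X.shape) < 0.2] = np.nan
Y = impute_then_embed(X)
print(Y.shape)   # (500, 2)
```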
74,343 | https://paperswithcode.com/paper/beyond-the-one-step-greedy-approach-in | null | Beyond the One-Step Greedy Approach in Reinforcement Learning |
The famous Policy Iteration algorithm alternates between policy improvement and policy evaluation. Implementations of this algorithm with several variants of the latter evaluation stage, e.g, n-step and trace-based returns, have been analyzed in previous works. However, the case of multiple-step lookahead policy improvement, despite the recent increase in empirical evidence of its strength, has to our knowledge not been carefully analyzed yet. In this work, we introduce the first such analysis. Namely, we formulate variants of multiple-step policy improvement, derive new algorithms using these definitions and prove their convergence. Moreover, we show that recent prominent Reinforcement Learning algorithms are, in fact, instances of our framework. We thus shed light on their empirical success and give a recipe for deriving new algorithms for future study.
| https://icml.cc/Conferences/2018/Schedule?showEvent=2126 | http://proceedings.mlr.press/v80/efroni18a/efroni18a.pdf | ICML 2018 7 | [
"Yonathan Efroni",
"Gal Dalal",
"Bruno Scherrer",
"Shie Mannor"
] | [
"reinforcement-learning"
] | 1,530,403,200,000 | [] | 27,784 |
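The abstract above concerns policy improvement that looks more than one step ahead. As a rough tabular sketch of that idea, the improved policy below maximises an h-step lookahead return that bootstraps from the current value estimate, with h=1 recovering standard policy iteration. The MDP representation (P[s][a] as a list of (prob, next_state, reward) triples) is an assumption for the sketch, not the paper's formulation.

```python
# Tabular sketch of h-step greedy policy improvement.
import numpy as np

def h_step_value(s, h, V, P, gamma):
    """Optimal h-step lookahead value of state s, bootstrapping from V."""
    if h == 0:
        return V[s]
    return max(sum(p * (r + gamma * h_step_value(s2, h - 1, V, P, gamma))
                   for p, s2, r in P[s][a])
               for a in range(len(P[s])))

def h_greedy_policy(V, P, gamma=0.95, h=3):
    """One h-step greedy improvement step (h=1 is the usual greedy step)."""
    policy = np.zeros(len(P), dtype=int)
    for s in range(len(P)):
        q = [sum(p * (r + gamma * h_step_value(s2, h - 1, V, P, gamma))
                 for p, s2, r in P[s][a])
             for a in range(len(P[s]))]
        policy[s] = int(np.argmax(q))
    return policy
```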
200,637 | https://paperswithcode.com/paper/controlling-hallucinations-at-word-level-in | 2102.02810 | Controlling Hallucinations at Word Level in Data-to-Text Generation | Data-to-Text Generation (DTG) is a subfield of Natural Language Generation aiming at transcribing structured data in natural language descriptions. The field has been recently boosted by the use of neural-based generators which exhibit on one side great syntactic skills without the need of hand-crafted pipelines; on the other side, the quality of the generated text reflects the quality of the training data, which in realistic settings only offer imperfectly aligned structure-text pairs. Consequently, state-of-art neural models include misleading statements - usually called hallucinations - in their outputs. The control of this phenomenon is today a major challenge for DTG, and is the problem addressed in the paper. Previous work deal with this issue at the instance level: using an alignment score for each table-reference pair. In contrast, we propose a finer-grained approach, arguing that hallucinations should rather be treated at the word level. Specifically, we propose a Multi-Branch Decoder which is able to leverage word-level labels to learn the relevant parts of each training instance. These labels are obtained following a simple and efficient scoring procedure based on co-occurrence analysis and dependency parsing. Extensive evaluations, via automated metrics and human judgment on the standard WikiBio benchmark, show the accuracy of our alignment labels and the effectiveness of the proposed Multi-Branch Decoder. Our model is able to reduce and control hallucinations, while keeping fluency and coherence in generated texts. Further experiments on a degraded version of ToTTo show that our model could be successfully used on very noisy settings. | https://arxiv.org/abs/2102.02810v2 | https://arxiv.org/pdf/2102.02810v2.pdf | null | [
"Clément Rebuffel",
"Marco Roberti",
"Laure Soulier",
"Geoffrey Scoutheeten",
"Rossella Cancelliere",
"Patrick Gallinari"
] | [
"Data-to-Text Generation",
"Dependency Parsing",
"Table-to-Text Generation",
"Text Generation"
] | 1,612,396,800,000 | [] | 170,755 |
243,757 | https://paperswithcode.com/paper/on-the-robustness-of-model-based-algorithms | 2109.14028 | On the robustness of model-based algorithms for photoacoustic tomography: comparison between time and frequency domains | For photoacoustic image reconstruction, certain parameters such as sensor positions and speed of sound have a major impact in the reconstruction process and must be carefully determined before data acquisition. Uncertainties in these parameters can lead to errors produced by a modeling mismatch, hindering the reconstruction process and severely affecting the resulting image quality. Therefore, in this work we study how modeling errors arising from uncertainty in sensor locations affect the images obtained by matrix model-based reconstruction algorithms based on time domain and frequency domain models of the photoacoustic problem. The effects on the reconstruction performance with respect to the uncertainty in the knowledge of the sensors location is compared and analyzed both in a qualitative and quantitative fashion for both time and frequency models. Ultimately, our study shows that the frequency domain approach is more sensitive to this kind of modeling errors. These conclusions are supported by numerical experiments and a theoretical sensitivity analysis of the mathematical operator for the direct problem. | https://arxiv.org/abs/2109.14028v1 | https://arxiv.org/pdf/2109.14028v1.pdf | null | [
"L. Hirsch",
"M. G. Gonzalez",
"L. Rey Vega"
] | [
"Image Reconstruction"
] | 1,632,787,200,000 | [] | 156,287 |
165,396 | https://paperswithcode.com/paper/graph-neural-induction-of-value-iteration | 2009.12604 | Graph neural induction of value iteration | Many reinforcement learning tasks can benefit from explicit planning based on an internal model of the environment. Previously, such planning components have been incorporated through a neural network that partially aligns with the computational graph of value iteration. Such network have so far been focused on restrictive environments (e.g. grid-worlds), and modelled the planning procedure only indirectly. We relax these constraints, proposing a graph neural network (GNN) that executes the value iteration (VI) algorithm, across arbitrary environment models, with direct supervision on the intermediate steps of VI. The results indicate that GNNs are able to model value iteration accurately, recovering favourable metrics and policies across a variety of out-of-distribution tests. This suggests that GNN executors with strong supervision are a viable component within deep reinforcement learning systems. | https://arxiv.org/abs/2009.12604v1 | https://arxiv.org/pdf/2009.12604v1.pdf | null | [
"Andreea Deac",
"Pierre-Luc Bacon",
"Jian Tang"
] | [
"reinforcement-learning"
] | 1,601,078,400,000 | [] | 79,107 |
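The GNN in the record above is supervised on the intermediate steps of value iteration. The sketch below is a plain NumPy version of that target procedure, returning the sequence of intermediate value vectors that such supervision would use; dense transition and reward tensors are an assumption made for brevity, and this is not the GNN model itself.

```python
# Plain value iteration over an arbitrary tabular environment model.
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=50):
    """P: (S, A, S) transition probabilities, R: (S, A) expected rewards.
    Returns the intermediate value vectors V_0, ..., V_T."""
    S, A, _ = P.shape
    V = np.zeros(S)
    trajectory = [V.copy()]              # intermediate steps (used as supervision)
    for _ in range(iters):
        Q = R + gamma * P @ V            # (S, A): one-step lookahead
        V = Q.max(axis=1)                # greedy backup
        trajectory.append(V.copy())
    return trajectory
```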
8,726 | https://paperswithcode.com/paper/algorithmic-detectability-threshold-of-the | 1710.08841 | Algorithmic detectability threshold of the stochastic block model | The assumption that the values of model parameters are known or correctly
learned, i.e., the Nishimori condition, is one of the requirements for the
detectability analysis of the stochastic block model in statistical inference.
In practice, however, there is no example demonstrating that we can know the
model parameters beforehand, and there is no guarantee that the model
parameters can be learned accurately. In this study, we consider the
expectation--maximization (EM) algorithm with belief propagation (BP) and
derive its algorithmic detectability threshold. Our analysis is not restricted
to the community structure, but includes general modular structures. Because
the algorithm cannot always learn the planted model parameters correctly, the
algorithmic detectability threshold is qualitatively different from the one
with the Nishimori condition. | http://arxiv.org/abs/1710.08841v2 | http://arxiv.org/pdf/1710.08841v2.pdf | null | [
"Tatsuro Kawamoto"
] | [
"Stochastic Block Model"
] | 1,508,803,200,000 | [] | 85,720 |
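The analysis above concerns EM with belief propagation on the stochastic block model. As a lightweight, self-contained illustration of the setting only, the sketch below generates an SBM graph with networkx and recovers the planted partition with spectral clustering, which is a different and much simpler algorithm than the EM+BP procedure the paper analyses; the group sizes and edge probabilities are arbitrary.

```python
# Generate a two-group SBM and recover the planted partition (spectral baseline).
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

sizes = [100, 100]                           # two planted groups
p = [[0.10, 0.02], [0.02, 0.10]]             # within- vs between-group edge probabilities
G = nx.stochastic_block_model(sizes, p, seed=0)

A = nx.to_numpy_array(G)                     # adjacency used as affinity
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
truth = np.array([0] * sizes[0] + [1] * sizes[1])
acc = max(np.mean(labels == truth), np.mean((1 - labels) == truth))
print(f"recovered partition accuracy: {acc:.2f}")
```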
106,981 | https://paperswithcode.com/paper/convergence-of-adversarial-training-in | 1906.07916 | Convergence of Adversarial Training in Overparametrized Neural Networks | Neural networks are vulnerable to adversarial examples, i.e. inputs that are imperceptibly perturbed from natural data and yet incorrectly classified by the network. Adversarial training, a heuristic form of robust optimization that alternates between minimization and maximization steps, has proven to be among the most successful methods to train networks to be robust against a pre-defined family of perturbations. This paper provides a partial answer to the success of adversarial training, by showing that it converges to a network where the surrogate loss with respect to the the attack algorithm is within $\epsilon$ of the optimal robust loss. Then we show that the optimal robust loss is also close to zero, hence adversarial training finds a robust classifier. The analysis technique leverages recent work on the analysis of neural networks via Neural Tangent Kernel (NTK), combined with motivation from online-learning when the maximization is solved by a heuristic, and the expressiveness of the NTK kernel in the $\ell_\infty$-norm. In addition, we also prove that robust interpolation requires more model capacity, supporting the evidence that adversarial training requires wider networks. | https://arxiv.org/abs/1906.07916v2 | https://arxiv.org/pdf/1906.07916v2.pdf | NeurIPS 2019 12 | [
"Ruiqi Gao",
"Tianle Cai",
"Haochuan Li",
"Li-Wei Wang",
"Cho-Jui Hsieh",
"Jason D. Lee"
] | [
"online learning"
] | 1,560,902,400,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": "Neural Tangent Kernel",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Kernel Methods",
"parent": null
},
"name": "NTK",
"source_title": "Neural Tangent Kernel: Convergence and Generalization in Neural Networks",
"source_url": "https://arxiv.org/abs/1806.07572v4"
}
] | 188,276 |
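The paper above analyses adversarial training as alternating minimisation and maximisation steps, with the inner maximisation solved heuristically. The sketch below is a generic PyTorch version of that loop with an l-infinity PGD attack; the model, data loader and hyper-parameters are placeholders and this is not the authors' experimental setup.

```python
# Generic adversarial training loop: PGD inner maximisation, SGD-style outer step.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Heuristic inner maximisation: projected gradient ascent on the loss."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def adversarial_training_epoch(model, loader, optimizer):
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)            # inner maximisation (heuristic)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)    # outer minimisation
        loss.backward()
        optimizer.step()
```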
177,223 | https://paperswithcode.com/paper/persistent-reductions-in-regularized-loss | 2011.14549 | Persistent Reductions in Regularized Loss Minimization for Variable Selection | In the context of regularized loss minimization with polyhedral gauges, we show that for a broad class of loss functions (possibly non-smooth and non-convex) and under a simple geometric condition on the input data it is possible to efficiently identify a subset of features which are guaranteed to have zero coefficients in all optimal solutions in all problems with loss functions from said class, before any iterative optimization has been performed for the original problem. This procedure is standalone, takes only the data as input, and does not require any calls to the loss function. Therefore, we term this procedure as a persistent reduction for the aforementioned class of regularized loss minimization problems. This reduction can be efficiently implemented via an extreme ray identification subroutine applied to a polyhedral cone formed from the datapoints. We employ an existing output-sensitive algorithm for extreme ray identification which makes our guarantee and algorithm applicable in ultra-high dimensional problems. | https://arxiv.org/abs/2011.14549v1 | https://arxiv.org/pdf/2011.14549v1.pdf | null | [
"Amin Jalali"
] | [
"Variable Selection"
] | 1,606,694,400,000 | [] | 139,864 |
172,030 | https://paperswithcode.com/paper/on-learning-text-style-transfer-with-direct | 2010.12771 | On Learning Text Style Transfer with Direct Rewards | In most cases, the lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task. In this paper, we explore training algorithms that instead optimize reward functions that explicitly consider different aspects of the style-transferred outputs. In particular, we leverage semantic similarity metrics originally used for fine-tuning neural machine translation models to explicitly assess the preservation of content between system outputs and input texts. We also investigate the potential weaknesses of the existing automatic metrics and propose efficient strategies of using these metrics for training. The experimental results show that our model provides significant gains in both automatic and human evaluation over strong baselines, indicating the effectiveness of our proposed methods and training strategies. | https://arxiv.org/abs/2010.12771v2 | https://arxiv.org/pdf/2010.12771v2.pdf | NAACL 2021 4 | [
"Yixin Liu",
"Graham Neubig",
"John Wieting"
] | [
"Machine Translation",
"Semantic Similarity",
"Semantic Textual Similarity",
"Style Transfer",
"Text Style Transfer"
] | 1,603,497,600,000 | [] | 75,609 |
117,146 | https://paperswithcode.com/paper/dimension-estimation-using-autoencoders | 1909.10702 | Dimension Estimation Using Autoencoders | Dimension Estimation (DE) and Dimension Reduction (DR) are two closely related topics, but with quite different goals. In DE, one attempts to estimate the intrinsic dimensionality or number of latent variables in a set of measurements of a random vector. However, in DR, one attempts to project a random vector, either linearly or non-linearly, to a lower dimensional space that preserves the information contained in the original higher dimensional space. Of course, these two ideas are quite closely linked since, for example, doing DR to a dimension smaller than suggested by DE will likely lead to information loss. Accordingly, in this paper we will focus on a particular class of deep neural networks called autoencoders which are used extensively for DR but are less well studied for DE. We show that several important questions arise when using autoencoders for DE, above and beyond those that arise for more classic DR/DE techniques such as Principal Component Analysis. We address autoencoder architectural choices and regularization techniques that allow one to transform autoencoder latent layer representations into estimates of intrinsic dimension. | https://arxiv.org/abs/1909.10702v1 | https://arxiv.org/pdf/1909.10702v1.pdf | null | [
"Nitish Bahadur",
"Randy Paffenroth"
] | [
"Dimensionality Reduction"
] | 1,569,283,200,000 | [
{
"code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38",
"description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder).\r\n\r\nImage: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)",
"full_name": "AutoEncoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "AutoEncoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] | 151,406 |
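The autoencoder entry above describes a bottleneck architecture whose latent layer can be probed for intrinsic dimension. The sketch below is a minimal PyTorch bottleneck autoencoder in that spirit; the layer widths, latent size `k` and the idea of sweeping `k` against reconstruction error are illustrative assumptions, not the paper's specific architecture or regularization.

```python
# Minimal bottleneck autoencoder; the latent width k is a candidate intrinsic dimension.
import torch
import torch.nn as nn

class BottleneckAE(nn.Module):
    def __init__(self, in_dim=784, k=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, k))            # latent code
        self.decoder = nn.Sequential(nn.Linear(k, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Reconstruction error as a function of k is one signal for the intrinsic dimension:
model = BottleneckAE(k=10)
x = torch.randn(32, 784)
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)
```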
279,361 | https://paperswithcode.com/paper/exploring-edge-disentanglement-for-node | 2202.11245 | Exploring Edge Disentanglement for Node Classification | Edges in real-world graphs are typically formed by a variety of factors and carry diverse relation semantics. For example, connections in a social network could indicate friendship, being colleagues, or living in the same neighborhood. However, these latent factors are usually concealed behind mere edge existence due to the data collection and graph formation processes. Despite rapid developments in graph learning over these years, most models take a holistic approach and treat all edges as equal. One major difficulty in disentangling edges is the lack of explicit supervisions. In this work, with close examination of edge patterns, we propose three heuristics and design three corresponding pretext tasks to guide the automatic edge disentanglement. Concretely, these self-supervision tasks are enforced on a designed edge disentanglement module to be trained jointly with the downstream node classification task to encourage automatic edge disentanglement. Channels of the disentanglement module are expected to capture distinguishable relations and neighborhood interactions, and outputs from them are aggregated as node representations. The proposed DisGNN is easy to be incorporated with various neural architectures, and we conduct experiments on $6$ real-world datasets. Empirical results show that it can achieve significant performance gains. | https://arxiv.org/abs/2202.11245v1 | https://arxiv.org/pdf/2202.11245v1.pdf | null | [
"Tianxiang Zhao",
"Xiang Zhang",
"Suhang Wang"
] | [
"Classification",
"Disentanglement",
"Graph Learning",
"Node Classification"
] | 1,645,574,400,000 | [] | 37,332 |
31,271 | https://paperswithcode.com/paper/multi-task-and-lifelong-learning-of-kernels | 1602.06531 | Multi-task and Lifelong Learning of Kernels | We consider a problem of learning kernels for use in SVM classification in
the multi-task and lifelong scenarios and provide generalization bounds on the
error of a large margin classifier. Our results show that, under mild
conditions on the family of kernels used for learning, solving several related
tasks simultaneously is beneficial over single task learning. In particular, as
the number of observed tasks grows, assuming that in the considered family of
kernels there exists one that yields low approximation error on all tasks, the
overhead associated with learning such a kernel vanishes and the complexity
converges to that of learning when this good kernel is given to the learner. | http://arxiv.org/abs/1602.06531v2 | http://arxiv.org/pdf/1602.06531v2.pdf | null | [
"Anastasia Pentina",
"Shai Ben-David"
] | [
"Classification",
"Generalization Bounds"
] | 1,456,012,800,000 | [
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] | 101,183 |
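The abstract above studies learning a kernel that works well across several related SVM tasks. The sketch below only illustrates that setting with scikit-learn: it picks one kernel from a small family by its average cross-validated accuracy pooled over the tasks, then that kernel can be reused for a new task. It is not the paper's algorithm, and the kernel family and scoring are illustrative assumptions.

```python
# Pick a single shared kernel by average cross-validated SVM accuracy over tasks.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_shared_kernel(tasks, kernels=({"kernel": "linear"},
                                         {"kernel": "rbf", "gamma": 0.1},
                                         {"kernel": "rbf", "gamma": 1.0})):
    """tasks: list of (X, y) pairs; returns the kernel params with the best
    average cross-validated accuracy across all tasks."""
    scores = []
    for params in kernels:
        per_task = [cross_val_score(SVC(**params), X, y, cv=3).mean()
                    for X, y in tasks]
        scores.append(np.mean(per_task))
    return kernels[int(np.argmax(scores))]
```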
40,291 | https://paperswithcode.com/paper/multilingual-open-relation-extraction-using | 1503.06450 | Multilingual Open Relation Extraction Using Cross-lingual Projection | Open domain relation extraction systems identify relation and argument phrases in a sentence without relying on any underlying schema. However, current state-of-the-art relation extraction systems are available only for English because of their heavy reliance on linguistic tools such as part-of-speech taggers and dependency parsers. We present a cross-lingual annotation projection method for language independent relation extraction. We evaluate our method on a manually annotated test set and present results on three typologically different languages. We release these manual annotations and extracted relations in 61 languages from Wikipedia. | https://arxiv.org/abs/1503.06450v3 | https://arxiv.org/pdf/1503.06450v3.pdf | HLT 2015 5 | [
"Manaal Faruqui",
"Shankar Kumar"
] | [
"Relation Extraction"
] | 1,426,982,400,000 | [] | 186,275 |
173,040 | https://paperswithcode.com/paper/warped-language-models-for-noise-robust | 2011.01900 | Warped Language Models for Noise Robust Language Understanding | Masked Language Models (MLM) are self-supervised neural networks trained to fill in the blanks in a given sentence with masked tokens. Despite the tremendous success of MLMs for various text based tasks, they are not robust for spoken language understanding, especially for spontaneous conversational speech recognition noise. In this work we introduce Warped Language Models (WLM) in which input sentences at training time go through the same modifications as in MLM, plus two additional modifications, namely inserting and dropping random tokens. These two modifications extend and contract the sentence in addition to the modifications in MLMs, hence the word "warped" in the name. The insertion and drop modification of the input text during training of WLM resemble the types of noise due to Automatic Speech Recognition (ASR) errors, and as a result WLMs are likely to be more robust to ASR noise. Through computational results we show that natural language understanding systems built on top of WLMs perform better compared to those built based on MLMs, especially in the presence of ASR errors. | https://arxiv.org/abs/2011.01900v1 | https://arxiv.org/pdf/2011.01900v1.pdf | null | [
"Mahdi Namazifar",
"Gokhan Tur",
"Dilek Hakkani Tür"
] | [
"Automatic Speech Recognition",
"Natural Language Understanding",
"Speech Recognition",
"Speech Recognition",
"Spoken Language Understanding"
] | 1,604,361,600,000 | [] | 37,172 |
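The abstract above describes warping an input sentence by masking, inserting and dropping random tokens so the corrupted sequence changes length, loosely mimicking ASR insertion and deletion errors. The sketch below is one simple way to implement such a corruption function; the probabilities, mask token and vocabulary are illustrative, not the paper's configuration.

```python
# Corrupt a token sequence by masking, dropping and inserting random tokens.
import random

def warp_tokens(tokens, vocab, p_mask=0.1, p_drop=0.05, p_insert=0.05,
                mask_token="[MASK]", seed=None):
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p_drop:
            continue                              # drop the token entirely
        if r < p_drop + p_mask:
            out.append(mask_token)                # replace with the mask token
        else:
            out.append(tok)                       # keep the token
        if rng.random() < p_insert:
            out.append(rng.choice(vocab))         # insert a random token
    return out

# Example:
# warp_tokens("the cat sat on the mat".split(), vocab=["dog", "ran", "a"], seed=0)
```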
278,109 | https://paperswithcode.com/paper/timereise-time-series-randomized-evolving | 2202.07952 | TimeREISE: Time-series Randomized Evolving Input Sample Explanation | Deep neural networks are one of the most successful classifiers across different domains. However, due to their limitations concerning interpretability their use is limited in safety critical context. The research field of explainable artificial intelligence addresses this problem. However, most of the interpretability methods are aligned to the image modality by design. The paper introduces TimeREISE a model agnostic attribution method specifically aligned to success in the context of time series classification. The method shows superior performance compared to existing approaches concerning different well-established measurements. TimeREISE is applicable to any time series classification network, its runtime does not scale in a linear manner concerning the input shape and it does not rely on prior data knowledge. | https://arxiv.org/abs/2202.07952v2 | https://arxiv.org/pdf/2202.07952v2.pdf | null | [
"Dominique Mercier",
"Andreas Dengel",
"Sheraz Ahmed"
] | [
"Classification",
"Explainable artificial intelligence",
"Time Series",
"Time Series Classification"
] | 1,644,969,600,000 | [] | 68,235 |
168,724 | https://paperswithcode.com/paper/high-order-semantic-role-labeling | 2010.04641 | High-order Semantic Role Labeling | Semantic role labeling is primarily used to identify predicates, arguments, and their semantic relationships. Due to the limitations of modeling methods and the conditions of pre-identified predicates, previous work has focused on the relationships between predicates and arguments and the correlations between arguments at most, while the correlations between predicates have been neglected for a long time. High-order features and structure learning were very common in modeling such correlations before the neural network era. In this paper, we introduce a high-order graph structure for the neural semantic role labeling model, which enables the model to explicitly consider not only the isolated predicate-argument pairs but also the interaction between the predicate-argument pairs. Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to the strong performing SRL models and further boost our baseline to achieve new state-of-the-art results. | https://arxiv.org/abs/2010.04641v1 | https://arxiv.org/pdf/2010.04641v1.pdf | Findings of the Association for Computational Linguistics 2020 | [
"Zuchao Li",
"Hai Zhao",
"Rui Wang",
"Kevin Parnow"
] | [
"Semantic Role Labeling"
] | 1,602,201,600,000 | [] | 92,743 |
116,899 | https://paperswithcode.com/paper/190909801 | 1909.09801 | Adversarial Learning of General Transformations for Data Augmentation | Data augmentation (DA) is fundamental against overfitting in large convolutional neural networks, especially with a limited training dataset. In images, DA is usually based on heuristic transformations, like geometric or color transformations. Instead of using predefined transformations, our work learns data augmentation directly from the training data by learning to transform images with an encoder-decoder architecture combined with a spatial transformer network. The transformed images still belong to the same class but are new, more complex samples for the classifier. Our experiments show that our approach is better than previous generative data augmentation methods, and comparable to predefined transformation methods when training an image classifier. | https://arxiv.org/abs/1909.09801v1 | https://arxiv.org/pdf/1909.09801v1.pdf | ICLR Workshop LLD 2019 | [
"Saypraseuth Mounsaveng",
"David Vazquez",
"Ismail Ben Ayed",
"Marco Pedersoli"
] | [
"Data Augmentation"
] | 1,569,024,000,000 | [
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/kevinzakka/spatial-transformer-network/blob/375f99046383316b18edfb5c575dc390c4ee3193/stn/transformer.py#L4",
"description": "A **Spatial Transformer** is an image model block that explicitly allows the spatial manipulation of data within a [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks). It gives CNNs the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. Unlike pooling layers, where the receptive fields are fixed and local, the spatial transformer module is a dynamic mechanism that can actively spatially transform an image (or a feature map) by producing an appropriate transformation for each input sample. The transformation is then performed on the entire feature map (non-locally) and can include scaling, cropping, rotations, as well as non-rigid deformations.\r\n\r\nThe architecture is shown in the Figure to the right. The input feature map $U$ is passed to a localisation network which regresses the transformation parameters $\\theta$. The regular spatial grid $G$ over $V$ is transformed to the sampling grid $T\\_{\\theta}\\left(G\\right)$, which is applied to $U$, producing the warped output feature map $V$. The combination of the localisation network and sampling mechanism defines a spatial transformer.",
"full_name": "Spatial Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Spatial Transformer",
"source_title": "Spatial Transformer Networks",
"source_url": "http://arxiv.org/abs/1506.02025v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
}
] | 49,007 |
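The Layer Normalization and Scaled Dot-Product Attention entries listed above both state their defining equations. As a companion, here is a minimal NumPy sketch that evaluates those equations directly; it assumes a `(tokens, hidden)` activation matrix, and the function names, the `eps` term, and the optional gain/bias are illustrative additions, not part of the PyTorch reference implementations linked in the `code_snippet_url` fields.

```python
import numpy as np

def layer_norm(a, gain=None, bias=None, eps=1e-5):
    """Layer Normalization: statistics are computed over the hidden units of
    each example (last axis), matching the formulas for mu^l and sigma^l."""
    mu = a.mean(axis=-1, keepdims=True)
    sigma = np.sqrt(((a - mu) ** 2).mean(axis=-1, keepdims=True))
    out = (a - mu) / (sigma + eps)          # eps only guards against division by zero
    if gain is not None:
        out = out * gain
    if bias is not None:
        out = out + bias
    return out

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

# Toy usage: a "sequence" of 4 tokens with d_k = 8, used as Q, K and V at once.
x = np.random.randn(4, 8)
h = layer_norm(x)
y = scaled_dot_product_attention(h, h, h)
print(h.shape, y.shape)   # (4, 8) (4, 8)
```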
56,996 | https://paperswithcode.com/paper/bayesian-deep-convolutional-networks-with | 1810.05148 | Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes | There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs). This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP. In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels. We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible. Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical. As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case. We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation. | https://arxiv.org/abs/1810.05148v4 | https://arxiv.org/pdf/1810.05148v4.pdf | null | [
"Roman Novak",
"Lechao Xiao",
"Jaehoon Lee",
"Yasaman Bahri",
"Greg Yang",
"Jiri Hron",
"Daniel A. Abolafia",
"Jeffrey Pennington",
"Jascha Sohl-Dickstein"
] | [
"Gaussian Processes"
] | 1,539,216,000,000 | [
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/bf843c664b8ba0ff49d2921237500c77d82f2d04/torchvision/models/segmentation/fcn.py#L9",
"description": "**Fully Convolutional Networks**, or **FCNs**, are an architecture used mainly for semantic segmentation. They employ solely locally connected layers, such as [convolution](https://paperswithcode.com/method/convolution), pooling and upsampling. Avoiding the use of dense layers means less parameters (making the networks faster to train). It also means an FCN can work for variable image sizes given all connections are local.\r\n\r\nThe network consists of a downsampling path, used to extract and interpret the context, and an upsampling path, which allows for localization. \r\n\r\nFCNs also employ skip connections to recover the fine-grained spatial information lost in the downsampling path.",
"full_name": "Fully Convolutional Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "FCN",
"source_title": "Fully Convolutional Networks for Semantic Segmentation",
"source_url": "http://arxiv.org/abs/1605.06211v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] | 196,864 |
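The methods metadata of the record above describes convolution (slide a kernel over the input, multiply element-wise, sum), max pooling (take the maximum of each patch), and the SGD update $w_{t+1} = w_{t} - \eta\hat{\nabla}_{w}L(w_{t})$. The sketch below is a hedged, single-channel toy version of each operation in NumPy, with no padding, stride options, or momentum; it is not the torchvision/PyTorch code the `code_snippet_url` fields point to.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution in the cross-correlation form used by CNN layers:
    the kernel slides over the image, multiplies element-wise, and sums."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool2d(x, size=2):
    """Max pooling: the maximum value of each non-overlapping size x size patch."""
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def sgd_step(w, grad, lr=0.1):
    """One SGD update: w <- w - eta * minibatch gradient estimate."""
    return w - lr * grad

img = np.random.randn(6, 6)
feat = max_pool2d(conv2d(img, np.ones((3, 3)) / 9.0))   # 4x4 feature map -> 2x2
w = sgd_step(np.zeros(3), grad=np.array([0.5, -1.0, 0.2]))
print(feat.shape, w)   # (2, 2) [-0.05  0.1  -0.02]
```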
38,450 | https://paperswithcode.com/paper/extended-formulations-for-online-linear | 1311.5022 | Extended Formulations for Online Linear Bandit Optimization | On-line linear optimization on combinatorial action sets (d-dimensional
actions) with bandit feedback is known to have complexity in the order of the
dimension of the problem. The exponential weighted strategy achieves the best
known regret bound that is of the order of $d^{2}\sqrt{n}$ (where $d$ is the
dimension of the problem, $n$ is the time horizon). However, such strategies
are provably suboptimal or computationally inefficient. The complexity is
attributed to the combinatorial structure of the action set and the dearth of
efficient exploration strategies of the set. Mirror descent with an entropic
regularization function comes close to solving this problem by enforcing a
meticulous projection of weights with an inherent boundary condition. Entropic
regularization in mirror descent is the only known way of achieving a
logarithmic dependence on the dimension. Here, we argue otherwise and recover
the original intuition of exponential weighting by borrowing a technique from
discrete optimization and approximation algorithms called `extended
formulation'. Such formulations appeal to the underlying geometry of the set
with a guaranteed logarithmic dependence on the dimension underpinned by an
information theoretic entropic analysis. | http://arxiv.org/abs/1311.5022v3 | http://arxiv.org/pdf/1311.5022v3.pdf | null | [
"Shaona Ghosh",
"Adam Prugel-Bennett"
] | [
"Efficient Exploration"
] | 1,384,905,600,000 | [] | 53,780 |
313,695 | https://paperswithcode.com/paper/data-augmentation-on-graphs-for-table-type | 2208.11210 | Data augmentation on graphs for table type classification | Tables are widely used in documents because of their compact and structured representation of information. In particular, in scientific papers, tables can sum up novel discoveries and summarize experimental results, making the research comparable and easily understandable by scholars. Since the layout of tables is highly variable, it would be useful to interpret their content and classify them into categories. This could be helpful to directly extract information from scientific papers, for instance comparing performance of some models given their paper result tables. In this work, we address the classification of tables using a Graph Neural Network, exploiting the table structure for the message passing algorithm in use. We evaluate our model on a subset of the Tab2Know dataset. Since it contains few examples manually annotated, we propose data augmentation techniques directly on the table graph structures. We achieve promising preliminary results, proposing a data augmentation method suitable for graph-based table representation. | https://arxiv.org/abs/2208.11210v1 | https://arxiv.org/pdf/2208.11210v1.pdf | null | [
"Davide del Bimbo",
"Andrea Gemelli",
"Simone Marinai"
] | [
"Classification",
"Data Augmentation"
] | 1,661,212,800,000 | [] | 162,554 |
9,356 | https://paperswithcode.com/paper/deep-multi-view-spatial-temporal-network-for | 1802.08714 | Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction | Taxi demand prediction is an important building block to enabling intelligent
transportation systems in a smart city. An accurate prediction model can help
the city pre-allocate resources to meet travel demand and to reduce empty taxis
on streets which waste energy and worsen the traffic congestion. With the
increasing popularity of taxi requesting services such as Uber and Didi Chuxing
(in China), we are able to collect large-scale taxi demand data continuously.
How to utilize such big data to improve the demand prediction is an interesting
and critical real-world problem. Traditional demand prediction methods mostly
rely on time series forecasting techniques, which fail to model the complex
non-linear spatial and temporal relations. Recent advances in deep learning
have shown superior performance on traditionally challenging tasks such as
image classification by learning the complex features and correlations from
large-scale data. This breakthrough has inspired researchers to explore deep
learning techniques on traffic prediction problems. However, existing methods
on traffic prediction have only considered spatial relation (e.g., using CNN)
or temporal relation (e.g., using LSTM) independently. We propose a Deep
Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial
and temporal relations. Specifically, our proposed model consists of three
views: temporal view (modeling correlations between future demand values with
near time points via LSTM), spatial view (modeling local spatial correlation
via local CNN), and semantic view (modeling correlations among regions sharing
similar temporal patterns). Experiments on large-scale real taxi demand data
demonstrate effectiveness of our approach over state-of-the-art methods. | http://arxiv.org/abs/1802.08714v2 | http://arxiv.org/pdf/1802.08714v2.pdf | null | [
"Huaxiu Yao",
"Fei Wu",
"Jintao Ke",
"Xianfeng Tang",
"Yitian Jia",
"Siyu Lu",
"Pinghua Gong",
"Jieping Ye",
"Zhenhui Li"
] | [
"Image Classification",
"Time Series",
"Time Series Forecasting",
"Traffic Prediction"
] | 1,519,344,000,000 | [] | 103,942 |
216,436 | https://paperswithcode.com/paper/phenotyping-osa-a-time-series-analysis-using | 2104.13479 | Phenotyping OSA: a time series analysis using fuzzy clustering and persistent homology | Sleep apnea is a disorder that has serious consequences for the pediatric population. There has been recent concern that traditional diagnosis of the disorder using the apnea-hypopnea index may be ineffective in capturing its multi-faceted outcomes. In this work, we take a first step in addressing this issue by phenotyping patients using a clustering analysis of airflow time series. This is approached in three ways: using feature-based fuzzy clustering in the time and frequency domains, and using persistent homology to study the signal from a topological perspective. The fuzzy clusters are analyzed in a novel manner using a Dirichlet regression analysis, while the topological approach leverages Takens embedding theorem to study the periodicity properties of the signals. | https://arxiv.org/abs/2104.13479v1 | https://arxiv.org/pdf/2104.13479v1.pdf | null | [
"Prachi Loliencar",
"Giseon Heo"
] | [
"Time Series",
"Time Series Analysis"
] | 1,619,481,600,000 | [] | 132,745 |
56,619 | https://paperswithcode.com/paper/inhibited-softmax-for-uncertainty-estimation | 1810.01861 | Inhibited Softmax for Uncertainty Estimation in Neural Networks | We present a new method for uncertainty estimation and out-of-distribution
detection in neural networks with softmax output. We extend softmax layer with
an additional constant input. The corresponding additional output is able to
represent the uncertainty of the network. The proposed method requires neither
additional parameters nor multiple forward passes nor input preprocessing nor
out-of-distribution datasets. We show that our method performs comparably to
more computationally expensive methods and outperforms baselines on our
experiments from image recognition and sentiment analysis domains. | http://arxiv.org/abs/1810.01861v2 | http://arxiv.org/pdf/1810.01861v2.pdf | null | [
"Marcin Możejko",
"Mateusz Susik",
"Rafał Karczewski"
] | [
"Out-of-Distribution Detection",
"Sentiment Analysis"
] | 1,538,524,800,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] | 175,052 |
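The Softmax entry in the record above gives the class probabilities as $P(y=j \mid x) = e^{x^{T}w_{j}} / \sum^{K}_{k=1}e^{x^{T}w_{k}}$. The following sketch evaluates that expression with the standard max-shift for numerical stability; the toy weight matrix is a made-up illustration and is unrelated to the Inhibited Softmax model described in the paper itself.

```python
import numpy as np

def softmax_probs(x, W):
    """P(y = j | x) = exp(x^T w_j) / sum_k exp(x^T w_k), with W of shape
    (features, classes). Subtracting the max logit leaves the result unchanged
    but prevents overflow in exp."""
    logits = x @ W
    logits = logits - logits.max()
    e = np.exp(logits)
    return e / e.sum()

x = np.array([1.0, -2.0, 0.5])
W = np.random.randn(3, 4)      # 3 features, 4 classes (toy values)
p = softmax_probs(x, W)
print(p, p.sum())              # four probabilities summing to 1
```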
62,951 | https://paperswithcode.com/paper/joint-learning-for-targeted-sentiment | null | Joint Learning for Targeted Sentiment Analysis | Targeted sentiment analysis (TSA) aims at extracting targets and classifying their sentiment classes. Previous works only exploit word embeddings as features and do not explore more potentials of neural networks when jointly learning the two tasks. In this paper, we carefully design the hierarchical stack bidirectional gated recurrent units (HSBi-GRU) model to learn abstract features for both tasks, and we propose a HSBi-GRU based joint model which allows the target label to have influence on their sentiment label. Experimental results on two datasets show that our joint learning model can outperform other baselines and demonstrate the effectiveness of HSBi-GRU in learning abstract features. | https://aclanthology.org/D18-1504 | https://aclanthology.org/D18-1504.pdf | EMNLP 2018 10 | [
"Dehong Ma",
"Sujian Li",
"Houfeng Wang"
] | [
"Sentiment Analysis",
"Word Embeddings"
] | 1,538,352,000,000 | [] | 382 |
273,004 | https://paperswithcode.com/paper/lemon-language-based-environment-manipulation | 2201.08081 | LEMON: Language-Based Environment Manipulation via Execution-Guided Pre-training | Language-based environment manipulation requires agents to manipulate the environment following natural language instructions, which is challenging due to the huge space of the environments. To address this challenge, various approaches have been proposed in recent work. Although these approaches work well for their intended environments, they are difficult to generalize across environments. In this work, we propose LEMON, a general framework for language-based environment manipulation tasks. Specifically, we first specify a general approach for language-based environment manipulation tasks, which can deal with various environments using the same generative language model. Then we propose an execution-guided pre-training strategy to inject prior knowledge of environments to the language model with a pure synthetic pre-training corpus. Experimental results on tasks including Alchemy, Scene, Tangrams, ProPara and Recipes demonstrate the effectiveness of LEMON: it achieves new state-of-the-art results on Alchemy, Scene, ProPara, and Recipes, and the execution-guided pre-training strategy brings remarkable improvements on all experimental tasks. | https://arxiv.org/abs/2201.08081v2 | https://arxiv.org/pdf/2201.08081v2.pdf | null | [
"Qi Shi",
"Qian Liu",
"Bei Chen",
"Yu Zhang",
"Ting Liu",
"Jian-Guang Lou"
] | [
"Language Modelling"
] | 1,642,636,800,000 | [] | 167,656 |
161,142 | https://paperswithcode.com/paper/generative-adversarial-imitation-learning-3 | null | Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate | Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks. Different from reinforcement learning, GAIL learns both policy and reward function from expert (human) demonstration. Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution. The major difficulty comes from the nonconvex-nonconcave minimax optimization structure. To bridge the gap between practice and theory, we analyze a gradient-based algorithm with alternating updates and establish its sublinear convergence to the globally optimal solution. To the best of our knowledge, our analysis establishes the global optimality and convergence rate of GAIL with neural networks for the first time. | https://proceedings.icml.cc/static/paper_files/icml/2020/3185-Paper.pdf | https://proceedings.icml.cc/static/paper_files/icml/2020/3185-Paper.pdf | ICML 2020 1 | [
"Yufeng Zhang",
"Qi Cai",
"Zhuoran Yang",
"Zhaoran Wang"
] | [
"Imitation Learning",
"reinforcement-learning"
] | 1,577,836,800,000 | [] | 139,975 |
159,260 | https://paperswithcode.com/paper/real-time-cardiac-cine-mri-with-residual | 2008.05044 | Real-Time Cardiac Cine MRI with Residual Convolutional Recurrent Neural Network | Real-time cardiac cine MRI does not require ECG gating in the data acquisition and is more useful for patients who can not hold their breaths or have abnormal heart rhythms. However, to achieve fast image acquisition, real-time cine commonly acquires highly undersampled data, which imposes a significant challenge for MRI image reconstruction. We propose a residual convolutional RNN for real-time cardiac cine reconstruction. To the best of our knowledge, this is the first work applying deep learning approach to Cartesian real-time cardiac cine reconstruction. Based on the evaluation from radiologists, our deep learning model shows superior performance than compressed sensing. | https://arxiv.org/abs/2008.05044v2 | https://arxiv.org/pdf/2008.05044v2.pdf | null | [
"Eric Z. Chen",
"Xiao Chen",
"Jingyuan Lyu",
"Yuan Zheng",
"Terrence Chen",
"Jian Xu",
"Shanhui Sun"
] | [
"Image Reconstruction"
] | 1,597,190,400,000 | [] | 59,831 |
247,350 | https://paperswithcode.com/paper/self-supervised-prime-dual-networks-for-few | null | Self-Supervised Prime-Dual Networks for Few-Shot Image Classification | We construct a prime-dual network structure for few-shot learning which establishes a commutative relationship between the support set and the query set, as well as a new self- supervision constraint for highly effective few-shot learning. Specifically, the prime network performs the forward label prediction of the query set from the support set, while the dual network performs the reverse label prediction of the support set from the query set. This forward and reserve prediction process with commutated support and query sets forms a label prediction loop and establishes a self-supervision constraint between the ground-truth labels and their predicted values. This unique constraint can be used to significantly improve the training performance of few-shot learning through coupled prime and dual network training. It can be also used as an objective function for optimization during the testing stage to refine the query label prediction results. Our extensive experimental results demonstrate that the proposed self-supervised commutative learning and optimization outperforms existing state-of the-art few-shot learning methods by large margins on various benchmark datasets. | https://openreview.net/forum?id=SHnXjI3vTJ | https://openreview.net/pdf?id=SHnXjI3vTJ | null | [
"Wenming Cao",
"Qifan Liu",
"Guang Liu",
"Zhihai He"
] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Image Classification"
] | 1,632,873,600,000 | [] | 83,510 |
95,065 | https://paperswithcode.com/paper/asynchronous-federated-optimization | 1903.03934 | Asynchronous Federated Optimization | Federated learning enables training on a massive number of edge devices. To improve flexibility and scalability, we propose a new asynchronous federated optimization algorithm. We prove that the proposed approach has near-linear convergence to a global optimum, for both strongly convex and a restricted family of non-convex problems. Empirical results show that the proposed algorithm converges quickly and tolerates staleness in various applications. | https://arxiv.org/abs/1903.03934v5 | https://arxiv.org/pdf/1903.03934v5.pdf | null | [
"Cong Xie",
"Sanmi Koyejo",
"Indranil Gupta"
] | [
"Federated Learning"
] | 1,552,176,000,000 | [] | 93,664 |
239,335 | https://paperswithcode.com/paper/robust-optimal-classification-trees-against | 2109.03857 | Robust Optimal Classification Trees Against Adversarial Examples | Decision trees are a popular choice of explainable model, but just like neural networks, they suffer from adversarial examples. Existing algorithms for fitting decision trees robust against adversarial examples are greedy heuristics and lack approximation guarantees. In this paper we propose ROCT, a collection of methods to train decision trees that are optimally robust against user-specified attack models. We show that the min-max optimization problem that arises in adversarial learning can be solved using a single minimization formulation for decision trees with 0-1 loss. We propose such formulations in Mixed-Integer Linear Programming and Maximum Satisfiability, which widely available solvers can optimize. We also present a method that determines the upper bound on adversarial accuracy for any model using bipartite matching. Our experimental results demonstrate that the existing heuristics achieve close to optimal scores while ROCT achieves state-of-the-art scores. | https://arxiv.org/abs/2109.03857v1 | https://arxiv.org/pdf/2109.03857v1.pdf | null | [
"Daniël Vos",
"Sicco Verwer"
] | [
"Classification"
] | 1,631,059,200,000 | [] | 94,702 |
7,104 | https://paperswithcode.com/paper/joint-causal-inference-from-multiple-contexts | 1611.10351 | Joint Causal Inference from Multiple Contexts | The gold standard for discovering causal relations is by means of experimentation. Over the last decades, alternative methods have been proposed that can infer causal relations between variables from certain statistical patterns in purely observational data. We introduce Joint Causal Inference (JCI), a novel approach to causal discovery from multiple data sets from different contexts that elegantly unifies both approaches. JCI is a causal modeling framework rather than a specific algorithm, and it can be implemented using any causal discovery algorithm that can take into account certain background knowledge. JCI can deal with different types of interventions (e.g., perfect, imperfect, stochastic, etc.) in a unified fashion, and does not require knowledge of intervention targets or types in case of interventional data. We explain how several well-known causal discovery algorithms can be seen as addressing special cases of the JCI framework, and we also propose novel implementations that extend existing causal discovery methods for purely observational data to the JCI setting. We evaluate different JCI implementations on synthetic data and on flow cytometry protein expression data and conclude that JCI implementations can considerably outperform state-of-the-art causal discovery algorithms. | https://arxiv.org/abs/1611.10351v6 | https://arxiv.org/pdf/1611.10351v6.pdf | null | [
"Joris M. Mooij",
"Sara Magliacane",
"Tom Claassen"
] | [
"Causal Discovery",
"Causal Inference"
] | 1,480,464,000,000 | [
{
"code_snippet_url": null,
"description": "Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.",
"full_name": "Causal Inference",
"introduced_year": 2000,
"main_collection": null,
"name": "Causal Inference",
"source_title": null,
"source_url": null
}
] | 13,964 |
167,652 | https://paperswithcode.com/paper/exploring-target-driven-image-classification | null | Exploring Target Driven Image Classification | For a given image, traditional supervised image classification using deep neural networks is akin to answering the question 'what object category does this image belong to?'. The model takes in an image as input and produces the most likely label for it. However, there is an alternate approach to arrive at the final answer which we investigate in this paper. We argue that, for any arbitrary category $\mathit{\tilde{y}}$, the composed question 'Is this image of an object category $\mathit{\tilde{y}}$' serves as a viable approach for image classification via deep neural networks. The difference lies in the supplied additional information in the form of the target along with the image. Motivated by the curiosity to unravel the advantages and limitations of the addressed approach, we propose Indicator Neural Networks (INN). It utilizes an image and a label as input and produces a boolean response. INN consists of $2$ encoding components, namely a label encoder and an image encoder, which learn latent representations for labels and images respectively. Predictor, the third component, combines the learnt individual label and image representations to make the final yes/no prediction. The network is trained end-to-end. We perform evaluations on image classification and fine-grained image classification datasets against strong baselines. We also investigate various components of INNs to understand their contribution to the final prediction of the model. Our probing of the modules reveals that, as opposed to its traditionally trained deep counterpart, INN attends to much larger regions of the input image for generating the image features. The generated image feature is further refined by the generated label encoding prior to the final prediction. | https://openreview.net/forum?id=rQ55z6F-sY5 | https://openreview.net/pdf?id=rQ55z6F-sY5 | null | [
"Aditya Singh",
"Alessandro Bay",
"Andrea Mirabile"
] | [
"Classification",
"Fine-Grained Image Classification",
"Classification",
"Image Classification"
] | 1,609,459,200,000 | [] | 12,064 |
232,611 | https://paperswithcode.com/paper/modelling-latent-translations-for-cross | 2107.11353 | Modelling Latent Translations for Cross-Lingual Transfer | While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked in favour of massively multilingual pre-trained encoders. Arguably, this is due to its main limitations: 1) translation errors percolating to the classification phase and 2) the insufficient expressiveness of the maximum-likelihood translation. To remedy this, we propose a new technique that integrates both steps of the traditional pipeline (translation and classification) into a single model, by treating the intermediate translations as a latent random variable. As a result, 1) the neural machine translation system can be fine-tuned with a variant of Minimum Risk Training where the reward is the accuracy of the downstream task classifier. Moreover, 2) multiple samples can be drawn to approximate the expected loss across all possible translations during inference. We evaluate our novel latent translation-based model on a series of multilingual NLU tasks, including commonsense reasoning, paraphrase identification, and natural language inference. We report gains for both zero-shot and few-shot learning setups, up to 2.7 accuracy points on average, which are even more prominent for low-resource languages (e.g., Haitian Creole). Finally, we carry out in-depth analyses comparing different underlying NMT models and assessing the impact of alternative translations on the downstream performance. | https://arxiv.org/abs/2107.11353v1 | https://arxiv.org/pdf/2107.11353v1.pdf | null | [
"Edoardo Maria Ponti",
"Julia Kreutzer",
"Ivan Vulić",
"Siva Reddy"
] | [
"Cross-Lingual Transfer",
"Few-Shot Learning",
"Machine Translation",
"Natural Language Inference",
"Paraphrase Identification"
] | 1,626,998,400,000 | [] | 12,744 |
76,000 | https://paperswithcode.com/paper/the-role-of-visual-saliency-in-the-automation | 1812.11960 | The role of visual saliency in the automation of seismic interpretation | In this paper, we propose a workflow based on SalSi for the detection and
delineation of geological structures such as salt domes. SalSi is a seismic
attribute designed based on the modeling of the human visual system that detects
the salient features and captures the spatial correlation within seismic
volumes for delineating seismic structures. Using SalSi, we can not only
highlight the neighboring regions of salt domes to assist a seismic interpreter
but also delineate such structures using a region growing method and
post-processing. The proposed delineation workflow detects the salt-dome
boundary with very good precision and accuracy. Experimental results show the
effectiveness of the proposed workflow on a real seismic dataset acquired from
the North Sea, F3 block. For the subjective evaluation of the results of
different salt-dome delineation algorithms, we have used a reference salt-dome
boundary interpreted by a geophysicist. For the objective evaluation of
results, we have used five different metrics based on pixels, shape, and
curvedness to establish the effectiveness of the proposed workflow. The
proposed workflow is not only fast but also yields better results as compared
to other salt-dome delineation algorithms and shows a promising potential in
seismic interpretation. | http://arxiv.org/abs/1812.11960v1 | http://arxiv.org/pdf/1812.11960v1.pdf | null | [
"Muhammad Amir Shafiq",
"Tariq Alshawi",
"Zhiling Long",
"Ghassan AlRegib"
] | [
"Seismic Interpretation"
] | 1,546,214,400,000 | [] | 162,321 |
20,407 | https://paperswithcode.com/paper/deepskeleton-learning-multi-task-scale | 1609.03659 | DeepSkeleton: Learning Multi-task Scale-associated Deep Side Outputs for Object Skeleton Extraction in Natural Images | Object skeletons are useful for object representation and object detection.
They are complementary to the object contour, and provide extra information,
such as how object scale (thickness) varies among object parts. But object
skeleton extraction from natural images is very challenging, because it
requires the extractor to be able to capture both local and non-local image
context in order to determine the scale of each skeleton pixel. In this paper,
we present a novel fully convolutional network with multiple scale-associated
side outputs to address this problem. By observing the relationship between the
receptive field sizes of the different layers in the network and the skeleton
scales they can capture, we introduce two scale-associated side outputs to each
stage of the network. The network is trained by multi-task learning, where one
task is skeleton localization to classify whether a pixel is a skeleton pixel
or not, and the other is skeleton scale prediction to regress the scale of each
skeleton pixel. Supervision is imposed at different stages by guiding the
scale-associated side outputs toward the groundtruth skeletons at the
appropriate scales. The responses of the multiple scale-associated side outputs
are then fused in a scale-specific way to detect skeleton pixels using multiple
scales effectively. Our method achieves promising results on two skeleton
extraction datasets, and significantly outperforms other competitors.
Additionally, the usefulness of the obtained skeletons and scales (thickness)
are verified on two object detection applications: Foreground object
segmentation and object proposal detection. | http://arxiv.org/abs/1609.03659v3 | http://arxiv.org/pdf/1609.03659v3.pdf | null | [
"Wei Shen",
"Kai Zhao",
"Yuan Jiang",
"Yan Wang",
"Xiang Bai",
"Alan Yuille"
] | [
"Multi-Task Learning",
"Object Detection",
"Object Detection",
"Semantic Segmentation"
] | 1,473,724,800,000 | [] | 104,583 |
306,464 | https://paperswithcode.com/paper/psp-hdri-a-synthetic-dataset-generator-for | 2207.05025 | PSP-HDRI$+$: A Synthetic Dataset Generator for Pre-Training of Human-Centric Computer Vision Models | We introduce a new synthetic data generator PSP-HDRI$+$ that proves to be a superior pre-training alternative to ImageNet and other large-scale synthetic data counterparts. We demonstrate that pre-training with our synthetic data will yield a more general model that performs better than alternatives even when tested on out-of-distribution (OOD) sets. Furthermore, using ablation studies guided by person keypoint estimation metrics with an off-the-shelf model architecture, we show how to manipulate our synthetic data generator to further improve model performance. | https://arxiv.org/abs/2207.05025v1 | https://arxiv.org/pdf/2207.05025v1.pdf | null | [
"Salehe Erfanian Ebadi",
"Saurav Dhakad",
"Sanjay Vishwakarma",
"Chunpu Wang",
"You-Cyuan Jhang",
"Maciek Chociej",
"Adam Crespi",
"Alex Thaman",
"Sujoy Ganguly"
] | [
"Keypoint Estimation"
] | 1,657,497,600,000 | [] | 172,504 |
220,120 | https://paperswithcode.com/paper/global-context-for-improving-recognition-of | 2105.10156 | Global Context for improving recognition of Online Handwritten Mathematical Expressions | This paper presents a temporal classification method for all three subtasks of symbol segmentation, symbol recognition and relation classification in online handwritten mathematical expressions (HMEs). The classification model is trained by multiple paths of symbols and spatial relations derived from the Symbol Relation Tree (SRT) representation of HMEs. The method benefits from global context of a deep bidirectional Long Short-term Memory network, which learns the temporal classification directly from online handwriting by the Connectionist Temporal Classification loss. To recognize an online HME, a symbol-level parse tree with Context-Free Grammar is constructed, where symbols and spatial relations are obtained from the temporal classification results. We show the effectiveness of the proposed method on the two latest CROHME datasets. | https://arxiv.org/abs/2105.10156v1 | https://arxiv.org/pdf/2105.10156v1.pdf | null | [
"Cuong Tuan Nguyen",
"Thanh-Nghia Truong",
"Hung Tuan Nguyen",
"Masaki Nakagawa"
] | [
"Classification",
"Relation Classification"
] | 1,621,555,200,000 | [] | 150,849 |
123,312 | https://paperswithcode.com/paper/a-model-based-reinforcement-learning-with | 1911.03845 | Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation | Reinforcement learning is well suited for optimizing policies of recommender systems. Current solutions mostly focus on model-free approaches, which require frequent interactions with the real environment, and thus are expensive in model learning. Offline evaluation methods, such as importance sampling, can alleviate such limitations, but usually request a large amount of logged data and do not work well when the action space is large. In this work, we propose a model-based reinforcement learning solution which models user-agent interaction for offline policy learning via a generative adversarial network. To reduce bias in the learned model and policy, we use a discriminator to evaluate the quality of generated data and scale the generated rewards. Our theoretical analysis and empirical evaluations demonstrate the effectiveness of our solution in learning policies from the offline and generated data. | https://arxiv.org/abs/1911.03845v3 | https://arxiv.org/pdf/1911.03845v3.pdf | NeurIPS 2019 12 | [
"Xueying Bai",
"Jian Guan",
"Hongning Wang"
] | [
"Model-based Reinforcement Learning",
"Recommendation Systems",
"reinforcement-learning"
] | 1,573,344,000,000 | [] | 128,564 |
285,222 | https://paperswithcode.com/paper/amcad-adaptive-mixed-curvature-representation | 2203.14683 | AMCAD: Adaptive Mixed-Curvature Representation based Advertisement Retrieval System | Graph embedding based retrieval has become one of the most popular techniques in the information retrieval community and search engine industry. The classical paradigm mainly relies on the flat Euclidean geometry. In recent years, hyperbolic (negative curvature) and spherical (positive curvature) representation methods have shown their superiority to capture hierarchical and cyclic data structures respectively. However, in industrial scenarios such as e-commerce sponsored search platforms, the large-scale heterogeneous query-item-advertisement interaction graphs often have multiple structures coexisting. Existing methods either only consider a single geometry space, or combine several spaces manually, which are incapable and inflexible to model the complexity and heterogeneity in the real scenario. To tackle this challenge, we present a web-scale Adaptive Mixed-Curvature ADvertisement retrieval system (AMCAD) to automatically capture the complex and heterogeneous graph structures in non-Euclidean spaces. Specifically, entities are represented in adaptive mixed-curvature spaces, where the types and curvatures of the subspaces are trained to be optimal combinations. Besides, an attentive edge-wise space projector is designed to model the similarities between heterogeneous nodes according to local graph structures and the relation types. Moreover, to deploy AMCAD in Taobao, one of the largest ecommerce platforms with hundreds of million users, we design an efficient two-layer online retrieval framework for the task of graph based advertisement retrieval. Extensive evaluations on real-world datasets and A/B tests on online traffic are conducted to illustrate the effectiveness of the proposed system. | https://arxiv.org/abs/2203.14683v1 | https://arxiv.org/pdf/2203.14683v1.pdf | null | [
"Zhirong Xu",
"Shiyang Wen",
"Junshan Wang",
"Guojun Liu",
"Liang Wang",
"Zhi Yang",
"Lei Ding",
"Yan Zhang",
"Di Zhang",
"Jian Xu",
"Bo Zheng"
] | [
"Graph Embedding",
"Information Retrieval"
] | 1,648,425,600,000 | [] | 14,583 |
136,623 | https://paperswithcode.com/paper/restore-from-restored-single-image-denoising | 2003.04721 | Restore from Restored: Single Image Denoising with Pseudo Clean Image | In this study, we propose a simple and effective fine-tuning algorithm called "restore-from-restored", which can greatly enhance the performance of fully pre-trained image denoising networks. Many supervised denoising approaches can produce satisfactory results using large external training datasets. However, these methods have limitations in using internal information available in a given test image. By contrast, recent self-supervised approaches can remove noise in the input image by utilizing information from the specific test input. However, such methods show relatively lower performance on known noise types such as Gaussian noise compared to supervised methods. Thus, to combine external and internal information, we fine-tune the fully pre-trained denoiser using pseudo training set at test time. By exploiting internal self-similar patches (i.e., patch-recurrence), the baseline network can be adapted to the given specific input image. We demonstrate that our method can be easily employed on top of the state-of-the-art denoising networks and further improve the performance on numerous denoising benchmark datasets including real noisy images. | https://arxiv.org/abs/2003.04721v3 | https://arxiv.org/pdf/2003.04721v3.pdf | null | [
"Seunghwan Lee",
"Dongkyu Lee",
"Donghyeon Cho",
"Jiwon Kim",
"Tae Hyun Kim"
] | [
"Denoising",
"Image Denoising"
] | 1,583,712,000,000 | [] | 20,144 |
317,412 | https://paperswithcode.com/paper/tandem3d-active-tactile-exploration-for-3d | 2209.08772 | TANDEM3D: Active Tactile Exploration for 3D Object Recognition | Tactile recognition of 3D objects remains a challenging task. Compared to 2D shapes, the complex geometry of 3D surfaces requires richer tactile signals, more dexterous actions, and more advanced encoding techniques. In this work, we propose TANDEM3D, a method that applies a co-training framework for exploration and decision making to 3D object recognition with tactile signals. Starting with our previous work, which introduced a co-training paradigm for 2D recognition problems, we introduce a number of advances that enable us to scale up to 3D. TANDEM3D is based on a novel encoder that builds 3D object representation from contact positions and normals using PointNet++. Furthermore, by enabling 6DOF movement, TANDEM3D explores and collects discriminative touch information with high efficiency. Our method is trained entirely in simulation and validated with real-world experiments. Compared to state-of-the-art baselines, TANDEM3D achieves higher accuracy and a lower number of actions in recognizing 3D objects and is also shown to be more robust to different types and amounts of sensor noise. Video is available at https://jxu.ai/tandem3d. | https://arxiv.org/abs/2209.08772v1 | https://arxiv.org/pdf/2209.08772v1.pdf | null | [
"Jingxi Xu",
"Han Lin",
"Shuran Song",
"Matei Ciocarlie"
] | [
"3D Object Recognition",
"Object Recognition"
] | 1,663,545,600,000 | [] | 85,780 |
134,045 | https://paperswithcode.com/paper/neural-arbitrary-style-transfer-for-portrait | 2002.07643 | Neural arbitrary style transfer for portrait images using the attention mechanism | Arbitrary style transfer is the task of synthesizing an image that has never been seen before, using two given images: a content image and a style image. The content image forms the structure, the basic geometric lines and shapes of the resulting image, while the style image sets the color and texture of the result. The word "arbitrary" in this context means the absence of any one pre-learned style. So, for example, convolutional neural networks capable of transferring a new style only after training or retraining on a new amount of data are not considered to solve such a problem, while networks based on the attention mechanism, which are capable of performing such a transformation without retraining, are. An original image can be, for example, a photograph, and a style image can be a painting by a famous artist. The resulting image in this case will be the scene depicted in the original photograph, made in the style of this painting. Recent arbitrary style transfer algorithms make it possible to achieve good results in this task; however, in processing portrait images of people, the result of such algorithms is either unacceptable due to excessive distortion of facial features, or weakly expressed, not bearing the characteristic features of the style image. In this paper, we consider an approach to solving this problem using a combined architecture of deep neural networks with an attention mechanism that transfers style based on the contents of a particular image segment: with a clear predominance of style over form for the background part of the image, and with the prevalence of content over form in the part of the image that directly contains the person. | https://arxiv.org/abs/2002.07643v1 | https://arxiv.org/pdf/2002.07643v1.pdf | null | [
"S. A. Berezin",
"V. M. Volkova"
] | [
"Style Transfer"
] | 1,581,897,600,000 | [] | 6,661 |
118,458 | https://paperswithcode.com/paper/remind-your-neural-network-to-prevent | 1910.02509 | REMIND Your Neural Network to Prevent Catastrophic Forgetting | People learn throughout life. However, incrementally updating conventional neural networks leads to catastrophic forgetting. A common remedy is replay, which is inspired by how the brain consolidates memory. Replay involves fine-tuning a network on a mixture of new and old instances. While there is neuroscientific evidence that the brain replays compressed memories, existing methods for convolutional networks replay raw images. Here, we propose REMIND, a brain-inspired approach that enables efficient replay with compressed representations. REMIND is trained in an online manner, meaning it learns one example at a time, which is closer to how humans learn. Under the same constraints, REMIND outperforms other methods for incremental class learning on the ImageNet ILSVRC-2012 dataset. We probe REMIND's robustness to data ordering schemes known to induce catastrophic forgetting. We demonstrate REMIND's generality by pioneering online learning for Visual Question Answering (VQA). | https://arxiv.org/abs/1910.02509v3 | https://arxiv.org/pdf/1910.02509v3.pdf | ECCV 2020 8 | [
"Tyler L. Hayes",
"Kushal Kafle",
"Robik Shrestha",
"Manoj Acharya",
"Christopher Kanan"
] | [
"online learning",
"Quantization",
"Question Answering",
"Visual Question Answering",
"Visual Question Answering"
] | 1,570,320,000,000 | [] | 21,081 |
70,580 | https://paperswithcode.com/paper/probabilistic-neural-programmed-networks-for | null | Probabilistic Neural Programmed Networks for Scene Generation | In this paper we address the text to scene image generation problem. Generative models that capture the variability in complicated scenes containing rich semantics is a grand goal of image generation. Complicated scene images contain rich visual elements, compositional visual concepts, and complicated relations between objects. Generative models, as an analysis-by-synthesis process, should encompass the following three core components: 1) the generation process that composes the scene; 2) what are the primitive visual elements and how are they composed; 3) the rendering of abstract concepts into their pixel-level realizations. We propose PNP-Net, a variational auto-encoder framework that addresses these three challenges: it flexibly composes images with a dynamic network structure, learns a set of distribution transformers that can compose distributions based on semantics, and decodes samples from these distributions into realistic images. | http://papers.nips.cc/paper/7658-probabilistic-neural-programmed-networks-for-scene-generation | http://papers.nips.cc/paper/7658-probabilistic-neural-programmed-networks-for-scene-generation.pdf | NeurIPS 2018 12 | [
"Zhiwei Deng",
"Jiacheng Chen",
"Yifang Fu",
"Greg Mori"
] | [
"Image Generation",
"Scene Generation"
] | 1,543,622,400,000 | [] | 161,611 |
241,993 | https://paperswithcode.com/paper/iwave3d-end-to-end-brain-image-compression | 2109.08942 | iWave3D: End-to-end Brain Image Compression with Trainable 3-D Wavelet Transform | With the rapid development of whole brain imaging technology, a large number of brain images have been produced, which puts forward a great demand for efficient brain image compression methods. At present, the most commonly used compression methods are all based on 3-D wavelet transform, such as JP3D. However, traditional 3-D wavelet transforms are designed manually with certain assumptions on the signal, but brain images are not as ideal as assumed. What's more, they are not directly optimized for compression task. In order to solve these problems, we propose a trainable 3-D wavelet transform based on the lifting scheme, in which the predict and update steps are replaced by 3-D convolutional neural networks. Then the proposed transform is embedded into an end-to-end compression scheme called iWave3D, which is trained with a large amount of brain images to directly minimize the rate-distortion loss. Experimental results demonstrate that our method outperforms JP3D significantly by 2.012 dB in terms of average BD-PSNR. | https://arxiv.org/abs/2109.08942v2 | https://arxiv.org/pdf/2109.08942v2.pdf | null | [
"Dongmei Xue",
"Haichuan Ma",
"Li Li",
"Dong Liu",
"Zhiwei Xiong"
] | [
"Image Compression"
] | 1,631,923,200,000 | [] | 79,335 |
15,536 | https://paperswithcode.com/paper/regret-minimization-for-partially-observable | 1710.11424 | Regret Minimization for Partially Observable Deep Reinforcement Learning | Deep reinforcement learning algorithms that estimate state and state-action
value functions have been shown to be effective in a variety of challenging
domains, including learning control strategies from raw image pixels. However,
algorithms that estimate state and state-action value functions typically
assume a fully observed state and must compensate for partial observations by
using finite length observation histories or recurrent networks. In this work,
we propose a new deep reinforcement learning algorithm based on counterfactual
regret minimization that iteratively updates an approximation to an
advantage-like function and is robust to partially observed state. We
demonstrate that this new algorithm can substantially outperform strong
baseline methods on several partially observed reinforcement learning tasks:
learning first-person 3D navigation in Doom and Minecraft, and acting in the
presence of partially observed objects in Doom and Pong. | http://arxiv.org/abs/1710.11424v2 | http://arxiv.org/pdf/1710.11424v2.pdf | ICML 2018 7 | [
"Peter Jin",
"Kurt Keutzer",
"Sergey Levine"
] | [
"reinforcement-learning"
] | 1,509,408,000,000 | [] | 119,298 |
280,575 | https://paperswithcode.com/paper/combining-reinforcement-learning-and-optimal | 2203.00903 | Combining Reinforcement Learning and Optimal Transport for the Traveling Salesman Problem | The traveling salesman problem is a fundamental combinatorial optimization problem with strong exact algorithms. However, as problems scale up, these exact algorithms fail to provide a solution in a reasonable time. To resolve this, current works look at utilizing deep learning to construct reasonable solutions. Such efforts have been very successful, but tend to be slow and compute intensive. This paper exemplifies the integration of entropic regularized optimal transport techniques as a layer in a deep reinforcement learning network. We show that we can construct a model capable of learning without supervision and inferences significantly faster than current autoregressive approaches. We also empirically evaluate the benefits of including optimal transport algorithms within deep learning models to enforce assignment constraints during end-to-end training. | https://arxiv.org/abs/2203.00903v1 | https://arxiv.org/pdf/2203.00903v1.pdf | null | [
"Yong Liang Goh",
"Wee Sun Lee",
"Xavier Bresson",
"Thomas Laurent",
"Nicholas Lim"
] | [
"Combinatorial Optimization",
"reinforcement-learning",
"Traveling Salesman Problem"
] | 1,646,179,200,000 | [] | 78,131 |
282,790 | https://paperswithcode.com/paper/bibert-accurate-fully-binarized-bert-1 | 2203.06390 | BiBERT: Accurate Fully Binarized BERT | The large pre-trained BERT has achieved remarkable performance on Natural Language Processing (NLP) tasks but is also computation and memory expensive. As one of the powerful compression approaches, binarization extremely reduces the computation and memory consumption by utilizing 1-bit parameters and bitwise operations. Unfortunately, the full binarization of BERT (i.e., 1-bit weight, embedding, and activation) usually suffer a significant performance drop, and there is rare study addressing this problem. In this paper, with the theoretical justification and empirical analysis, we identify that the severe performance drop can be mainly attributed to the information degradation and optimization direction mismatch respectively in the forward and backward propagation, and propose BiBERT, an accurate fully binarized BERT, to eliminate the performance bottlenecks. Specifically, BiBERT introduces an efficient Bi-Attention structure for maximizing representation information statistically and a Direction-Matching Distillation (DMD) scheme to optimize the full binarized BERT accurately. Extensive experiments show that BiBERT outperforms both the straightforward baseline and existing state-of-the-art quantized BERTs with ultra-low bit activations by convincing margins on the NLP benchmark. As the first fully binarized BERT, our method yields impressive 56.3 times and 31.2 times saving on FLOPs and model size, demonstrating the vast advantages and potential of the fully binarized BERT model in real-world resource-constrained scenarios. | https://arxiv.org/abs/2203.06390v1 | https://arxiv.org/pdf/2203.06390v1.pdf | ICLR 2022 4 | [
"Haotong Qin",
"Yifu Ding",
"Mingyuan Zhang",
"Qinghua Yan",
"Aishan Liu",
"Qingqing Dang",
"Ziwei Liu",
"Xianglong Liu"
] | [
"Binarization"
] | 1,647,043,200,000 | [
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "Bi-attention employs the attention-in-attention (AiA) mechanism to capture second-order statistical information: the outer point-wise channel attention vectors are computed from the output of the inner channel attention.",
"full_name": "Bilinear Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Bi-attention",
"source_title": "Bilinear Attention Networks for Person Retrieval",
"source_url": "http://openaccess.thecvf.com/content_ICCV_2019/html/Fang_Bilinear_Attention_Networks_for_Person_Retrieval_ICCV_2019_paper.html"
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
}
] | 20,248 |
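The method entries attached to the BiBERT record above (scaled dot-product attention, softmax, attention dropout, residual connections, layer normalization, GELU, multi-head attention, dense connections) describe the standard full-precision Transformer components that BiBERT binarizes. The sketch below is a minimal full-precision PyTorch illustration of those components only; it is not BiBERT's Bi-Attention or its binarized layers, and all module and parameter names are illustrative.

```python
# Minimal full-precision sketch of the Transformer pieces described above.
# NOT BiBERT's binarized Bi-Attention; names and hyperparameters are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def scaled_dot_product_attention(q, k, v, dropout_p=0.1, training=True):
    # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = F.softmax(scores, dim=-1)
    weights = F.dropout(weights, p=dropout_p, training=training)  # attention dropout
    return weights @ v


class EncoderBlock(nn.Module):
    """One pre-norm Transformer encoder block (illustrative)."""

    def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # dense (fully connected) projections
        self.out = nn.Linear(d_model, d_model)
        self.norm1 = nn.LayerNorm(d_model)           # layer normalization
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(                     # GELU feed-forward
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        b, t, d = x.shape
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        # split into heads: (b, n_heads, t, d_head)
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        attn = scaled_dot_product_attention(split(q), split(k), split(v),
                                            training=self.training)
        attn = attn.transpose(1, 2).reshape(b, t, d)
        x = x + self.dropout(self.out(attn))          # residual connection
        x = x + self.dropout(self.ff(self.norm2(x)))  # residual connection
        return x


if __name__ == "__main__":
    x = torch.randn(2, 16, 768)
    print(EncoderBlock()(x).shape)  # torch.Size([2, 16, 768])
```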
242,268 | https://paperswithcode.com/paper/homography-augumented-momentum-constrastive | 2109.10329 | Homography augmented momentum contrastive learning for SAR image retrieval | Deep learning-based image retrieval has been emphasized in computer vision. Representation embeddings extracted by deep neural networks (DNNs) not only aim to capture the semantic information of an image, but also make large-scale image retrieval tasks manageable. In this work, we propose a deep learning-based image retrieval approach using homography transformation augmented contrastive learning to perform large-scale synthetic aperture radar (SAR) image search tasks. Moreover, we propose a training method for the DNNs induced by contrastive learning that does not require any labeling procedure. This may make large-scale datasets tractable with relative ease. Finally, we verify the performance of the proposed method by conducting experiments on the polarimetric SAR image datasets. | https://arxiv.org/abs/2109.10329v1 | https://arxiv.org/pdf/2109.10329v1.pdf | null | [
"Seonho Park",
"Maciej Rysz",
"Kathleen M. Dipple",
"Panos M. Pardalos"
] | [
"Contrastive Learning",
"Image Retrieval"
] | 1,632,182,400,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] | 31,201 |
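The SAR retrieval record above lists "Contrastive Learning" without a description. A common instantiation is an InfoNCE-style loss between two views of the same image; for this paper the second view would presumably come from a random homography augmentation. The sketch below assumes generic precomputed embeddings and is not the authors' exact momentum-based training scheme; the function name and temperature are illustrative.

```python
# Generic InfoNCE-style contrastive loss between two embeddings per image.
# The augmentation that produces the second view (a random homography in the
# paper above) is assumed to happen upstream; everything here is illustrative.
import torch
import torch.nn.functional as F


def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07):
    """z_a, z_b: (batch, dim) embeddings of two views of the same images."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature            # (batch, batch) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # each view should be most similar to its own counterpart (the diagonal)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2).item())
```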
267,610 | https://paperswithcode.com/paper/label-free-virtual-her2-immunohistochemical | 2112.05240 | Label-free virtual HER2 immunohistochemical staining of breast tissue using deep learning | The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs) to reveal that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit a comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in laboratory, and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflow. | https://arxiv.org/abs/2112.05240v1 | https://arxiv.org/pdf/2112.05240v1.pdf | null | [
"Bijie Bai",
"Hongda Wang",
"Yuzhu Li",
"Kevin De Haan",
"Francesco Colonnese",
"Yujie Wan",
"Jingyi Zuo",
"Ngan B. Doan",
"Xiaoran Zhang",
"Yijie Zhang",
"Jingxi Li",
"Wenjie Dong",
"Morgan Angus Darrow",
"Elham Kamangar",
"Han Sung Lee",
"Yair Rivenson",
"Aydogan Ozcan"
] | [
"whole slide images"
] | 1,638,921,600,000 | [] | 190,977 |
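The virtual HER2 staining record above describes a conditional GAN that maps autofluorescence images to bright-field-equivalent stained images. As a rough reference, a pix2pix-style conditional generator objective combines an adversarial term with a pixel reconstruction term; the sketch below is only that generic objective under assumed stand-in networks, not the paper's actual losses, weighting, or architecture.

```python
# Pix2pix-style conditional GAN generator objective (illustrative only).
# The generator, discriminator, and l1_weight below are hypothetical stand-ins.
import torch
import torch.nn.functional as F


def generator_loss(discriminator, x_autofluo, y_stained, generator, l1_weight=100.0):
    """x_autofluo: label-free input images, y_stained: target IHC-stained images."""
    y_fake = generator(x_autofluo)
    # adversarial term: a conditional discriminator scores (input, output) pairs
    d_fake = discriminator(torch.cat([x_autofluo, y_fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # reconstruction term keeps the virtual stain close to the chemically stained target
    rec = F.l1_loss(y_fake, y_stained)
    return adv + l1_weight * rec


if __name__ == "__main__":
    gen = torch.nn.Conv2d(1, 3, 1)      # stand-in "generator"
    disc = torch.nn.Conv2d(4, 1, 1)     # stand-in conditional "discriminator"
    x = torch.rand(2, 1, 32, 32)        # autofluorescence input
    y = torch.rand(2, 3, 32, 32)        # stained target
    print(generator_loss(disc, x, y, gen).item())
```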
258,026 | https://paperswithcode.com/paper/towards-domain-independent-and-real-time | 2111.06195 | Towards Domain-Independent and Real-Time Gesture Recognition Using mmWave Signal | Human gesture recognition using millimeter-wave (mmWave) signals provides attractive applications including smart home and in-car interfaces. While existing works achieve promising performance under controlled settings, practical applications are still limited due to the need of intensive data collection, extra training efforts when adapting to new domains, and poor performance for real-time recognition. In this paper, we propose DI-Gesture, a domain-independent and real-time mmWave gesture recognition system. Specifically, we first derive signal variations corresponding to human gestures with spatial-temporal processing. To enhance the robustness of the system and reduce data collecting efforts, we design a data augmentation framework for mmWave signals based on correlations between signal patterns and gesture variations. Furthermore, a spatial-temporal gesture segmentation algorithm is employed for real-time recognition. Extensive experimental results show DI-Gesture achieves an average accuracy of 97.92\%, 99.18\%, and 98.76\% for new users, environments, and locations, respectively. We also evaluate DI-Gesture in challenging scenarios like real-time recognition and sensing at extreme angles, all of which demonstrates the superior robustness and effectiveness of our system. | https://arxiv.org/abs/2111.06195v2 | https://arxiv.org/pdf/2111.06195v2.pdf | null | [
"Yadong Li",
"Dongheng Zhang",
"Jinbo Chen",
"Jinwei Wan",
"Dong Zhang",
"Yang Hu",
"Qibin Sun",
"Yan Chen"
] | [
"Data Augmentation",
"Gesture Recognition"
] | 1,636,588,800,000 | [] | 118,940 |
239,023 | https://paperswithcode.com/paper/rare-words-degenerate-all-words | 2109.03127 | Rare Tokens Degenerate All Tokens: Improving Neural Text Generation via Adaptive Gradient Gating for Rare Token Embeddings | Recent studies have determined that the learned token embeddings of large-scale neural language models are degenerated to be anisotropic with a narrow-cone shape. This phenomenon, called the representation degeneration problem, facilitates an increase in the overall similarity between token embeddings that negatively affects the performance of the models. Although existing methods that address the degeneration problem based on observations of the phenomena it triggers improve the performance of text generation, the training dynamics of token embeddings behind the degeneration problem are still unexplored. In this study, we analyze the training dynamics of the token embeddings, focusing on rare token embeddings. We demonstrate that a specific part of the gradient for rare token embeddings is the key cause of the degeneration problem for all tokens during the training stage. Based on this analysis, we propose a novel method called adaptive gradient gating (AGG). AGG addresses the degeneration problem by gating the specific part of the gradient for rare token embeddings. Experimental results from language modeling, word similarity, and machine translation tasks quantitatively and qualitatively verify the effectiveness of AGG. | https://arxiv.org/abs/2109.03127v3 | https://arxiv.org/pdf/2109.03127v3.pdf | ACL 2022 5 | [
"Sangwon Yu",
"Jongyoon Song",
"Heeseung Kim",
"Seong-min Lee",
"Woo-Jong Ryu",
"Sungroh Yoon"
] | [
"Language Modelling",
"Machine Translation",
"Text Generation",
"Word Embeddings",
"Word Similarity"
] | 1,630,972,800,000 | [] | 113,938 |
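The AGG record above describes gating part of the gradient that flows into rare token embeddings. The sketch below only illustrates the general mechanism of intercepting and rescaling embedding gradients for rare-token rows via a backward hook; the rare-token mask, the gate value, and the uniform rescaling are hypothetical placeholders and do not reproduce the paper's actual gating criterion.

```python
# Illustrative gradient gating on rare-token rows of an embedding matrix.
# NOT the paper's AGG: the mask, gate value, and rescaling rule are placeholders.
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
embedding = nn.Embedding(vocab_size, dim)

# assume token frequencies were counted beforehand; mark the bottom 20% as rare
token_freq = torch.randint(1, 1000, (vocab_size,))
rare_mask = (token_freq < token_freq.float().quantile(0.2)).float().unsqueeze(1)

def gate_rare_rows(grad, gate=0.1):
    # keep gradients for frequent tokens, shrink them for rare-token rows
    return grad * (1.0 - rare_mask) + grad * rare_mask * gate

embedding.weight.register_hook(gate_rare_rows)

tokens = torch.randint(0, vocab_size, (4, 16))
loss = embedding(tokens).pow(2).mean()
loss.backward()  # the hook rescales rare-token rows of embedding.weight.grad
```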
139,001 | https://paperswithcode.com/paper/am-mobilenet1d-a-portable-model-for-speaker | 2004.00132 | AM-MobileNet1D: A Portable Model for Speaker Recognition | Speaker Recognition and Speaker Identification are challenging tasks with essential applications such as automation, authentication, and security. Deep learning approaches like SincNet and AM-SincNet have presented great results on these tasks. This promising performance took the models to real-world applications that are becoming fundamentally end-user driven and mostly mobile. Mobile computation requires applications with reduced storage size that are not processing- and memory-intensive and that consume energy efficiently. Deep learning approaches, in contrast, are usually energy-expensive and demand storage, processing power, and memory. To address this demand, we propose a portable model called Additive Margin MobileNet1D (AM-MobileNet1D) for Speaker Identification on mobile devices. We evaluated the proposed approach on the TIMIT and MIT datasets, obtaining equivalent or better performance than the baseline methods. Additionally, the proposed model takes only 11.6 megabytes of disk storage against 91.2 for the SincNet and AM-SincNet architectures, making the model seven times faster, with eight times fewer parameters. | https://arxiv.org/abs/2004.00132v1 | https://arxiv.org/pdf/2004.00132v1.pdf | null | [
"João Antônio Chagas Nunes",
"David Macêdo",
"Cleber Zanchettin"
] | [
"Speaker Identification",
"Speaker Recognition"
] | 1,585,612,800,000 | [] | 19,151 |
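The "AM" in AM-MobileNet1D refers to an additive margin softmax objective for speaker classification. Below is a generic AM-Softmax sketch (margin m subtracted from the target-class cosine, logits scaled by s); the scale, margin, dimensions, and class count are illustrative and not necessarily those used in the paper.

```python
# Generic additive margin softmax (AM-Softmax) loss; hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AMSoftmaxLoss(nn.Module):
    def __init__(self, embedding_dim, n_speakers, s=30.0, m=0.35):
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.randn(n_speakers, embedding_dim))

    def forward(self, embeddings, labels):
        # cosine similarity between L2-normalized embeddings and class weights
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # subtract the margin only from the target-class cosine
        margin = torch.zeros_like(cos).scatter_(1, labels.unsqueeze(1), self.m)
        return F.cross_entropy(self.s * (cos - margin), labels)


if __name__ == "__main__":
    criterion = AMSoftmaxLoss(embedding_dim=128, n_speakers=462)
    emb, y = torch.randn(8, 128), torch.randint(0, 462, (8,))
    print(criterion(emb, y).item())
```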
243,168 | https://paperswithcode.com/paper/communication-efficient-distributed-linear | 2109.12400 | Communication-Efficient Distributed Linear and Deep Generalized Canonical Correlation Analysis | Classic and deep learning-based generalized canonical correlation analysis (GCCA) algorithms seek low-dimensional common representations of data entities from multiple ``views'' (e.g., audio and image) using linear transformations and neural networks, respectively. When the views are acquired and stored at different locations, organizations and edge devices, computing GCCA in a distributed, parallel and efficient manner is well-motivated. However, existing distributed GCCA algorithms may incur prohibitively high communication overhead. This work puts forth a communication-efficient distributed framework for both linear and deep GCCA under the maximum variance (MAX-VAR) paradigm. The overhead issue is addressed by aggressively compressing (via quantization) the information exchanged between the distributed computing agents and a central controller. Compared to the unquantized version, the proposed algorithm consistently reduces the communication overhead by about $90\%$ with virtually no loss in accuracy and convergence speed. Rigorous convergence analyses are also presented -- a nontrivial effort since no existing generic result from quantized distributed optimization covers the special problem structure of GCCA. Our result shows that the proposed algorithms for both linear and deep GCCA converge to critical points at a sublinear rate, even under heavy quantization and stochastic approximations. In addition, it is shown that in the linear MAX-VAR case, the quantized algorithm approaches a {\it global optimum} at a {\it geometric} rate -- if the computing agents' updates meet a certain accuracy level. Synthetic and real data experiments are used to showcase the effectiveness of the proposed approach. | https://arxiv.org/abs/2109.12400v1 | https://arxiv.org/pdf/2109.12400v1.pdf | null | [
"Sagar Shrestha",
"Xiao Fu"
] | [
"Distributed Computing",
"Distributed Optimization",
"Quantization"
] | 1,632,528,000,000 | [] | 60,197 |
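The overhead reduction described in the GCCA record above comes from quantizing the messages exchanged with the central controller. The sketch below is a generic b-bit stochastic uniform quantizer (unbiased in expectation thanks to random rounding); it illustrates the general idea of compressing exchanged updates, not the paper's specific quantization scheme or convergence machinery.

```python
# Generic stochastic uniform quantizer for compressing exchanged updates (illustrative).
import torch


def stochastic_quantize(x: torch.Tensor, bits: int = 4):
    """Quantize x to 2**bits levels between its min and max, with random rounding."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else torch.tensor(1.0)
    normalized = (x - lo) / scale                 # values now lie in [0, levels]
    lower = normalized.floor()
    prob_up = normalized - lower                  # probability of rounding up
    q = lower + torch.bernoulli(prob_up)          # unbiased stochastic rounding
    return q.to(torch.uint8), lo, scale           # integers + metadata to transmit


def dequantize(q, lo, scale):
    return q.float() * scale + lo


if __name__ == "__main__":
    update = torch.randn(5, 5)
    q, lo, scale = stochastic_quantize(update, bits=4)
    print((dequantize(q, lo, scale) - update).abs().max())  # small quantization error
```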
110,441 | https://paperswithcode.com/paper/memc-net-motion-estimation-and-motion-1 | null | MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Frame Interpolation and Enhancement | Motion estimation (ME) and motion compensation (MC) have dominated classical video frame interpolation systems over the past decades. Recently, convolutional neural networks have set up a new data-driven paradigm for frame interpolation. However, existing learning-based methods typically estimate only one of the ME and MC building blocks, resulting in limited performance in both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation. A novel adaptive warping layer is proposed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable such that both the flow and kernel estimation networks can be optimized jointly. Our method benefits from the ME and MC model-driven architecture while avoiding the conventional hand-crafted design by training on a large amount of video data. Compared to existing methods, our approach is computationally efficient and able to generate more visually appealing results. Moreover, our MEMC architecture is a general framework, which can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against the state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets. | https://arxiv.org/abs/1810.08768 | https://arxiv.org/pdf/1810.08768 | arXiv 2018 10 | [
"Wenbo Bao",
"Wei-Sheng Lai",
"Xiaoyun Zhang",
"Zhiyong Gao",
"Ming-Hsuan Yang"
] | [
"Denoising",
"Motion Compensation",
"Motion Estimation",
"Optical Flow Estimation",
"Super-Resolution",
"Video Enhancement",
"Video Frame Interpolation"
] | 1,539,993,600,000 | [] | 2,598 |
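The MEMC-Net record above describes an adaptive warping layer that combines optical flow with learned interpolation kernels. A common differentiable building block for the flow part is backward warping of a frame by a flow field via grid sampling; the sketch below shows only that generic flow-warping piece (the kernel-weighted sampling of MEMC-Net is not reproduced here), and the function name is illustrative.

```python
# Differentiable backward warping of a frame by an optical flow field (illustrative).
import torch
import torch.nn.functional as F


def warp_by_flow(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """frame: (B, C, H, W), flow: (B, 2, H, W) in pixels; returns the warped frame."""
    b, _, h, w = frame.shape
    # base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(frame.device)  # (1,2,H,W)
    coords = grid + flow
    # normalize to [-1, 1] as required by grid_sample
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, norm_grid, align_corners=True)


if __name__ == "__main__":
    frame = torch.rand(1, 3, 64, 64)
    flow = torch.zeros(1, 2, 64, 64)  # zero flow -> identity warp
    print(torch.allclose(warp_by_flow(frame, flow), frame, atol=1e-5))
```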