uid (int64) | paper_url (string) | arxiv_id (string, nullable) | title (string) | abstract (string) | url_abs (string) | url_pdf (string) | proceeding (string, nullable) | authors (sequence) | tasks (sequence) | date (float64, nullable) | methods (list) | __index_level_0__ (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
80,849 | https://paperswithcode.com/paper/a-comprehensive-exploration-on-wikisql-with | 1902.01069 | A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization | We present SQLova, the first Natural-language-to-SQL (NL2SQL) model to achieve human performance in WikiSQL dataset. We revisit and discuss diverse popular methods in NL2SQL literature, take a full advantage of BERT (Devlin et al., 2018) through an effective table contextualization method, and coherently combine them, outperforming the previous state of the art by 8.2% and 2.5% in logical form and execution accuracy, respectively. We particularly note that BERT with a seq2seq decoder leads to a poor performance in the task, indicating the importance of a careful design when using such large pretrained models. We also provide a comprehensive analysis on the dataset and our model, which can be helpful for designing future NL2SQL datasets and models. We especially show that our model's performance is near the upper bound in WikiSQL, where we observe that a large portion of the evaluation errors are due to wrong annotations, and our model is already exceeding human performance by 1.3% in execution accuracy. | https://arxiv.org/abs/1902.01069v2 | https://arxiv.org/pdf/1902.01069v2.pdf | null | [
"Wonseok Hwang",
"Jinyeong Yim",
"Seunghyun Park",
"Minjoon Seo"
] | [
"Semantic Parsing"
] | 1,549,238,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector representation (a context vector), and then to use another LSTM, the *decoder*, to extract the output sequence\r\nfrom that vector. The second LSTM is essentially a recurrent neural network language model except that it is conditioned on the input sequence.\r\n\r\n(Note that this page refers to the original seq2seq not general sequence-to-sequence models)",
"full_name": "Sequence to Sequence",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Sequence To Sequence Models",
"parent": null
},
"name": "Seq2Seq",
"source_title": "Sequence to Sequence Learning with Neural Networks",
"source_url": "http://arxiv.org/abs/1409.3215v3"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
}
] | 104,947 |
308,521 | https://paperswithcode.com/paper/knowledge-enhanced-black-box-attacks-for | 2207.10307 | Knowledge-enhanced Black-box Attacks for Recommendations | Recent studies have shown that deep neural networks-based recommender systems are vulnerable to adversarial attacks, where attackers can inject carefully crafted fake user profiles (i.e., a set of items that fake users have interacted with) into a target recommender system to achieve malicious purposes, such as promote or demote a set of target items. Due to the security and privacy concerns, it is more practical to perform adversarial attacks under the black-box setting, where the architecture/parameters and training data of target systems cannot be easily accessed by attackers. However, generating high-quality fake user profiles under black-box setting is rather challenging with limited resources to target systems. To address this challenge, in this work, we introduce a novel strategy by leveraging items' attribute information (i.e., items' knowledge graph), which can be publicly accessible and provide rich auxiliary knowledge to enhance the generation of fake user profiles. More specifically, we propose a knowledge graph-enhanced black-box attacking framework (KGAttack) to effectively learn attacking policies through deep reinforcement learning techniques, in which knowledge graph is seamlessly integrated into hierarchical policy networks to generate fake user profiles for performing adversarial black-box attacks. Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework under the black-box setting. | https://arxiv.org/abs/2207.10307v1 | https://arxiv.org/pdf/2207.10307v1.pdf | null | [
"Jingfan Chen",
"Wenqi Fan",
"Guanghui Zhu",
"Xiangyu Zhao",
"Chunfeng Yuan",
"Qing Li",
"Yihua Huang"
] | [
"Recommendation Systems"
] | 1,658,361,600,000 | [] | 183,588 |
228,925 | https://paperswithcode.com/paper/three-birds-with-one-stone-multi-task | null | Three Birds with One Stone: Multi-Task Temporal Action Detection via Recycling Temporal Annotations | Temporal action detection on unconstrained videos has seen significant research progress in recent years. Deep learning has achieved enormous success in this direction. However, collecting large-scale temporal detection datasets to ensuring promising performance in the real-world is a laborious, impractical and time consuming process. Accordingly, we present a novel improved temporal action localization model that is better able to take advantage of limited labeled data available. Specifically, we design two auxiliary tasks by reconstructing the available label information and then facilitate the learning of the temporal action detection model. Each task generates their supervision signal by recycling the original annotations, and are jointly trained with the temporal action detection model in a multi-task learning fashion. Note that the proposed approach can be pluggable to any region proposal based temporal action detection models. We conduct extensive experiments on three benchmark datasets, namely THUMOS'14, Charades and ActivityNet. Our experimental results confirm the effectiveness of the proposed model. | http://openaccess.thecvf.com//content/CVPR2021/html/Li_Three_Birds_with_One_Stone_Multi-Task_Temporal_Action_Detection_via_CVPR_2021_paper.html | http://openaccess.thecvf.com//content/CVPR2021/papers/Li_Three_Birds_with_One_Stone_Multi-Task_Temporal_Action_Detection_via_CVPR_2021_paper.pdf | CVPR 2021 1 | [
"Zhihui Li",
"Lina Yao"
] | [
"Action Detection",
"Action Localization",
"Multi-Task Learning",
"Region Proposal",
"Temporal Action Localization"
] | 1,624,060,800,000 | [] | 101,009 |
98,363 | https://paperswithcode.com/paper/multi-interest-network-with-dynamic-routing | 1904.08030 | Multi-Interest Network with Dynamic Routing for Recommendation at Tmall | Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items. The matching stage retrieves candidate items relevant to user interests, while the ranking stage sorts candidate items by user interests. Thus, the most critical ability is to model and represent user interests for either stage. Most of the existing deep learning-based models represent one user as a single vector which is insufficient to capture the varying nature of user's interests. In this paper, we approach this problem from a different view, to represent one user with multiple vectors encoding the different aspects of the user's interests. We propose the Multi-Interest Network with Dynamic routing (MIND) for dealing with user's diverse interests in the matching stage. Specifically, we design a multi-interest extractor layer based on capsule routing mechanism, which is applicable for clustering historical behaviors and extracting diverse interests. Furthermore, we develop a technique named label-aware attention to help learn a user representation with multiple vectors. Through extensive experiments on several public benchmarks and one large-scale industrial dataset from Tmall, we demonstrate that MIND can achieve superior performance than state-of-the-art methods for recommendation. Currently, MIND has been deployed for handling major online traffic at the homepage on Mobile Tmall App. | http://arxiv.org/abs/1904.08030v1 | http://arxiv.org/pdf/1904.08030v1.pdf | null | [
"Chao Li",
"Zhiyuan Liu",
"Mengmeng Wu",
"Yuchi Xu",
"Pipei Huang",
"Huan Zhao",
"Guoliang Kang",
"Qiwei Chen",
"Wei Li",
"Dik Lun Lee"
] | [
"Recommendation Systems"
] | 1,555,459,200,000 | [] | 128,639 |
118,540 | https://paperswithcode.com/paper/kornia-an-open-source-differentiable-computer | 1910.02190 | Kornia: an Open Source Differentiable Computer Vision Library for PyTorch | This work presents Kornia -- an open source computer vision library which consists of a set of differentiable routines and modules to solve generic computer vision problems. The package uses PyTorch as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions. Inspired by OpenCV, Kornia is composed of a set of modules containing operators that can be inserted inside neural networks to train models to perform image transformations, camera calibration, epipolar geometry, and low level image processing techniques, such as filtering and edge detection that operate directly on high dimensional tensor representations. Examples of classical vision problems implemented using our framework are provided including a benchmark comparing to existing vision libraries. | https://arxiv.org/abs/1910.02190v2 | https://arxiv.org/pdf/1910.02190v2.pdf | null | [
"Edgar Riba",
"Dmytro Mishkin",
"Daniel Ponsa",
"Ethan Rublee",
"Gary Bradski"
] | [
"Camera Calibration",
"Data Augmentation",
"Edge Detection",
"Image Augmentation",
"Image Cropping",
"Image Denoising",
"Image Manipulation",
"Image Matching",
"Image Morphing",
"Image Registration",
"Image Retrieval",
"image smoothing",
"Image Stitching"
] | 1,570,233,600,000 | [] | 84,954 |
101,710 | https://paperswithcode.com/paper/image-inpainting-by-patch-propagation-using | null | Image Inpainting by Patch Propagation Using Patch Sparsity | This paper introduces a novel examplar-based inpainting algorithm through investigating the sparsity of natural image patches. Two novel concepts of sparsity at the patch level are proposed for modeling the patch priority and patch representation, which are two crucial steps for patch propagation in the examplar-based inpainting approach. First, patch structure sparsity is designed to measure the confidence of a patch located at the image structure (e.g., the edge or corner) by the sparseness of its nonzero similarities to the neighboring patches. The patch with larger structure sparsity will be assigned higher priority for further inpainting. Second, it is assumed that the patch to be filled can be represented by the sparse linear combination of candidate patches under the local patch consistency constraint in a framework of sparse representation. Compared with the traditional examplar-based inpainting approach, structure sparsity enables better discrimination of structure and texture, and the patch sparse representation forces the newly inpainted regions to be sharp and consistent with the surrounding textures. Experiments on synthetic and natural images show the advantages of the proposed approach. | https://ieeexplore.ieee.org/document/5404308 | https://ieeexplore.ieee.org/document/5404308 | journal 2010 5 | [
"Zongben Xu",
"Jian Sun"
] | [
"Image Inpainting",
"Novel Concepts"
] | 1,272,672,000,000 | [] | 120,170 |
41,495 | https://paperswithcode.com/paper/fitnets-hints-for-thin-deep-nets | 1412.6550 | FitNets: Hints for Thin Deep Nets | While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network. | https://arxiv.org/abs/1412.6550v4 | https://arxiv.org/pdf/1412.6550v4.pdf | null | [
"Adriana Romero",
"Nicolas Ballas",
"Samira Ebrahimi Kahou",
"Antoine Chassang",
"Carlo Gatta",
"Yoshua Bengio"
] | [
"Knowledge Distillation"
] | 1,418,947,200,000 | [] | 28,281 |
167,022 | https://paperswithcode.com/paper/a-flexible-framework-for-discovering-novel | null | A Flexible Framework for Discovering Novel Categories with Contrastive Learning | This paper studies the problem of novel category discovery on single- and multi-modal data with labels from different but relevant categories. We present a generic, end-to-end framework to jointly learn a reliable representation and assign clusters to unlabelled data. To avoid over-fitting the learnt embedding to labelled data, we take inspiration from self-supervised representation learning by noise-contrastive estimation and extend it to jointly handle labelled and unlabelled data. In particular, we proposed using category discrimination on labelled data and cross-modal discrimination on multi-modal data to augment instance discrimination used in conventional contrastive learning approaches. We further introduce Winner-Take-All (WTA) hashing algorithm on the shared representation space to generate pairwise pseudo labels for unlabelled data to better predict cluster assignments. We thoroughly evaluate our framework on large-scale multi-modal video benchmarks Kinetics-400 and VGG-Sound, and image benchmarks CIFAR10, CIFAR100 and ImageNet, obtaining state-of-the-art results. | https://openreview.net/forum?id=RpprvYz0xTM | https://openreview.net/pdf?id=RpprvYz0xTM | null | [
"Xuhui Jia",
"Kai Han",
"Yukun Zhu",
"Bradley Green"
] | [
"Contrastive Learning",
"Representation Learning"
] | 1,609,459,200,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] | 143,906 |
169,782 | https://paperswithcode.com/paper/asymptotic-randomised-control-with | 2010.07252 | Asymptotic Randomised Control with applications to bandits | We consider a general multi-armed bandit problem with correlated (and simple contextual and restless) elements, as a relaxed control problem. By introducing an entropy regularisation, we obtain a smooth asymptotic approximation to the value function. This yields a novel semi-index approximation of the optimal decision process. This semi-index can be interpreted as explicitly balancing an exploration-exploitation trade-off as in the optimistic (UCB) principle where the learning premium explicitly describes asymmetry of information available in the environment and non-linearity in the reward function. Performance of the resulting Asymptotic Randomised Control (ARC) algorithm compares favourably well with other approaches to correlated multi-armed bandits. | https://arxiv.org/abs/2010.07252v2 | https://arxiv.org/pdf/2010.07252v2.pdf | null | [
"Samuel N. Cohen",
"Tanut Treetanthiploet"
] | [
"Multi-Armed Bandits"
] | 1,602,633,600,000 | [] | 61,452 |
283,541 | https://paperswithcode.com/paper/provable-adversarial-robustness-for | 2203.08945 | Provable Adversarial Robustness for Fractional Lp Threat Models | In recent years, researchers have extensively studied adversarial robustness in a variety of threat models, including L_0, L_1, L_2, and L_infinity-norm bounded adversarial attacks. However, attacks bounded by fractional L_p "norms" (quasi-norms defined by the L_p distance with 0<p<1) have yet to be thoroughly considered. We proactively propose a defense with several desirable properties: it provides provable (certified) robustness, scales to ImageNet, and yields deterministic (rather than high-probability) certified guarantees when applied to quantized data (e.g., images). Our technique for fractional L_p robustness constructs expressive, deep classifiers that are globally Lipschitz with respect to the L_p^p metric, for any 0<p<1. However, our method is even more general: we can construct classifiers which are globally Lipschitz with respect to any metric defined as the sum of concave functions of components. Our approach builds on a recent work, Levine and Feizi (2021), which provides a provable defense against L_1 attacks. However, we demonstrate that our proposed guarantees are highly non-vacuous, compared to the trivial solution of using (Levine and Feizi, 2021) directly and applying norm inequalities. Code is available at https://github.com/alevine0/fractionalLpRobustness. | https://arxiv.org/abs/2203.08945v1 | https://arxiv.org/pdf/2203.08945v1.pdf | null | [
"Alexander Levine",
"Soheil Feizi"
] | [
"Adversarial Robustness"
] | 1,647,388,800,000 | [] | 93,083 |
10,722 | https://paperswithcode.com/paper/sketched-subspace-clustering | 1707.07196 | Sketched Subspace Clustering | The immense amount of daily generated and communicated data presents unique challenges in their processing. Clustering, the grouping of data without the presence of ground-truth labels, is an important tool for drawing inferences from data. Subspace clustering (SC) is a relatively recent method that is able to successfully classify nonlinearly separable data in a multitude of settings. In spite of their high clustering accuracy, SC methods incur prohibitively high computational complexity when processing large volumes of high-dimensional data. Inspired by random sketching approaches for dimensionality reduction, the present paper introduces a randomized scheme for SC, termed Sketch-SC, tailored for large volumes of high-dimensional data. Sketch-SC accelerates the computationally heavy parts of state-of-the-art SC approaches by compressing the data matrix across both dimensions using random projections, thus enabling fast and accurate large-scale SC. Performance analysis as well as extensive numerical tests on real data corroborate the potential of Sketch-SC and its competitive performance relative to state-of-the-art scalable SC approaches. | http://arxiv.org/abs/1707.07196v2 | http://arxiv.org/pdf/1707.07196v2.pdf | null | [
"Panagiotis A. Traganitis",
"Georgios B. Giannakis"
] | [
"Dimensionality Reduction"
] | 1,500,681,600,000 | [] | 188,398 |
261,166 | https://paperswithcode.com/paper/modeling-dynamics-of-biological-systems-with-1 | null | Modeling Dynamics of Biological Systems with Deep Generative Neural Networks | Biological data often contains measurements of dynamic entities such as cells or organisms in various states of progression. However, biological systems are notoriously difficult to describe analytically due to their many interacting components, and in many cases, the technical challenge of taking longitudinal measurements. This leads to difficulties in studying the features of the dynamics, for examples the drivers of the transition. To address this problem, we present a deep neural network framework we call Dynamics Modeling Network or DyMoN. DyMoN is a neural network framework trained as a deep generative Markov model whose next state is a probability distribution based on the current state. DyMoN is well-suited to the idiosyncrasies of biological data, including noise, sparsity, and the lack of longitudinal measurements in many types of systems. Thus, DyMoN can be trained using probability distributions derived from the data in any way, such as trajectories derived via dimensionality reduction methods, and does not require longitudinal measurements. We show the advantage of learning deep models over shallow models such as Kalman filters and hidden Markov models that do not learn representations of the data, both in terms of learning embeddings of the data and also in terms training efficiency, accuracy and ability to multitask. We perform three case studies of applying DyMoN to different types of biological systems and extracting features of the dynamics in each case by examining the learned model. | https://openreview.net/forum?id=S1gARiAcFm | https://openreview.net/pdf?id=S1gARiAcFm | null | [
"Scott Gigante",
"David van Dijk",
"Kevin R. Moon",
"Alexander Strzalkowski",
"Katie Ferguson",
"Guy Wolf",
"Smita Krishnaswamy"
] | [
"Dimensionality Reduction"
] | 1,538,006,400,000 | [] | 161,920 |
288,834 | https://paperswithcode.com/paper/joint-forecasting-of-panoptic-segmentations | 2204.07157 | Joint Forecasting of Panoptic Segmentations with Difference Attention | Forecasting of a representation is important for safe and effective autonomy. For this, panoptic segmentations have been studied as a compelling representation in recent work. However, recent state-of-the-art on panoptic segmentation forecasting suffers from two issues: first, individual object instances are treated independently of each other; second, individual object instance forecasts are merged in a heuristic manner. To address both issues, we study a new panoptic segmentation forecasting model that jointly forecasts all object instances in a scene using a transformer model based on 'difference attention.' It further refines the predictions by taking depth estimates into account. We evaluate the proposed model on the Cityscapes and AIODrive datasets. We find difference attention to be particularly suitable for forecasting because the difference of quantities like locations enables a model to explicitly reason about velocities and acceleration. Because of this, we attain state-of-the-art on panoptic segmentation forecasting metrics. | https://arxiv.org/abs/2204.07157v1 | https://arxiv.org/pdf/2204.07157v1.pdf | CVPR 2022 1 | [
"Colin Graber",
"Cyril Jazra",
"Wenjie Luo",
"LiangYan Gui",
"Alexander Schwing"
] | [
"Panoptic Segmentation"
] | 1,649,894,400,000 | [] | 34,036 |
150,446 | https://paperswithcode.com/paper/generative-semantic-hashing-enhanced-via | 2006.08858 | Generative Semantic Hashing Enhanced via Boltzmann Machines | Generative semantic hashing is a promising technique for large-scale information retrieval thanks to its fast retrieval speed and small memory footprint. For the tractability of training, existing generative-hashing methods mostly assume a factorized form for the posterior distribution, enforcing independence among the bits of hash codes. From the perspectives of both model representation and code space size, independence is always not the best assumption. In this paper, to introduce correlations among the bits of hash codes, we propose to employ the distribution of Boltzmann machine as the variational posterior. To address the intractability issue of training, we first develop an approximate method to reparameterize the distribution of a Boltzmann machine by augmenting it as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution. Based on that, an asymptotically-exact lower bound is further derived for the evidence lower bound (ELBO). With these novel techniques, the entire model can be optimized efficiently. Extensive experimental results demonstrate that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains. | https://arxiv.org/abs/2006.08858v1 | https://arxiv.org/pdf/2006.08858v1.pdf | ACL 2020 6 | [
"Lin Zheng",
"Qinliang Su",
"Dinghan Shen",
"Changyou Chen"
] | [
"Information Retrieval"
] | 1,592,265,600,000 | [] | 178,612 |
140,529 | https://paperswithcode.com/paper/learning-from-aggregate-observations | 2004.06316 | Learning from Aggregate Observations | We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals. A well-known example is multiple instance learning (MIL). In this paper, we extend MIL beyond binary classification to other problems such as multiclass classification and regression. We present a general probabilistic framework that accommodates a variety of aggregate observations, e.g., pairwise similarity/triplet comparison for classification and mean/difference/rank observation for regression. Simple maximum likelihood solutions can be applied to various differentiable models such as deep neural networks and gradient boosting machines. Moreover, we develop the concept of consistency up to an equivalence relation to characterize our estimator and show that it has nice convergence properties under mild assumptions. Experiments on three problem settings -- classification via triplet comparison and regression via mean/rank observation indicate the effectiveness of the proposed method. | https://arxiv.org/abs/2004.06316v3 | https://arxiv.org/pdf/2004.06316v3.pdf | NeurIPS 2020 12 | [
"Yivan Zhang",
"Nontawat Charoenphakdee",
"Zhenguo Wu",
"Masashi Sugiyama"
] | [
"Classification",
"Classification",
"Multiple Instance Learning"
] | 1,586,822,400,000 | [] | 127,679 |
61,105 | https://paperswithcode.com/paper/making-classification-competitive-for-deep | 1811.12649 | Classification is a Strong Baseline for Deep Metric Learning | Deep metric learning aims to learn a function mapping image pixels to embedding feature vectors that model the similarity between images. Two major applications of metric learning are content-based image retrieval and face verification. For the retrieval tasks, the majority of current state-of-the-art (SOTA) approaches are triplet-based non-parametric training. For the face verification tasks, however, recent SOTA approaches have adopted classification-based parametric training. In this paper, we look into the effectiveness of classification based approaches on image retrieval datasets. We evaluate on several standard retrieval datasets such as CAR-196, CUB-200-2011, Stanford Online Product, and In-Shop datasets for image retrieval and clustering, and establish that our classification-based approach is competitive across different feature dimensions and base feature networks. We further provide insights into the performance effects of subsampling classes for scalable classification-based training, and the effects of binarization, enabling efficient storage and computation for practical applications. | https://arxiv.org/abs/1811.12649v2 | https://arxiv.org/pdf/1811.12649v2.pdf | null | [
"Andrew Zhai",
"Hao-Yu Wu"
] | [
"Binarization",
"Classification",
"Content-Based Image Retrieval",
"Face Verification",
"Classification",
"Image Retrieval",
"Metric Learning"
] | 1,543,536,000,000 | [
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://www.healthnutra.org/es/maxup/",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/resnet.py#L124",
"description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) ontop of each other to form network: e.g. a ResNet-50 has fifty layers using these blocks. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}(x):=\\mathcal{H}(x)-x$. The original mapping is recast into $\\mathcal{F}(x)+x$.\r\n\r\nThere is empirical evidence that these types of network are easier to optimize, and can gain accuracy from considerably increased depth.",
"full_name": "Residual Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ResNet",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] | 108,640 |
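The record above tags the building blocks of a ResNet (residual and bottleneck blocks, batch normalization, ReLU, 1x1 convolutions). As an illustration of how those pieces fit together, here is a minimal, hedged PyTorch sketch of a bottleneck residual block; the class name, channel sizes, and usage are assumptions made for this example, not the torchvision implementation linked in the entries.

```python
# Minimal sketch of a bottleneck residual block as described above
# (1x1 reduce -> 3x3 -> 1x1 expand, with batch norm, ReLU and an identity
# shortcut). Layer sizes are illustrative only.
import torch
import torch.nn as nn


class BottleneckBlock(nn.Module):
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # F(x) + x: the stacked layers fit the residual, the shortcut carries x.
        return self.relu(self.body(x) + x)


if __name__ == "__main__":
    block = BottleneckBlock(channels=256, bottleneck=64)
    out = block(torch.randn(1, 256, 56, 56))
    print(out.shape)  # torch.Size([1, 256, 56, 56])
```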
229,396 | https://paperswithcode.com/paper/meta-learning-for-relative-density-ratio | 2107.00801 | Meta-Learning for Relative Density-Ratio Estimation | The ratio of two probability densities, called a density-ratio, is a vital quantity in machine learning. In particular, a relative density-ratio, which is a bounded extension of the density-ratio, has received much attention due to its stability and has been used in various applications such as outlier detection and dataset comparison. Existing methods for (relative) density-ratio estimation (DRE) require many instances from both densities. However, sufficient instances are often unavailable in practice. In this paper, we propose a meta-learning method for relative DRE, which estimates the relative density-ratio from a few instances by using knowledge in related datasets. Specifically, given two datasets that consist of a few instances, our model extracts the datasets' information by using neural networks and uses it to obtain instance embeddings appropriate for the relative DRE. We model the relative density-ratio by a linear model on the embedded space, whose global optimum solution can be obtained as a closed-form solution. The closed-form solution enables fast and effective adaptation to a few instances, and its differentiability enables us to train our model such that the expected test error for relative DRE can be explicitly minimized after adapting to a few instances. We empirically demonstrate the effectiveness of the proposed method by using three problems: relative DRE, dataset comparison, and outlier detection. | https://arxiv.org/abs/2107.00801v1 | https://arxiv.org/pdf/2107.00801v1.pdf | NeurIPS 2021 12 | [
"Atsutoshi Kumagai",
"Tomoharu Iwata",
"Yasuhiro Fujiwara"
] | [
"Density Ratio Estimation",
"Meta-Learning",
"Outlier Detection"
] | 1,625,184,000,000 | [] | 102,495 |
187,850 | https://paperswithcode.com/paper/optimal-controller-and-quantizer-selection | 1909.13609 | Optimal Controller and Quantizer Selection for Partially Observable Linear-Quadratic-Gaussian Systems | In networked control systems, often the sensory signals are quantized before being transmitted to the controller. Consequently, performance is affected by the coarseness of this quantization process. Modern communication technologies allow users to obtain resolution-varying quantized measurements based on the prices paid. In this paper, we consider joint optimal controller synthesis and quantizer scheduling for a partially observed Quantized-Feedback Linear-Quadratic-Gaussian (QF-LQG) system, where the measurements are quantized before being sent to the controller. The system is presented with several choices of quantizers, along with the cost of using each quantizer. The objective is to jointly select the quantizers and synthesize the controller to strike an optimal balance between control performance and quantization cost. When the innovation signal is quantized instead of the measurement, the problem is decoupled into two optimization problems: one for optimal controller synthesis, and the other for optimal quantizer selection. The optimal controller is found by solving a Riccati equation and the optimal quantizer selection policy is found by solving a linear program (LP)- both of which can be solved offline. | https://arxiv.org/abs/1909.13609v2 | https://arxiv.org/pdf/1909.13609v2.pdf | null | [
"Dipankar Maity",
"Panagiotis Tsiotras"
] | [
"Quantization"
] | 1,569,801,600,000 | [] | 48,090 |
31,521 | https://paperswithcode.com/paper/a-thorough-examination-of-the-cnndaily-mail | 1606.02858 | A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task | Enabling a computer to understand a document so that it can answer
comprehension questions is a central, yet unsolved goal of NLP. A key factor
impeding its solution by machine learned systems is the limited availability of
human-annotated data. Hermann et al. (2015) seek to solve this problem by
creating over a million training examples by pairing CNN and Daily Mail news
articles with their summarized bullet points, and show that a neural network
can then be trained to give good performance on this task. In this paper, we
conduct a thorough examination of this new reading comprehension task. Our
primary aim is to understand what depth of language understanding is required
to do well on this task. We approach this from one side by doing a careful
hand-analysis of a small subset of the problems and from the other by showing
that simple, carefully designed systems can obtain accuracies of 73.6% and
76.6% on these two datasets, exceeding current state-of-the-art results by
7-10% and approaching what we believe is the ceiling for performance on this
task. | http://arxiv.org/abs/1606.02858v2 | http://arxiv.org/pdf/1606.02858v2.pdf | ACL 2016 8 | [
"Danqi Chen",
"Jason Bolton",
"Christopher D. Manning"
] | [
"Reading Comprehension"
] | 1,465,430,400,000 | [] | 1,394 |
143,045 | https://paperswithcode.com/paper/multiqt-multimodal-learning-for-real-time | 2005.00812 | MultiQT: Multimodal Learning for Real-Time Question Tracking in Speech | We address a challenging and practical task of labeling questions in speech in real time during telephone calls to emergency medical services in English, which embeds within a broader decision support system for emergency call-takers. We propose a novel multimodal approach to real-time sequence labeling in speech. Our model treats speech and its own textual representation as two separate modalities or views, as it jointly learns from streamed audio and its noisy transcription into text via automatic speech recognition. Our results show significant gains of jointly learning from the two modalities when compared to text or audio only, under adverse noise and limited volume of training data. The results generalize to medical symptoms detection where we observe a similar pattern of improvements with multimodal learning. | https://arxiv.org/abs/2005.00812v2 | https://arxiv.org/pdf/2005.00812v2.pdf | ACL 2020 6 | [
"Jakob D. Havtorn",
"Jan Latko",
"Joakim Edin",
"Lasse Borgholt",
"Lars Maaløe",
"Lorenzo Belgrano",
"Nicolai F. Jacobsen",
"Regitze Sdun",
"Željko Agić"
] | [
"Automatic Speech Recognition",
"Speech Recognition",
"Speech Recognition"
] | 1,588,377,600,000 | [] | 169,094 |
67,432 | https://paperswithcode.com/paper/comparison-of-smt-and-nmt-trained-with-large | null | Comparison of SMT and NMT trained with large Patent Corpora: Japio at WAT2017 | Japio participates in patent subtasks (JPC-EJ/JE/CJ/KJ) with phrase-based statistical machine translation (SMT) and neural machine translation (NMT) systems which are trained with its own patent corpora in addition to the subtask corpora provided by organizers of WAT2017. In EJ and CJ subtasks, SMT and NMT systems whose sizes of training corpora are about 50 million and 10 million sentence pairs respectively achieved comparable scores for automatic evaluations, but NMT systems were superior to SMT systems for both official and in-house human evaluations. | https://aclanthology.org/W17-5713 | https://aclanthology.org/W17-5713.pdf | WS 2017 11 | [
"Satoshi Kinoshita",
"Tadaaki Oshio",
"Tomoharu Mitsuhashi"
] | [
"Information Retrieval",
"Machine Translation"
] | 1,509,494,400,000 | [] | 70,693 |
296,520 | https://paperswithcode.com/paper/bolt-fused-window-transformers-for-fmri-time | 2205.11578 | BolT: Fused Window Transformers for fMRI Time Series Analysis | Deep-learning models have enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) data. Yet, many previous methods are suboptimally sensitive for contextual representations across diverse time scales. Here, we present BolT, a blood-oxygen-level-dependent transformer model, for analyzing multi-variate fMRI time series. BolT leverages a cascade of transformer encoders equipped with a novel fused window attention mechanism. Encoding is performed on temporally-overlapped windows within the time series to capture local representations. To integrate information temporally, cross-window attention is computed between base tokens in each window and fringe tokens from neighboring windows. To gradually transition from local to global representations, the extent of window overlap and thereby number of fringe tokens are progressively increased across the cascade. Finally, a novel cross-window regularization is employed to align high-level classification features across the time series. Comprehensive experiments on large-scale public datasets demonstrate the superior performance of BolT against state-of-the-art methods. Furthermore, explanatory analyses to identify landmark time points and regions that contribute most significantly to model decisions corroborate prominent neuroscientific findings in the literature. | https://arxiv.org/abs/2205.11578v2 | https://arxiv.org/pdf/2205.11578v2.pdf | null | [
"Hasan Atakan Bedel",
"Irmak Şıvgın",
"Onat Dalmaz",
"Salman Ul Hassan Dar",
"Tolga Çukur"
] | [
"Time Series",
"Time Series Analysis"
] | 1,653,264,000,000 | [
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
},
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] | 87,982 |
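The methods tagged for the record above include the core Transformer components (scaled dot-product attention, multi-head attention, layer normalization). As a hedged illustration of the attention formula $\text{softmax}(QK^{T}/\sqrt{d_k})V$ from those descriptions, a minimal PyTorch sketch follows; the tensor shapes and function name are assumptions and this is not the BolT authors' code.

```python
# Minimal sketch of scaled dot-product attention as described in the method
# entries above: softmax(Q K^T / sqrt(d_k)) V. Shapes are illustrative.
import math
import torch


def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, heads, seq, seq)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v


if __name__ == "__main__":
    q = k = v = torch.randn(2, 8, 16, 64)
    out = scaled_dot_product_attention(q, k, v)
    print(out.shape)  # torch.Size([2, 8, 16, 64])
```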
140,488 | https://paperswithcode.com/paper/unsupervised-performance-analysis-of-3d-face | 2004.06550 | Unsupervised Performance Analysis of 3D Face Alignment | We address the problem of analyzing the performance of 3D face alignment (3DFA) algorithms. Traditionally, performance analysis relies on carefully annotated datasets. Here, these annotations correspond to the 3D coordinates of a set of pre-defined facial landmarks. However, this annotation process, be it manual or automatic, is rarely error-free, which strongly biases the analysis. In contrast, we propose a fully unsupervised methodology based on robust statistics and a parametric confidence test. We revisit the problem of robust estimation of the rigid transformation between two point sets and we describe two algorithms, one based on a mixture between a Gaussian and a uniform distribution, and another one based on the generalized Student's t-distribution. We show that these methods are robust to up to 50% outliers, which makes them suitable for mapping a face, from an unknown pose to a frontal pose, in the presence of facial expressions and occlusions. Using these methods in conjunction with large datasets of face images, we build a statistical frontal facial model and an associated parametric confidence metric, eventually used for performance analysis. We empirically show that the proposed pipeline is neither method-biased nor data-biased, and that it can be used to assess both the performance of 3DFA algorithms and the accuracy of annotations of face datasets. | https://arxiv.org/abs/2004.06550v5 | https://arxiv.org/pdf/2004.06550v5.pdf | null | [
"Mostafa Sadeghi",
"Sylvain Guy",
"Adrien Raison",
"Xavier Alameda-Pineda",
"Radu Horaud"
] | [
"3D Face Alignment",
"Face Alignment"
] | 1,586,822,400,000 | [] | 56,171 |
55,564 | https://paperswithcode.com/paper/nonconvex-demixing-from-bilinear-measurements | 1809.06796 | Nonconvex Demixing From Bilinear Measurements | We consider the problem of demixing a sequence of source signals from the sum
of noisy bilinear measurements. It is a generalized mathematical model for
blind demixing with blind deconvolution, which is prevalent across the areas of
dictionary learning, image processing, and communications. However, state-of-
the-art convex methods for blind demixing via semidefinite programming are
computationally infeasible for large-scale problems. Although the existing
nonconvex algorithms are able to address the scaling issue, they normally
require proper regularization to establish optimality guarantees. The
additional regularization yields tedious algorithmic parameters and pessimistic
convergence rates with conservative step sizes. To address the limitations of
existing methods, we thus develop a provable nonconvex demixing procedure
via Wirtinger flow, much like vanilla gradient descent, to harness the benefits
of regularization-free fast convergence rate with aggressive step size and
computational optimality guarantees. This is achieved by exploiting the benign
geometry of the blind demixing problem, thereby revealing that Wirtinger flow
enforces the regularization-free iterates in the region of strong convexity and
qualified level of smoothness, where the step size can be chosen aggressively. | http://arxiv.org/abs/1809.06796v2 | http://arxiv.org/pdf/1809.06796v2.pdf | null | [
"Jialin Dong",
"Yuanming Shi"
] | [
"Dictionary Learning"
] | 1,537,228,800,000 | [] | 33,121 |
62,873 | https://paperswithcode.com/paper/marginal-likelihood-training-of-bilstm-crf | null | Marginal Likelihood Training of BiLSTM-CRF for Biomedical Named Entity Recognition from Disjoint Label Sets | Extracting typed entity mentions from text is a fundamental component to language understanding and reasoning. While there exist substantial labeled text datasets for multiple subsets of biomedical entity types (such as genes and proteins, or chemicals and diseases), it is rare to find large labeled datasets containing labels for all desired entity types together. This paper presents a method for training a single CRF extractor from multiple datasets with disjoint or partially overlapping sets of entity types. Our approach employs marginal likelihood training to insist on labels that are present in the data, while filling in "missing labels". This allows us to leverage all the available data within a single model. In experimental results on the Biocreative V CDR (chemicals/diseases), Biocreative VI ChemProt (chemicals/proteins) and MedMentions (19 entity types) datasets, we show that joint training on multiple datasets improves NER F1 over training in isolation, and our methods achieve state-of-the-art results. | https://aclanthology.org/D18-1306 | https://aclanthology.org/D18-1306.pdf | EMNLP 2018 10 | [
"Nathan Greenberg",
"Trapit Bansal",
"Patrick Verga",
"Andrew McCallum"
] | [
"Named Entity Recognition",
"Named Entity Recognition",
"Named Entity Recognition",
"Question Answering"
] | 1,538,352,000,000 | [
{
"code_snippet_url": null,
"description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linear chain CRFs are popular in natural language processing, whereas in image-based tasks, the graph would connect to neighboring locations in an image to enforce that they have similar predictions.\r\n\r\nImage Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)",
"full_name": "Conditional Random Field",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Structured Prediction** methods deal with structured outputs with multiple interdependent outputs. Below you can find a continuously updating list of structured prediction methods.",
"name": "Structured Prediction",
"parent": null
},
"name": "CRF",
"source_title": null,
"source_url": null
}
] | 80,457 |
299,701 | https://paperswithcode.com/paper/tribyol-triplet-byol-for-self-supervised | 2206.03012 | TriBYOL: Triplet BYOL for Self-Supervised Representation Learning | This paper proposes a novel self-supervised learning method for learning better representations with small batch sizes. Many self-supervised learning methods based on certain forms of the siamese network have emerged and received significant attention. However, these methods need to use large batch sizes to learn good representations and require heavy computational resources. We present a new triplet network combined with a triple-view loss to improve the performance of self-supervised representation learning with small batch sizes. Experimental results show that our method can drastically outperform state-of-the-art self-supervised learning methods on several datasets in small-batch cases. Our method provides a feasible solution for self-supervised learning with real-world high-resolution images that uses small batch sizes. | https://arxiv.org/abs/2206.03012v1 | https://arxiv.org/pdf/2206.03012v1.pdf | null | [
"Guang Li",
"Ren Togo",
"Takahiro Ogawa",
"Miki Haseyama"
] | [
"Representation Learning",
"Self-Supervised Learning"
] | 1,654,560,000,000 | [
{
"code_snippet_url": null,
"description": "A **Siamese Network** consists of twin networks which accept distinct inputs but are joined by an energy function at the top. This function computes a metric between the highest level feature representation on each side. The parameters between the twin networks are tied. [Weight tying](https://paperswithcode.com/method/weight-tying) guarantees that two extremely similar images are not mapped by each network to very different locations in feature space because each network computes the same function. The network is symmetric, so that whenever we present two distinct images to the twin networks, the top conjoining layer will compute the same metric as if we were to we present the same two images but to the opposite twins.\r\n\r\nIntuitively instead of trying to classify inputs, a siamese network learns to differentiate between inputs, learning their similarity. The loss function used is usually a form of contrastive loss.\r\n\r\nSource: [Koch et al](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf)",
"full_name": "Siamese Network",
"introduced_year": 1993,
"main_collection": {
"area": "General",
"description": "**Twin Networks** are a type of neural network architecture where we use two of the same network architecture to perform a task. For example, Siamese Networks are used to learn representations that differentiate between inputs (learning their similarity). Below you can find a continuously updating list of twin network architectures.",
"name": "Twin Networks",
"parent": null
},
"name": "Siamese Network",
"source_title": null,
"source_url": null
}
] | 134,228 |
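The record above tags the siamese (twin) network method. The sketch below illustrates the core idea from that description, a single shared encoder applied to two inputs plus a distance-based contrastive loss; the layer sizes, margin, and names are illustrative assumptions, not the paper's TriBYOL model.

```python
# Minimal sketch of a siamese (twin) network: one shared encoder embeds two
# inputs (weight tying), and a contrastive loss compares the embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseEncoder(nn.Module):
    def __init__(self, in_dim: int = 784, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x1, x2):
        # The same network (same weights) embeds both inputs.
        return self.net(x1), self.net(x2)


def contrastive_loss(z1, z2, same: torch.Tensor, margin: float = 1.0):
    # same: 1.0 for matching pairs, 0.0 for non-matching pairs.
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()


if __name__ == "__main__":
    enc = SiameseEncoder()
    z1, z2 = enc(torch.randn(4, 784), torch.randn(4, 784))
    print(contrastive_loss(z1, z2, same=torch.ones(4)))
```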
245,965 | https://paperswithcode.com/paper/cross-speaker-emotion-transfer-based-on | 2110.04153 | Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech | In expressive speech synthesis, there are high requirements for emotion interpretation. However, it is time-consuming to acquire emotional audio corpus for arbitrary speakers due to their deduction ability. In response to this problem, this paper proposes a cross-speaker emotion transfer method that can realize the transfer of emotions from source speaker to target speaker. A set of emotion tokens is firstly defined to represent various categories of emotions. They are trained to be highly correlated with corresponding emotions for controllable synthesis by cross-entropy loss and semi-supervised training strategy. Meanwhile, to eliminate the down-gradation to the timbre similarity from cross-speaker emotion transfer, speaker condition layer normalization is implemented to model speaker characteristics. Experimental results show that the proposed method outperforms the multi-reference based baseline in terms of timbre similarity, stability and emotion perceive evaluations. | https://arxiv.org/abs/2110.04153v2 | https://arxiv.org/pdf/2110.04153v2.pdf | null | [
"Pengfei Wu",
"Junjie Pan",
"Chenchang Xu",
"Junhui Zhang",
"Lin Wu",
"Xiang Yin",
"Zejun Ma"
] | [
"Expressive Speech Synthesis",
"Speech Synthesis"
] | 1,633,651,200,000 | [
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
}
] | 139,353 |
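The record above lists layer normalization as its underlying method. As a hedged sketch of the formula in that description (per-sample statistics over the hidden units, followed by a learnable scale and shift), and not of the paper's speaker-conditioned variant:

```python
# Minimal sketch of layer normalization: normalize each sample over its hidden
# units, then apply learnable gain (gamma) and bias (beta).
import torch
import torch.nn as nn


class LayerNorm(nn.Module):
    def __init__(self, hidden_size: int, eps: float = 1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(hidden_size))
        self.beta = nn.Parameter(torch.zeros(hidden_size))
        self.eps = eps

    def forward(self, x):
        mu = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        return self.gamma * (x - mu) / torch.sqrt(var + self.eps) + self.beta


if __name__ == "__main__":
    ln = LayerNorm(64)
    print(ln(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```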
7,369 | https://paperswithcode.com/paper/learning-less-is-more-6d-camera-localization | 1711.10228 | Learning Less is More - 6D Camera Localization via 3D Surface Regression | Popular research areas like autonomous driving and augmented reality have
renewed the interest in image-based camera localization. In this work, we
address the task of predicting the 6D camera pose from a single RGB image in a
given 3D environment. With the advent of neural networks, previous works have
either learned the entire camera localization process, or multiple components
of a camera localization pipeline. Our key contribution is to demonstrate and
explain that learning a single component of this pipeline is sufficient. This
component is a fully convolutional neural network for densely regressing
so-called scene coordinates, defining the correspondence between the input
image and the 3D scene space. The neural network is prepended to a new
end-to-end trainable pipeline. Our system is efficient, highly accurate, robust
in training, and exhibits outstanding generalization capabilities. It exceeds
state-of-the-art consistently on indoor and outdoor datasets. Interestingly,
our approach surpasses existing techniques even without utilizing a 3D model of
the scene during training, since the network is able to discover 3D scene
geometry automatically, solely from single-view constraints. | http://arxiv.org/abs/1711.10228v2 | http://arxiv.org/pdf/1711.10228v2.pdf | CVPR 2018 6 | [
"Eric Brachmann",
"Carsten Rother"
] | [
"Camera Localization",
"Visual Localization"
] | 1,511,827,200,000 | [] | 194,062 |
140,841 | https://paperswithcode.com/paper/natural-language-processing-with-deep | 2004.08333 | Natural Language Processing with Deep Learning for Medical Adverse Event Detection from Free-Text Medical Narratives: A Case Study of Detecting Total Hip Replacement Dislocation | Accurate and timely detection of medical adverse events (AEs) from free-text medical narratives is challenging. Natural language processing (NLP) with deep learning has already shown great potential for analyzing free-text data, but its application for medical AE detection has been limited. In this study we proposed deep learning based NLP (DL-NLP) models for efficient and accurate hip dislocation AE detection following total hip replacement from standard (radiology notes) and non-standard (follow-up telephone notes) free-text medical narratives. We benchmarked these proposed models with a wide variety of traditional machine learning based NLP (ML-NLP) models, and also assessed the accuracy of International Classification of Diseases (ICD) and Current Procedural Terminology (CPT) codes in capturing these hip dislocation AEs in a multi-center orthopaedic registry. All DL-NLP models out-performed all of the ML-NLP models, with a convolutional neural network (CNN) model achieving the best overall performance (Kappa = 0.97 for radiology notes, and Kappa = 1.00 for follow-up telephone notes). On the other hand, the ICD/CPT codes of the patients who sustained a hip dislocation AE were only 75.24% accurate, showing the potential of the proposed model to be used in largescale orthopaedic registries for accurate and efficient hip dislocation AE detection to improve the quality of care and patient outcome. | https://arxiv.org/abs/2004.08333v2 | https://arxiv.org/pdf/2004.08333v2.pdf | null | [
"Alireza Borjali",
"Martin Magneli",
"David Shin",
"Henrik Malchau",
"Orhun K. Muratoglu",
"Kartik M. Varadarajan"
] | [
"Event Detection"
] | 1,587,081,600,000 | [
{
"code_snippet_url": "",
"description": "An **autoencoder** is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. \r\n\r\nExtracted from: [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder)\r\n\r\nImage source: [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)",
"full_name": "Autoencoders",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "AE",
"source_title": null,
"source_url": null
}
] | 159,220 |
308,778 | https://paperswithcode.com/paper/adaptive-graph-based-feature-normalization | 2207.11123 | Adaptive Graph-Based Feature Normalization for Facial Expression Recognition | Facial Expression Recognition (FER) suffers from data uncertainties caused by ambiguous facial images and annotators' subjectiveness, resulting in excursive semantic and feature covariate shifting problem. Existing works usually correct mislabeled data by estimating noise distribution, or guide network training with knowledge learned from clean data, neglecting the associative relations of expressions. In this work, we propose an Adaptive Graph-based Feature Normalization (AGFN) method to protect FER models from data uncertainties by normalizing feature distributions with the association of expressions. Specifically, we propose a Poisson graph generator to adaptively construct topological graphs for samples in each mini-batches via a sampling process, and correspondingly design a coordinate descent strategy to optimize proposed network. Our method outperforms state-of-the-art works with accuracies of 91.84% and 91.11% on the benchmark datasets FERPlus and RAF-DB, respectively, and when the percentage of mislabeled data increases (e.g., to 20%), our network surpasses existing works significantly by 3.38% and 4.52%. | https://arxiv.org/abs/2207.11123v1 | https://arxiv.org/pdf/2207.11123v1.pdf | null | [
"Yangtao Du",
"Qingqing Wang",
"Yujie Xiong"
] | [
"Facial Expression Recognition"
] | 1,658,448,000,000 | [] | 3,257 |
260,564 | https://paperswithcode.com/paper/network-in-graph-neural-network | 2111.11638 | Network In Graph Neural Network | Graph Neural Networks (GNNs) have shown success in learning from graph structured data containing node/edge feature information, with application to social networks, recommendation, fraud detection and knowledge graph reasoning. In this regard, various strategies have been proposed in the past to improve the expressiveness of GNNs. For example, one straightforward option is to simply increase the parameter size by either expanding the hidden dimension or increasing the number of GNN layers. However, wider hidden layers can easily lead to overfitting, and incrementally adding more GNN layers can potentially result in over-smoothing. In this paper, we present a model-agnostic methodology, namely Network In Graph Neural Network (NGNN), that allows arbitrary GNN models to increase their model capacity by making the model deeper. However, instead of adding or widening GNN layers, NGNN deepens a GNN model by inserting non-linear feedforward neural network layer(s) within each GNN layer. An analysis of NGNN as applied to a GraphSage base GNN on ogbn-products data demonstrates that it can keep the model stable against either node feature or graph structure perturbations. Furthermore, wide-ranging evaluation results on both node classification and link prediction tasks show that NGNN works reliably across diverse GNN architectures. For instance, it improves the test accuracy of GraphSage on the ogbn-products by 1.6% and improves the hits@100 score of SEAL on ogbl-ppa by 7.08% and the hits@20 score of GraphSage+Edge-Attr on ogbl-ppi by 6.22%. And at the time of this submission, it achieved two first places on the OGB link prediction leaderboard. | https://arxiv.org/abs/2111.11638v1 | https://arxiv.org/pdf/2111.11638v1.pdf | null | [
"Xiang Song",
"Runjie Ma",
"Jiahang Li",
"Muhan Zhang",
"David Paul Wipf"
] | [
"Fraud Detection",
"Link Prediction",
"Node Classification"
] | 1,637,625,600,000 | [
{
"code_snippet_url": "",
"description": "GraphSAGE is a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data.\r\n\r\nImage from: [Inductive Representation Learning on Large Graphs](https://arxiv.org/pdf/1706.02216v4.pdf)",
"full_name": "GraphSAGE",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "GraphSAGE",
"source_title": "Inductive Representation Learning on Large Graphs",
"source_url": "http://arxiv.org/abs/1706.02216v4"
}
] | 137,499 |
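The GraphSAGE entry above describes inductive embedding generation from node features but gives no pseudocode. The following is a minimal NumPy sketch of the commonly used mean-aggregator variant, not the NGNN method of the paper itself; the toy graph, the feature dimensions, and the split into separate self/neighbor weight matrices (equivalent to concatenating and applying a single weight matrix) are illustrative assumptions.

```python
import numpy as np

def graphsage_mean_layer(features, neighbors, W_self, W_neigh):
    """One GraphSAGE-style layer with a mean aggregator.

    features  : (num_nodes, d_in) node feature matrix
    neighbors : list of neighbor-index lists, one per node
    W_self    : (d_in, d_out) weight applied to the node's own features
    W_neigh   : (d_in, d_out) weight applied to the aggregated neighbor features
    """
    out = np.zeros((features.shape[0], W_self.shape[1]))
    for v, nbrs in enumerate(neighbors):
        # Mean-aggregate neighbor features (zeros for isolated nodes).
        agg = features[nbrs].mean(axis=0) if nbrs else np.zeros(features.shape[1])
        h = features[v] @ W_self + agg @ W_neigh
        out[v] = np.maximum(h, 0.0)  # ReLU non-linearity
    # L2-normalize each embedding, as in the original formulation.
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-8)

# Toy 4-node graph with 3-dimensional input features and 2-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
adj = [[1, 2], [0], [0, 3], [2]]
emb = graphsage_mean_layer(X, adj, rng.normal(size=(3, 2)), rng.normal(size=(3, 2)))
print(emb.shape)  # (4, 2)
```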
210,895 | https://paperswithcode.com/paper/multiview-pseudo-labeling-for-semi-supervised | 2104.00682 | Multiview Pseudo-Labeling for Semi-supervised Learning from Video | We present a multiview pseudo-labeling approach to video learning, a novel framework that uses complementary views in the form of appearance and motion information for semi-supervised learning in video. The complementary views help obtain more reliable pseudo-labels on unlabeled video, to learn stronger video representations than from purely supervised data. Though our method capitalizes on multiple views, it nonetheless trains a model that is shared across appearance and motion input and thus, by design, incurs no additional computation overhead at inference time. On multiple video recognition datasets, our method substantially outperforms its supervised counterpart, and compares favorably to previous work on standard benchmarks in self-supervised video representation learning. | https://arxiv.org/abs/2104.00682v1 | https://arxiv.org/pdf/2104.00682v1.pdf | ICCV 2021 10 | [
"Bo Xiong",
"Haoqi Fan",
"Kristen Grauman",
"Christoph Feichtenhofer"
] | [
"Representation Learning",
"Video Recognition"
] | 1,617,235,200,000 | [] | 32,628 |
148,383 | https://paperswithcode.com/paper/density-based-clustering-for-3d-object | null | Density-Based Clustering for 3D Object Detection in Point Clouds | Current 3D detection networks either rely on 2D object proposals or try to directly predict bounding box parameters from each point in a scene. While former methods are dependent on performance of 2D detectors, latter approaches are challenging due to the sparsity and occlusion in point clouds, making it difficult to regress accurate parameters. In this work, we introduce a novel approach for 3D object detection that is significant in two main aspects: a) cascaded modular approach that focuses the receptive field of each module on specific points in the point cloud, for improved feature learning and b) a class agnostic instance segmentation module that is initiated using unsupervised clustering. The objective of a cascaded approach is to sequentially minimize the number of points running through the network. While three different modules perform the tasks of background-foreground segmentation, class agnostic instance segmentation and object detection, through individually trained point based networks. We also evaluate bayesian uncertainty in modules, demonstrating the over all level of confidence in our prediction results. Performance of the network is evaluated on the SUN RGB-D benchmark dataset, that demonstrates an improvement as compared to state-of-the-art methods.
| http://openaccess.thecvf.com/content_CVPR_2020/html/Ahmed_Density-Based_Clustering_for_3D_Object_Detection_in_Point_Clouds_CVPR_2020_paper.html | http://openaccess.thecvf.com/content_CVPR_2020/papers/Ahmed_Density-Based_Clustering_for_3D_Object_Detection_in_Point_Clouds_CVPR_2020_paper.pdf | CVPR 2020 6 | [
"Syeda Mariam Ahmed",
" Chee Meng Chew"
] | [
"3D Object Detection",
"Instance Segmentation",
"Object Detection",
"Object Detection",
"Semantic Segmentation"
] | 1,590,969,600,000 | [] | 138,924 |
222,621 | https://paperswithcode.com/paper/robust-learning-via-persistency-of-excitation | 2106.02078 | Improving Neural Network Robustness via Persistency of Excitation | Improving adversarial robustness of neural networks remains a major challenge. Fundamentally, training a neural network via gradient descent is a parameter estimation problem. In adaptive control, maintaining persistency of excitation (PoE) is integral to ensuring convergence of parameter estimates in dynamical systems to their true values. We show that parameter estimation with gradient descent can be modeled as a sampling of an adaptive linear time-varying continuous system. Leveraging this model, and with inspiration from Model-Reference Adaptive Control (MRAC), we prove a sufficient condition to constrain gradient descent updates to reference persistently excited trajectories converging to the true parameters. The sufficient condition is achieved when the learning rate is less than the inverse of the Lipschitz constant of the gradient of loss function. We provide an efficient technique for estimating the corresponding Lipschitz constant in practice using extreme value theory. Our experimental results in both standard and adversarial training illustrate that networks trained with the PoE-motivated learning rate schedule have similar clean accuracy but are significantly more robust to adversarial attacks than models trained using current state-of-the-art heuristics. | https://arxiv.org/abs/2106.02078v5 | https://arxiv.org/pdf/2106.02078v5.pdf | null | [
"Kaustubh Sridhar",
"Oleg Sokolsky",
"Insup Lee",
"James Weimer"
] | [
"Adversarial Robustness"
] | 1,622,678,400,000 | [] | 174,125 |
168,721 | https://paperswithcode.com/paper/learning-invariant-representations-and-risks-1 | 2010.04647 | Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation | The success of supervised learning hinges on the assumption that the training and test data come from the same underlying distribution, which is often not valid in practice due to potential distribution shift. In light of this, most existing methods for unsupervised domain adaptation focus on achieving domain-invariant representations and small source domain error. However, recent works have shown that this is not sufficient to guarantee good generalization on the target domain, and in fact, is provably detrimental under label distribution shift. Furthermore, in many real-world applications it is often feasible to obtain a small amount of labeled data from the target domain and use them to facilitate model training with source data. Inspired by the above observations, in this paper we propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA). First, we provide a finite sample bound for both classification and regression problems under Semi-DA. The bound suggests a principled way to obtain target generalization, i.e. by aligning both the marginal and conditional distributions across domains in feature space. Motivated by this, we then introduce the LIRR algorithm for jointly \textbf{L}earning \textbf{I}nvariant \textbf{R}epresentations and \textbf{R}isks. Finally, extensive experiments are conducted on both classification and regression tasks, which demonstrates LIRR consistently achieves state-of-the-art performance and significant improvements compared with the methods that only learn invariant representations or invariant risks. | https://arxiv.org/abs/2010.04647v3 | https://arxiv.org/pdf/2010.04647v3.pdf | CVPR 2021 1 | [
"Bo Li",
"Yezhen Wang",
"Shanghang Zhang",
"Dongsheng Li",
"Trevor Darrell",
"Kurt Keutzer",
"Han Zhao"
] | [
"Domain Adaptation",
"Unsupervised Domain Adaptation"
] | 1,602,201,600,000 | [] | 171,504 |
161,499 | https://paperswithcode.com/paper/semi-supervised-learning-with-the-em | 2008.12442 | Semi-supervised Learning with the EM Algorithm: A Comparative Study between Unstructured and Structured Prediction | Semi-supervised learning aims to learn prediction models from both labeled and unlabeled samples. There has been extensive research in this area. Among existing work, generative mixture models with Expectation-Maximization (EM) is a popular method due to clear statistical properties. However, existing literature on EM-based semi-supervised learning largely focuses on unstructured prediction, assuming that samples are independent and identically distributed. Studies on EM-based semi-supervised approach in structured prediction is limited. This paper aims to fill the gap through a comparative study between unstructured and structured methods in EM-based semi-supervised learning. Specifically, we compare their theoretical properties and find that both methods can be considered as a generalization of self-training with soft class assignment of unlabeled samples, but the structured method additionally considers structural constraint in soft class assignment. We conducted a case study on real-world flood mapping datasets to compare the two methods. Results show that structured EM is more robust to class confusion caused by noise and obstacles in features in the context of the flood mapping application. | https://arxiv.org/abs/2008.12442v1 | https://arxiv.org/pdf/2008.12442v1.pdf | null | [
"Wenchong He",
"Zhe Jiang"
] | [
"Structured Prediction"
] | 1,598,572,800,000 | [] | 192,878 |
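The abstract above characterizes EM-based semi-supervised learning as self-training with soft class assignment of unlabeled samples. The sketch below illustrates that idea for a one-dimensional, two-class Gaussian mixture with unit variance; the synthetic data, the fixed variance, and the iteration count are assumptions made for the example and are not details from the paper.

```python
import numpy as np

def em_soft_self_training(x_lab, y_lab, x_unl, n_iter=20):
    """Semi-supervised EM for a 1-D, two-class Gaussian mixture with unit variance.

    Labeled samples keep hard assignments; unlabeled samples receive soft
    (posterior) class assignments that are re-estimated at every iteration.
    """
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    prior = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior class probabilities for the unlabeled samples.
        lik = np.exp(-0.5 * (x_unl[:, None] - mu[None, :]) ** 2) * prior[None, :]
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update means/priors from labeled (hard) + unlabeled (soft) counts.
        for k in (0, 1):
            w_lab = (y_lab == k).astype(float)
            num = (w_lab * x_lab).sum() + (resp[:, k] * x_unl).sum()
            den = w_lab.sum() + resp[:, k].sum()
            mu[k] = num / den
            prior[k] = den / (len(x_lab) + len(x_unl))
    return mu, prior

rng = np.random.default_rng(0)
x_lab = np.concatenate([rng.normal(-2, 1, 5), rng.normal(2, 1, 5)])
y_lab = np.array([0] * 5 + [1] * 5)
x_unl = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
print(em_soft_self_training(x_lab, y_lab, x_unl))
```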
181,870 | https://paperswithcode.com/paper/a-time-series-analysis-based-forecasting | 1705.01144 | A Time Series Analysis-Based Forecasting Framework for the Indian Healthcare Sector | Designing efficient and robust algorithms for accurate prediction of stock
market prices is one of the most exciting challenges in the field of time
series analysis and forecasting. With the exponential rate of development and
evolution of sophisticated algorithms and with the availability of fast
computing platforms, it has now become possible to effectively and efficiently
extract, store, process and analyze high volume of stock market data with
diversity in its contents. Availability of complex algorithms which can execute
very fast on parallel architecture over the cloud has made it possible to
achieve higher accuracy in forecasting results while reducing the time required
for computation. In this paper, we use the time series data of the healthcare
sector of India for the period January 2010 till December 2016. We first
demonstrate a decomposition approach of the time series and then illustrate how
the decomposition results provide us with useful insights into the behavior and
properties exhibited by the time series. Further, based on the structural
analysis of the time series, we propose six different methods of forecasting
for predicting the time series index of the healthcare sector. Extensive
results are provided on the performance of the forecasting methods to
demonstrate their effectiveness. | http://arxiv.org/abs/1705.01144v1 | http://arxiv.org/pdf/1705.01144v1.pdf | null | [] | [
"Time Series",
"Time Series Analysis"
] | 1,493,078,400,000 | [] | 50,021 |
296,066 | https://paperswithcode.com/paper/temporally-precise-action-spotting-in-soccer | 2205.10450 | Temporally Precise Action Spotting in Soccer Videos Using Dense Detection Anchors | We present a model for temporally precise action spotting in videos, which uses a dense set of detection anchors, predicting a detection confidence and corresponding fine-grained temporal displacement for each anchor. We experiment with two trunk architectures, both of which are able to incorporate large temporal contexts while preserving the smaller-scale features required for precise localization: a one-dimensional version of a u-net, and a Transformer encoder (TE). We also suggest best practices for training models of this kind, by applying Sharpness-Aware Minimization (SAM) and mixup data augmentation. We achieve a new state-of-the-art on SoccerNet-v2, the largest soccer video dataset of its kind, with marked improvements in temporal localization. Additionally, our ablations show: the importance of predicting the temporal displacements; the trade-offs between the u-net and TE trunks; and the benefits of training with SAM and mixup. | https://arxiv.org/abs/2205.10450v2 | https://arxiv.org/pdf/2205.10450v2.pdf | null | [
"João V. B. Soares",
"Avijit Shah",
"Topojoy Biswas"
] | [
"Action Spotting",
"Data Augmentation",
"Temporal Localization"
] | 1,653,004,800,000 | [
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
},
{
"code_snippet_url": "https://github.com/facebookresearch/mixup-cifar10",
"description": "**Mixup** is a data augmentation technique that that generates a weighted combinations of random image pairs from the training data. Given two images and their ground truth labels: $\\left(x\\_{i}, y\\_{i}\\right), \\left(x\\_{j}, y\\_{j}\\right)$, a synthetic training example $\\left(\\hat{x}, \\hat{y}\\right)$ is generated as:\r\n\r\n$$ \\hat{x} = \\lambda{x\\_{i}} + \\left(1 − \\lambda\\right){x\\_{j}} $$\r\n$$ \\hat{y} = \\lambda{y\\_{i}} + \\left(1 − \\lambda\\right){y\\_{j}} $$\r\n\r\nwhere $\\lambda \\sim \\text{Beta}\\left(\\alpha = 0.2\\right)$ is independently sampled for each augmented example.",
"full_name": "Mixup",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.",
"name": "Image Data Augmentation",
"parent": null
},
"name": "Mixup",
"source_title": "mixup: Beyond Empirical Risk Minimization",
"source_url": "http://arxiv.org/abs/1710.09412v2"
},
{
"code_snippet_url": null,
"description": "**Sharpness-Aware Minimization**, or **SAM**, is a procedure that improves model generalization by simultaneously minimizing loss value and loss sharpness. SAM functions by seeking parameters that lie in neighborhoods having uniformly low loss value (rather than parameters that only themselves have low loss value).",
"full_name": "Sharpness-Aware Minimization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Optimization",
"parent": null
},
"name": "Sharpness-Aware Minimization",
"source_title": "Sharpness-Aware Minimization for Efficiently Improving Generalization",
"source_url": "https://arxiv.org/abs/2010.01412v3"
}
] | 82,240 |
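The Mixup entry in the method list above states the interpolation rule $\hat{x} = \lambda x_i + (1-\lambda)x_j$, $\hat{y} = \lambda y_i + (1-\lambda)y_j$ with $\lambda \sim \text{Beta}(\alpha = 0.2)$. Below is a minimal NumPy sketch of that augmentation; the batch shapes, the one-hot labels, and the pairing of samples by a random permutation are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup augmentation: convex combinations of random sample pairs.

    x : (batch, ...) array of inputs (e.g. images)
    y : (batch, num_classes) array of one-hot labels
    A separate lambda ~ Beta(alpha, alpha) is drawn for each example.
    """
    rng = rng or np.random.default_rng()
    batch = x.shape[0]
    lam = rng.beta(alpha, alpha, size=batch)
    perm = rng.permutation(batch)                        # random pairing partner per sample
    lam_x = lam.reshape((batch,) + (1,) * (x.ndim - 1))  # broadcast lambda over feature dims
    x_mix = lam_x * x + (1.0 - lam_x) * x[perm]
    y_mix = lam[:, None] * y + (1.0 - lam[:, None]) * y[perm]
    return x_mix, y_mix

# Toy usage: a batch of 8 "images" (16x16x3) with 5 classes.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16, 16, 3))
y = np.eye(5)[rng.integers(0, 5, size=8)]
x_mix, y_mix = mixup_batch(x, y, alpha=0.2, rng=rng)
print(x_mix.shape, y_mix.shape)  # (8, 16, 16, 3) (8, 5)
```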
239,860 | https://paperswithcode.com/paper/pq-transformer-jointly-parsing-3d-objects-and | 2109.05566 | PQ-Transformer: Jointly Parsing 3D Objects and Layouts from Point Clouds | 3D scene understanding from point clouds plays a vital role for various robotic applications. Unfortunately, current state-of-the-art methods use separate neural networks for different tasks like object detection or room layout estimation. Such a scheme has two limitations: 1) Storing and running several networks for different tasks are expensive for typical robotic platforms. 2) The intrinsic structure of separate outputs are ignored and potentially violated. To this end, we propose the first transformer architecture that predicts 3D objects and layouts simultaneously, using point cloud inputs. Unlike existing methods that either estimate layout keypoints or edges, we directly parameterize room layout as a set of quads. As such, the proposed architecture is termed as P(oint)Q(uad)-Transformer. Along with the novel quad representation, we propose a tailored physical constraint loss function that discourages object-layout interference. The quantitative and qualitative evaluations on the public benchmark ScanNet show that the proposed PQ-Transformer succeeds to jointly parse 3D objects and layouts, running at a quasi-real-time (8.91 FPS) rate without efficiency-oriented optimization. Moreover, the new physical constraint loss can improve strong baselines, and the F1-score of the room layout is significantly promoted from 37.9% to 57.9%. | https://arxiv.org/abs/2109.05566v2 | https://arxiv.org/pdf/2109.05566v2.pdf | null | [
"Xiaoxue Chen",
"Hao Zhao",
"Guyue Zhou",
"Ya-Qin Zhang"
] | [
"Object Detection",
"Object Detection",
"Room Layout Estimation",
"Scene Understanding"
] | 1,631,404,800,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Non Maximum Suppression** is a computer vision method that selects a single entity out of many overlapping entities (for example bounding boxes in object detection). The criteria is usually discarding entities that are below a given probability bound. With remaining entities we repeatedly pick the entity with the highest probability, output that as the prediction, and discard any remaining box where a $\\text{IoU} \\geq 0.5$ with the box output in the previous step.\r\n\r\nImage Credit: [Martin Kersner](https://github.com/martinkersner/non-maximum-suppression-cpp)",
"full_name": "Non Maximum Suppression",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Proposal Filtering",
"parent": null
},
"name": "Non Maximum Suppression",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**PQ-Transformer**, or **PointQuad-Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture that predicts 3D objects and layouts simultaneously, using point cloud inputs. Unlike existing methods that either estimate layout keypoints or edges, room layouts are directly parameterized as a set of quads. Along with the quad representation, a physical constraint loss function is used that discourages object-layout interference.\r\n\r\nGiven an input 3D point cloud of $N$ points, the point cloud feature learning backbone extracts $M$ context-aware point features of $\\left(3+C\\right)$ dimensions, through sampling and grouping. A voting module and a farthest point sampling (FPS) module are used to generate $K\\_{1}$ object proposals and $K\\_{2}$ quad proposals respectively. Then the proposals are processed by a transformer decoder to further refine proposal features. Through several feedforward layers and non-maximum suppression (NMS), the proposals become the final object bounding boxes and layout quads.",
"full_name": "PointQuad-Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Point Cloud Models",
"parent": null
},
"name": "PQ-Transformer",
"source_title": "PQ-Transformer: Jointly Parsing 3D Objects and Layouts from Point Clouds",
"source_url": "https://arxiv.org/abs/2109.05566v2"
}
] | 8,729 |
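The multi-head attention entry in the record above gives the formula but no snippet. Below is a minimal PyTorch sketch of that computation (scaled dot-product attention run over h parallel heads, concatenated and projected by W_0); module and dimension names are illustrative and not taken from the linked repository.

```python
# Minimal multi-head attention sketch; assumes PyTorch. Shapes: (batch, seq, d_model).
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads, self.d_head = num_heads, d_model // num_heads
        # Per-head projections W_i^Q, W_i^K, W_i^V fused into single linear layers.
        self.w_q, self.w_k, self.w_v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.w_o = nn.Linear(d_model, d_model)  # W_0 in the formula above

    def forward(self, q, k, v):
        batch, seq_len, d_model = q.shape
        split = lambda x: x.view(batch, -1, self.num_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)  # scaled dot product
        heads = scores.softmax(dim=-1) @ v                         # (batch, h, seq, d_head)
        heads = heads.transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.w_o(heads)                                     # concatenate + linear

# Example: x = torch.randn(2, 10, 64); MultiHeadAttention(64, 8)(x, x, x).shape -> (2, 10, 64)
```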
115,687 | https://paperswithcode.com/paper/learning-vector-valued-functions-with-local | 1909.04883 | Semi-supervised Vector-valued Learning: From Theory to Algorithm | Vector-valued learning, where the output space admits a vector-valued structure, is an important problem that covers a broad family of important domains, e.g. multi-label learning and multi-class classification. Using local Rademacher complexity and unlabeled data, we derive novel data-dependent excess risk bounds for learning vector-valued functions in both the kernel space and linear space. The derived bounds are much sharper than existing ones, where convergence rates are improved from $\mathcal{O}(1/\sqrt{n})$ to $\mathcal{O}(1/\sqrt{n+u}),$ and $\mathcal{O}(1/n)$ in special cases. Motivated by our theoretical analysis, we propose a unified framework for learning vector-valued functions, incorporating both local Rademacher complexity and Laplacian regularization. Empirical results on a wide number of benchmark datasets show that the proposed algorithm significantly outperforms baseline methods, which coincides with our theoretical findings. | https://arxiv.org/abs/1909.04883v3 | https://arxiv.org/pdf/1909.04883v3.pdf | null | [
"Jian Li",
"Yong liu",
"Weiping Wang"
] | [
"Multi-class Classification",
"Multi-Label Learning"
] | 1,568,160,000,000 | [] | 67,867 |
24,470 | https://paperswithcode.com/paper/finite-time-stabilization-of-longitudinal | 1704.01383 | Finite-Time Stabilization of Longitudinal Control for Autonomous Vehicles via a Model-Free Approach | This communication presents a longitudinal model-free control approach for
computing the wheel torque command to be applied on a vehicle. This setting
enables us to overcome the problem of unknown vehicle parameters for generating
a suitable control law. An important parameter in this control setting is made
time-varying for ensuring finite-time stability. Several convincing computer
simulations are displayed and discussed. Overshoots therefore become smaller.
The driving comfort is increased and the robustness to time-delays is improved. | http://arxiv.org/abs/1704.01383v1 | http://arxiv.org/pdf/1704.01383v1.pdf | null | [
"Philip Polack",
"Brigitte d'Andréa-Novel",
"Michel Fliess",
"Arnaud de La Fortelle",
"Lghani Menhour"
] | [
"Autonomous Vehicles"
] | 1,491,350,400,000 | [] | 13,224 |
37,202 | https://paperswithcode.com/paper/driverseat-crowdstrapping-learning-tasks-for | 1512.01872 | Driverseat: Crowdstrapping Learning Tasks for Autonomous Driving | While emerging deep-learning systems have outclassed knowledge-based
approaches in many tasks, their application to detection tasks for autonomous
technologies remains an open field for scientific exploration. Broadly, there
are two major developmental bottlenecks: the unavailability of comprehensively
labeled datasets and of expressive evaluation strategies. Approaches for
labeling datasets have relied on intensive hand-engineering, and strategies for
evaluating learning systems have been unable to identify failure-case
scenarios. Human intelligence offers an untapped approach for breaking through
these bottlenecks. This paper introduces Driverseat, a technology for embedding
crowds around learning systems for autonomous driving. Driverseat utilizes
crowd contributions for (a) collecting complex 3D labels and (b) tagging
diverse scenarios for ready evaluation of learning systems. We demonstrate how
Driverseat can crowdstrap a convolutional neural network on the lane-detection
task. More generally, crowdstrapping introduces a valuable paradigm for any
technology that can benefit from leveraging the powerful combination of human
and computer intelligence. | http://arxiv.org/abs/1512.01872v1 | http://arxiv.org/pdf/1512.01872v1.pdf | null | [
"Pranav Rajpurkar",
"Toki Migimatsu",
"Jeff Kiske",
"Royce Cheng-Yue",
"Sameep Tandon",
"Tao Wang",
"Andrew Ng"
] | [
"Autonomous Driving",
"Lane Detection"
] | 1,449,446,400,000 | [] | 135,740 |
47,047 | https://paperswithcode.com/paper/large-scale-visual-recommendations-from | 1401.1778 | Large Scale Visual Recommendations From Street Fashion Images | We describe a completely automated large scale visual recommendation system
for fashion. Our focus is to efficiently harness the availability of large
quantities of online fashion images and their rich meta-data. Specifically, we
propose four data driven models in the form of Complementary Nearest Neighbor
Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain
LDA for solving this problem. We analyze relative merits and pitfalls of these
algorithms through extensive experimentation on a large-scale data set and
baseline them against existing ideas from color science. We also illustrate key
fashion insights learned through these experiments and show how they can be
employed to design better recommendation systems. Finally, we also outline a
large-scale annotated data set of fashion images (Fashion-136K) that can be
exploited for future vision research. | http://arxiv.org/abs/1401.1778v1 | http://arxiv.org/pdf/1401.1778v1.pdf | null | [
"Vignesh Jagadeesh",
"Robinson Piramuthu",
"Anurag Bhardwaj",
"Wei Di",
"Neel Sundaresan"
] | [
"Recommendation Systems"
] | 1,389,139,200,000 | [] | 20,994 |
13,395 | https://paperswithcode.com/paper/a-generalised-seizure-prediction-with | 1707.01976 | A Generalised Seizure Prediction with Convolutional Neural Networks for Intracranial and Scalp Electroencephalogram Data Analysis | Seizure prediction has attracted growing attention as one of the most
challenging predictive data analysis efforts in order to improve the life of
patients living with drug-resistant epilepsy and tonic seizures. Many
outstanding works have been reporting great results in providing a sensible
indirect (warning systems) or direct (interactive neural-stimulation) control
over refractory seizures, some of which achieved high performance. However,
many works rely on heavily handcrafted feature extraction and/or feature
engineering carefully tailored to each patient to achieve very high sensitivity and low
false prediction rate for a particular dataset. This limits the benefit of
their approaches if a different dataset is used. In this paper we apply
Convolutional Neural Networks (CNNs) on different intracranial and scalp
electroencephalogram (EEG) datasets and propose a generalized retrospective
and patient-specific seizure prediction method. We use Short-Time Fourier
Transform (STFT) on 30-second EEG windows with 50% overlapping to extract
information in both frequency and time domains. A standardization step is then
applied to the STFT components across the whole frequency range to prevent
high-frequency features from being influenced by those at lower frequencies. A
convolutional neural network model is used for both feature extraction and
classification to separate preictal segments from interictal ones. The proposed
approach achieves sensitivity of 81.4%, 81.2%, 82.3% and false prediction rate
(FPR) of 0.06/h, 0.16/h, 0.22/h on Freiburg Hospital intracranial EEG (iEEG)
dataset, Children's Hospital of Boston-MIT scalp EEG (sEEG) dataset, and Kaggle
American Epilepsy Society Seizure Prediction Challenge's dataset, respectively.
Our prediction method is also statistically better than an unspecific random
predictor for most patients in all three datasets. | http://arxiv.org/abs/1707.01976v2 | http://arxiv.org/pdf/1707.01976v2.pdf | null | [
"Nhan Duy Truong",
"Anh Duy Nguyen",
"Levin Kuhlmann",
"Mohammad Reza Bonyadi",
"Jiawei Yang",
"Omid Kavehei"
] | [
"EEG",
"Feature Engineering",
"Seizure prediction"
] | 1,499,299,200,000 | [] | 177,803 |
73,265 | https://paperswithcode.com/paper/estimation-of-information-theoretic-measures | null | Estimation of Information Theoretic Measures for Continuous Random Variables | We analyze the estimation of information theoretic measures of continuous random variables such as: differential entropy, mutual information or Kullback-Leibler divergence. The objective of this paper is two-fold. First, we prove that the information theoretic measure estimates using the k-nearest-neighbor density estimation with fixed k converge almost surely, even though the k-nearest-neighbor density estimation with fixed k does not converge to its true measure. Second, we show that the information theoretic measure estimates do not converge for k growing linearly with the number of samples. Nevertheless, these nonconvergent estimates can be used for solving the two-sample problem and assessing if two random variables are independent. We show that the two-sample and independence tests based on these nonconvergent estimates compare favorably with the maximum mean discrepancy test and the Hilbert Schmidt independence criterion, respectively. | http://papers.nips.cc/paper/3417-estimation-of-information-theoretic-measures-for-continuous-random-variables | http://papers.nips.cc/paper/3417-estimation-of-information-theoretic-measures-for-continuous-random-variables.pdf | NeurIPS 2008 12 | [
"Fernando Pérez-Cruz"
] | [
"Density Estimation"
] | 1,228,089,600,000 | [] | 10,956 |
308,102 | https://paperswithcode.com/paper/capabilities-limitations-and-challenges-of | 2207.08989 | Capabilities, Limitations and Challenges of Style Transfer with CycleGANs: A Study on Automatic Ring Design Generation | Rendering programs have changed the design process completely as they permit to see how the products will look before they are fabricated. However, the rendering process is complicated and takes a significant amount of time, not only in the rendering itself but in the setting of the scene as well. Materials, lights and cameras need to be set in order to get the best quality results. Nevertheless, the optimal output may not be obtained in the first render. This all makes the rendering process a tedious process. Since Goodfellow et al. introduced Generative Adversarial Networks (GANs) in 2014 [1], they have been used to generate computer-assigned synthetic data, from non-existing human faces to medical data analysis or image style transfer. GANs have been used to transfer image textures from one domain to another. However, paired data from both domains was needed. When Zhu et al. introduced the CycleGAN model, the elimination of this expensive constraint permitted transforming one image from one domain into another, without the need for paired data. This work validates the applicability of CycleGANs on style transfer from an initial sketch to a final render in 2D that represents a 3D design, a step that is paramount in every product design process. We inquiry the possibilities of including CycleGANs as part of the design pipeline, more precisely, applied to the rendering of ring designs. Our contribution entails a crucial part of the process as it allows the customer to see the final product before buying. This work sets a basis for future research, showing the possibilities of GANs in design and establishing a starting point for novel applications to approach crafts design. | https://arxiv.org/abs/2207.08989v1 | https://arxiv.org/pdf/2207.08989v1.pdf | null | [
"Tomas Cabezon Pedroso",
"Javier Del Ser",
"Natalia Diaz-Rodriguez"
] | [
"Style Transfer"
] | 1,658,102,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/eriklindernoren/PyTorch-GAN/blob/a163b82beff3d01688d8315a3fd39080400e7c01/implementations/lsgan/lsgan.py#L102",
"description": "**GAN Least Squares Loss** is a least squares loss function for generative adversarial networks. Minimizing this objective function is equivalent to minimizing the Pearson $\\chi^{2}$ divergence. The objective function (here for [LSGAN](https://paperswithcode.com/method/lsgan)) can be defined as:\r\n\r\n$$ \\min\\_{D}V\\_{LS}\\left(D\\right) = \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{x} \\sim p\\_{data}\\left(\\mathbf{x}\\right)}\\left[\\left(D\\left(\\mathbf{x}\\right) - b\\right)^{2}\\right] + \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{z}\\sim p\\_{data}\\left(\\mathbf{z}\\right)}\\left[\\left(D\\left(G\\left(\\mathbf{z}\\right)\\right) - a\\right)^{2}\\right] $$\r\n\r\n$$ \\min\\_{G}V\\_{LS}\\left(G\\right) = \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{z} \\sim p\\_{\\mathbf{z}}\\left(\\mathbf{z}\\right)}\\left[\\left(D\\left(G\\left(\\mathbf{z}\\right)\\right) - c\\right)^{2}\\right] $$\r\n\r\nwhere $a$ and $b$ are the labels for fake data and real data and $c$ denotes the value that $G$ wants $D$ to believe for fake data.",
"full_name": "GAN Least Squares Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "GAN Least Squares Loss",
"source_title": "Least Squares Generative Adversarial Networks",
"source_url": "http://arxiv.org/abs/1611.04076v3"
},
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L649",
"description": "**Leaky Rectified Linear Unit**, or **Leaky ReLU**, is a type of activation function based on a [ReLU](https://paperswithcode.com/method/relu), but it has a small slope for negative values instead of a flat slope. The slope coefficient is determined before training, i.e. it is not learnt during training. This type of activation function is popular in tasks where we we may suffer from sparse gradients, for example training generative adversarial networks.",
"full_name": "Leaky ReLU",
"introduced_year": 2014,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Leaky ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/znxlwm/pytorch-pix2pix/blob/3059f2af53324e77089bbcfc31279f01a38c40b8/network.py#L104",
"description": "**PatchGAN** is a type of discriminator for generative adversarial networks which only penalizes structure at the scale of local image patches. The PatchGAN discriminator tries to classify if each $N \\times N$ patch in an image is real or fake. This discriminator is run convolutionally across the image, averaging all responses to provide the ultimate output of $D$. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. It can be understood as a type of texture/style loss.",
"full_name": "PatchGAN",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Discriminators** are a type of module used in architectures such as generative adversarial networks to discriminate between real and generated samples. Below you can find a continuously updating list of discriminators.",
"name": "Discriminators",
"parent": null
},
"name": "PatchGAN",
"source_title": "Image-to-Image Translation with Conditional Adversarial Networks",
"source_url": "http://arxiv.org/abs/1611.07004v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/instancenorm.py#L141",
"description": "**Instance Normalization** (also known as contrast normalization) is a normalization layer where:\r\n\r\n$$\r\n y_{tijk} = \\frac{x_{tijk} - \\mu_{ti}}{\\sqrt{\\sigma_{ti}^2 + \\epsilon}},\r\n \\quad\r\n \\mu_{ti} = \\frac{1}{HW}\\sum_{l=1}^W \\sum_{m=1}^H x_{tilm},\r\n \\quad\r\n \\sigma_{ti}^2 = \\frac{1}{HW}\\sum_{l=1}^W \\sum_{m=1}^H (x_{tilm} - mu_{ti})^2.\r\n$$\r\n\r\nThis prevents instance-specific mean and covariance shift simplifying the learning process. Intuitively, the normalization process allows to remove instance-specific contrast information from the content image in a task like image stylization, which simplifies generation.",
"full_name": "Instance Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Instance Normalization",
"source_title": "Instance Normalization: The Missing Ingredient for Fast Stylization",
"source_url": "http://arxiv.org/abs/1607.08022v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/f5834b3ed339ec268f40cf56928234eed8dfeb92/models/cycle_gan_model.py#L172",
"description": "**Cycle Consistency Loss** is a type of loss used for generative adversarial networks that performs unpaired image-to-image translation. It was introduced with the [CycleGAN](https://paperswithcode.com/method/cyclegan) architecture. For two domains $X$ and $Y$, we want to learn a mapping $G : X \\rightarrow Y$ and $F: Y \\rightarrow X$. We want to enforce the intuition that these mappings should be reverses of each other and that both mappings should be bijections. Cycle Consistency Loss encourages $F\\left(G\\left(x\\right)\\right) \\approx x$ and $G\\left(F\\left(y\\right)\\right) \\approx y$. It reduces the space of possible mapping functions by enforcing forward and backwards consistency:\r\n\r\n$$ \\mathcal{L}\\_{cyc}\\left(G, F\\right) = \\mathbb{E}\\_{x \\sim p\\_{data}\\left(x\\right)}\\left[||F\\left(G\\left(x\\right)\\right) - x||\\_{1}\\right] + \\mathbb{E}\\_{y \\sim p\\_{data}\\left(y\\right)}\\left[||G\\left(F\\left(y\\right)\\right) - y||\\_{1}\\right] $$",
"full_name": "Cycle Consistency Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Cycle Consistency Loss",
"source_title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"source_url": "https://arxiv.org/abs/1703.10593v7"
},
{
"code_snippet_url": "https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9e6fff7b7d5215a38be3cac074ca7087041bea0d/models/cycle_gan_model.py#L8",
"description": "**CycleGAN**, or **Cycle-Consistent GAN**, is a type of generative adversarial network for unpaired image-to-image translation. For two domains $X$ and $Y$, CycleGAN learns a mapping $G : X \\rightarrow Y$ and $F: Y \\rightarrow X$. The novelty lies in trying to enforce the intuition that these mappings should be reverses of each other and that both mappings should be bijections. This is achieved through a [cycle consistency loss](https://paperswithcode.com/method/cycle-consistency-loss) that encourages $F\\left(G\\left(x\\right)\\right) \\approx x$ and $G\\left(Y\\left(y\\right)\\right) \\approx y$. Combining this loss with the adversarial losses on $X$ and $Y$ yields the full objective for unpaired image-to-image translation.\r\n\r\nFor the mapping $G : X \\rightarrow Y$ and its discriminator $D\\_{Y}$ we have the objective:\r\n\r\n$$ \\mathcal{L}\\_{GAN}\\left(G, D\\_{Y}, X, Y\\right) =\\mathbb{E}\\_{y \\sim p\\_{data}\\left(y\\right)}\\left[\\log D\\_{Y}\\left(y\\right)\\right] + \\mathbb{E}\\_{x \\sim p\\_{data}\\left(x\\right)}\\left[log(1 − D\\_{Y}\\left(G\\left(x\\right)\\right)\\right] $$\r\n\r\nwhere $G$ tries to generate images $G\\left(x\\right)$ that look similar to images from domain $Y$, while $D\\_{Y}$ tries to discriminate between translated samples $G\\left(x\\right)$ and real samples $y$. A similar loss is postulated for the mapping $F: Y \\rightarrow X$ and its discriminator $D\\_{X}$.\r\n\r\nThe Cycle Consistency Loss reduces the space of possible mapping functions by enforcing forward and backwards consistency:\r\n\r\n$$ \\mathcal{L}\\_{cyc}\\left(G, F\\right) = \\mathbb{E}\\_{x \\sim p\\_{data}\\left(x\\right)}\\left[||F\\left(G\\left(x\\right)\\right) - x||\\_{1}\\right] + \\mathbb{E}\\_{y \\sim p\\_{data}\\left(y\\right)}\\left[||G\\left(F\\left(y\\right)\\right) - y||\\_{1}\\right] $$\r\n\r\nThe full objective is:\r\n\r\n$$ \\mathcal{L}\\_{GAN}\\left(G, F, D\\_{X}, D\\_{Y}\\right) = \\mathcal{L}\\_{GAN}\\left(G, D\\_{Y}, X, Y\\right) + \\mathcal{L}\\_{GAN}\\left(F, D\\_{X}, X, Y\\right) + \\lambda\\mathcal{L}\\_{cyc}\\left(G, F\\right) $$\r\n\r\nWhere we aim to solve:\r\n\r\n$$ G^{\\*}, F^{\\*} = \\arg \\min\\_{G, F} \\max\\_{D\\_{X}, D\\_{Y}} \\mathcal{L}\\_{GAN}\\left(G, F, D\\_{X}, D\\_{Y}\\right) $$\r\n\r\nFor the original architecture the authors use:\r\n\r\n- two stride-2 convolutions, several residual blocks, and two fractionally strided convolutions with stride $\\frac{1}{2}$.\r\n- [instance normalization](https://paperswithcode.com/method/instance-normalization)\r\n- PatchGANs for the discriminator\r\n- Least Square Loss for the [GAN](https://paperswithcode.com/method/gan) objectives.",
"full_name": "CycleGAN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "CycleGAN",
"source_title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"source_url": "https://arxiv.org/abs/1703.10593v7"
}
] | 25,155 |
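As a concrete illustration of the cycle-consistency term defined in the two method entries above, here is a minimal PyTorch sketch; G and F stand for any two generator modules mapping between domains X and Y, and the weighting against the adversarial losses is left out.

```python
# Minimal sketch of L_cyc(G, F) = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]; assumes PyTorch.
import torch
import torch.nn.functional as F_loss

def cycle_consistency_loss(G, F, x, y):
    """L1 reconstruction error after the X -> Y -> X and Y -> X -> Y round trips."""
    forward_cycle = F_loss.l1_loss(F(G(x)), x)    # encourages F(G(x)) ~ x
    backward_cycle = F_loss.l1_loss(G(F(y)), y)   # encourages G(F(y)) ~ y
    return forward_cycle + backward_cycle

# In a CycleGAN-style objective this term is scaled by lambda and added to the
# two adversarial (e.g. least-squares) losses before backpropagation.
```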
295,523 | https://paperswithcode.com/paper/domain-generalization-via-invariant-feature | null | Domain generalization via invariant feature representation | This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice. | http://proceedings.mlr.press/v28/muandet13.html | http://proceedings.mlr.press/v28/muandet13.pdf | Proceedings of Machine Learning Research 2013 10 | [
"Krikamol Muandet",
"David Balduzzi",
"Bernhard Schölkopf"
] | [
"Domain Generalization"
] | 1,382,054,400,000 | [] | 122,701 |
288,215 | https://paperswithcode.com/paper/rmfgp-rotated-multi-fidelity-gaussian-process | 2204.04819 | RMFGP: Rotated Multi-fidelity Gaussian process with Dimension Reduction for High-dimensional Uncertainty Quantification | Multi-fidelity modelling arises in many situations in computational science and engineering world. It enables accurate inference even when only a small set of accurate data is available. Those data often come from a high-fidelity model, which is computationally expensive. By combining the realizations of the high-fidelity model with one or more low-fidelity models, the multi-fidelity method can make accurate predictions of quantities of interest. This paper proposes a new dimension reduction framework based on rotated multi-fidelity Gaussian process regression and a Bayesian active learning scheme when the available precise observations are insufficient. By drawing samples from the trained rotated multi-fidelity model, the so-called supervised dimension reduction problems can be solved following the idea of the sliced average variance estimation (SAVE) method combined with a Gaussian process regression dimension reduction technique. This general framework we develop can effectively solve high-dimensional problems while the data are insufficient for applying traditional dimension reduction methods. Moreover, a more accurate surrogate Gaussian process model of the original problem can be obtained based on our trained model. The effectiveness of the proposed rotated multi-fidelity Gaussian process(RMFGP) model is demonstrated in four numerical examples. The results show that our method has better performance in all cases and uncertainty propagation analysis is performed for last two cases involving stochastic partial differential equations. | https://arxiv.org/abs/2204.04819v1 | https://arxiv.org/pdf/2204.04819v1.pdf | null | [
"Jiahao Zhang",
"Shiqi Zhang",
"Guang Lin"
] | [
"Active Learning",
"Dimensionality Reduction"
] | 1,649,635,200,000 | [
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] | 47,755 |
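The record above lists Gaussian Process as its only method. Below is a minimal NumPy sketch of single-fidelity GP regression with an RBF kernel, i.e. the posterior the multi-fidelity model builds on; it is illustrative only, not the paper's RMFGP implementation, and the kernel hyperparameters are arbitrary.

```python
# Minimal Gaussian-process regression sketch (RBF kernel, exact posterior); assumes NumPy.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # k(a, b) = variance * exp(-||a - b||^2 / (2 * lengthscale^2)); a: (n, d), b: (m, d)
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and covariance at x_test given noisy observations y_train."""
    k_xx = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_xs = rbf_kernel(x_train, x_test)
    k_ss = rbf_kernel(x_test, x_test)
    mean = k_xs.T @ np.linalg.solve(k_xx, y_train)      # K_*^T K^-1 y
    cov = k_ss - k_xs.T @ np.linalg.solve(k_xx, k_xs)   # K_** - K_*^T K^-1 K_*
    return mean, cov

# x = np.linspace(0, 1, 8)[:, None]; y = np.sin(2 * np.pi * x).ravel()
# mu, cov = gp_posterior(x, y, np.linspace(0, 1, 50)[:, None])
```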
120,574 | https://paperswithcode.com/paper/learning-to-rank-proposals-for-object | null | Learning to Rank Proposals for Object Detection | Non-Maximum Suppression (NMS) is an essential step of modern object detection models for removing duplicated candidates. The efficacy of NMS heavily affects the final detection results. Prior works exploit suppression criterions relying on either the objectiveness derived from classification or the overlapness produced by regression, both of which are heuristically designed and fail to explicitly link with the suppression rank. To address this issue, in this paper, we propose a novel Learning-to-Rank (LTR) model to produce the suppression rank via a learning procedure, thus facilitating the candidate generation and lifting the detection performance. In particular, we define a ranking score based on IoU to indicate the ranks of candidates during the NMS step, where candidates with high ranking score will be reserved and the ones with low ranking score will be eliminated. We design a lightweight network to predict the ranking score. We introduce a ranking loss to supervise the generation of these ranking scores, which encourages candidates with IoU to the ground-truth to rank higher. To facilitate the training procedure, we design a novel sampling strategy via dividing candidates into different levels and select hard pairs to adopt in the training. During the inference phase, this module can be exploited as a plugin to the current object detector. The training and inference of the overall framework is end-to-end. Comprehensive experiments on benchmarks PASCAL VOC and MS COCO demonstrate the generality and effectiveness of our model for facilitating existing object detectors to state-of-the-art accuracy.
| http://openaccess.thecvf.com/content_ICCV_2019/html/Tan_Learning_to_Rank_Proposals_for_Object_Detection_ICCV_2019_paper.html | http://openaccess.thecvf.com/content_ICCV_2019/papers/Tan_Learning_to_Rank_Proposals_for_Object_Detection_ICCV_2019_paper.pdf | ICCV 2019 10 | [
"Zhiyu Tan",
" Xuecheng Nie",
" Qi Qian",
" Nan Li",
" Hao Li"
] | [
"Learning-To-Rank",
"Object Detection",
"Object Detection"
] | 1,569,888,000,000 | [] | 13,100 |
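The abstract above describes supervising an IoU-based suppression-ranking score with a ranking loss over sampled candidate pairs. The snippet below is an illustrative PyTorch sketch of that idea only, not the paper's implementation; the margin and IoU-gap thresholds are made-up values.

```python
# Illustrative pairwise ranking loss over candidate boxes; assumes PyTorch.
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(pred_scores, ious, margin=0.1, iou_gap=0.15):
    """pred_scores, ious: 1D tensors over the candidates of one ground-truth object."""
    n = len(ious)
    i, j = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
    keep = ious[i] > ious[j] + iou_gap        # pairs where candidate i overlaps the GT clearly more
    if keep.sum() == 0:
        return pred_scores.new_zeros(())
    higher, lower = pred_scores[i][keep], pred_scores[j][keep]
    target = torch.ones_like(higher)          # "first argument should score higher"
    return F.margin_ranking_loss(higher, lower, target, margin=margin)
```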
32,896 | https://paperswithcode.com/paper/learning-interpretable-musical-compositional | 1606.05572 | Learning Interpretable Musical Compositional Rules and Traces | Throughout music history, theorists have identified and documented
interpretable rules that capture the decisions of composers. This paper asks,
"Can a machine behave like a music theorist?" It presents MUS-ROVER, a
self-learning system for automatically discovering rules from symbolic music.
MUS-ROVER performs feature learning via $n$-gram models to extract
compositional rules --- statistical patterns over the resulting features. We
evaluate MUS-ROVER on Bach's (SATB) chorales, demonstrating that it can recover
known rules, as well as identify new, characteristic patterns for further
study. We discuss how the extracted rules can be used in both machine and human
composition. | http://arxiv.org/abs/1606.05572v1 | http://arxiv.org/pdf/1606.05572v1.pdf | null | [
"Haizi Yu",
"Lav R. Varshney",
"Guy E. Garnett",
"Ranjitha Kumar"
] | [
"Self-Learning"
] | 1,466,121,600,000 | [] | 152,178 |
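The MUS-ROVER record above describes extracting compositional rules as statistical patterns over n-gram features. As a toy illustration only (not the MUS-ROVER system), the snippet below counts the most frequent n-grams in a symbolic sequence.

```python
# Toy n-gram pattern counter over a symbolic sequence; illustrative only.
from collections import Counter

def top_ngrams(sequence, n=2, k=5):
    """Return the k most frequent n-grams (as tuples) in a list of symbols."""
    grams = zip(*(sequence[i:] for i in range(n)))
    return Counter(grams).most_common(k)

# Example with scale degrees of a short melodic fragment:
# top_ngrams([1, 3, 5, 3, 1, 3, 5, 8, 5, 3, 1], n=2)
```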
286,770 | https://paperswithcode.com/paper/unsupervised-change-detection-based-on-image | 2204.01200 | Unsupervised Change Detection Based on Image Reconstruction Loss | To train the change detector, bi-temporal images taken at different times in the same area are used. However, collecting labeled bi-temporal images is expensive and time consuming. To solve this problem, various unsupervised change detection methods have been proposed, but they still require unlabeled bi-temporal images. In this paper, we propose unsupervised change detection based on image reconstruction loss using only unlabeled single temporal single image. The image reconstruction model is trained to reconstruct the original source image by receiving the source image and the photometrically transformed source image as a pair. During inference, the model receives bi-temporal images as the input, and tries to reconstruct one of the inputs. The changed region between bi-temporal images shows high reconstruction loss. Our change detector showed significant performance in various change detection benchmark datasets even though only a single temporal single source image was used. The code and trained models will be publicly available for reproducibility. | https://arxiv.org/abs/2204.01200v2 | https://arxiv.org/pdf/2204.01200v2.pdf | null | [
"Hyeoncheol Noh",
"Jingi Ju",
"Minseok Seo",
"Jongchan Park",
"Dong-Geol Choi"
] | [
"Change Detection",
"Image Reconstruction"
] | 1,649,030,400,000 | [] | 126,194 |
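The abstract above explains the inference step: feed a bi-temporal pair to the trained reconstruction model and use the per-pixel reconstruction error as the change indicator. The sketch below shows only that step under the assumption that the model takes both images and reconstructs the first; the model signature is hypothetical, since the authors' code is not part of this record.

```python
# Illustrative inference-time change map from reconstruction error; assumes PyTorch.
# The signature model(image_t1, image_t2) -> reconstruction of image_t1 is an assumption.
import torch

@torch.no_grad()
def change_map(model, image_t1, image_t2):
    """Per-pixel change score for co-registered images of shape (B, C, H, W)."""
    reconstruction = model(image_t1, image_t2)
    error = (reconstruction - image_t1).abs().mean(dim=1)  # mean absolute error over channels
    return error                                           # high error ~ changed region
```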
200,540 | https://paperswithcode.com/paper/hebert-hebemo-a-hebrew-bert-model-and-a-tool | 2102.01909 | HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition | This paper introduces HeBERT and HebEMO. HeBERT is a Transformer-based model for modern Hebrew text, which relies on a BERT (Bidirectional Encoder Representations for Transformers) architecture. BERT has been shown to outperform alternative architectures in sentiment analysis, and is suggested to be particularly appropriate for MRLs. Analyzing multiple BERT specifications, we find that while model complexity correlates with high performance on language tasks that aim to understand terms in a sentence, a more-parsimonious model better captures the sentiment of entire sentence. Either way, out BERT-based language model outperforms all existing Hebrew alternatives on all common language tasks. HebEMO is a tool that uses HeBERT to detect polarity and extract emotions from Hebrew UGC. HebEMO is trained on a unique Covid-19-related UGC dataset that we collected and annotated for this study. Data collection and annotation followed an active learning procedure that aimed to maximize predictability. We show that HebEMO yields a high F1-score of 0.96 for polarity classification. Emotion detection reaches F1-scores of 0.78-0.97 for various target emotions, with the exception of surprise, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even among English-language models of emotion detection. | https://arxiv.org/abs/2102.01909v3 | https://arxiv.org/pdf/2102.01909v3.pdf | null | [
"Avihay Chriqui",
"Inbal Yahav"
] | [
"Active Learning",
"Emotion Recognition",
"Language Modelling",
"Sentiment Analysis"
] | 1,612,310,400,000 | [
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
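A minimal NumPy sketch of the formula above for a single unbatched sequence: each head runs scaled dot-product attention on its own slice of the model dimension, and the concatenated heads are projected by $W_0$; all names and shapes are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Run scaled dot-product attention once per head and concatenate.
    X: (seq, d_model); Wq/Wk/Wv/Wo: (d_model, d_model)."""
    seq, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Split the model dimension into heads: (num_heads, seq, d_head).
    split = lambda M: M.reshape(seq, num_heads, d_head).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores) @ Vh                        # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq, d_model)
    return concat @ Wo

d_model, seq = 16, 6
rng = np.random.default_rng(0)
W = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4)]
X = rng.standard_normal((seq, d_model))
print(multi_head_attention(X, *W, num_heads=4).shape)  # (6, 16)
```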
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
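A minimal NumPy sketch of the update equations above; the hyperparameter defaults follow the typical values quoted in the description, and `adam_step` is an illustrative name, not a library API.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. w: parameters, g: gradient, m/v: first/second
    moment estimates, t: step count (1-based) for bias correction."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimise f(w) = ||w||^2 as a toy example.
w = np.array([1.0, -2.0]); m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=1e-2)
print(w)  # close to [0, 0]
```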
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
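A minimal sketch of the schedule, assuming the rate warms up from zero to a peak value and then decays linearly to zero at the final step; exact endpoint conventions vary between implementations.

```python
def linear_warmup_linear_decay(step, warmup_steps, total_steps, peak_lr):
    """Learning rate that rises linearly to peak_lr over warmup_steps,
    then decays linearly to zero at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

for s in (0, 50, 100, 550, 1000):
    print(s, round(linear_warmup_linear_decay(s, 100, 1000, 1e-3), 6))
```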
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
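A minimal NumPy sketch contrasting the two formulations mentioned above — an $L_2$ penalty added to the loss gradient versus decay written directly into the update rule; the step sizes and the factor-of-two bookkeeping are illustrative.

```python
import numpy as np

def sgd_step_l2(w, grad_loss, lr=0.1, lam=1e-2):
    """L2-regularized objective: the gradient of lambda * w^T w adds 2*lambda*w."""
    return w - lr * (grad_loss + 2 * lam * w)

def sgd_step_weight_decay(w, grad_loss, lr=0.1, decay=2e-2):
    """Equivalent decay written directly into the update rule:
    shrink the weights by lr*decay each step."""
    return (1 - lr * decay) * w - lr * grad_loss

w = np.array([1.0, -1.0])
g = np.zeros_like(w)  # zero loss gradient isolates the decay effect
print(sgd_step_l2(w, g), sgd_step_weight_decay(w, g))  # identical shrinkage
```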
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
}
] | 133,880 |
188,235 | https://paperswithcode.com/paper/deep-learning-decoding-of-mental-state-in-non | 1911.05661 | Deep Learning Decoding of Mental State in Non-invasive Brain Computer Interface | Brain computer interface (BCI) has become popular as a key approach to
monitoring our brains in recent years. Mental state monitoring is one of the most
important BCI applications and is becoming increasingly accessible. However, the
accuracy and generality of mental state prediction from electroencephalogram (EEG)
signals are not yet good enough for everyday use. In this paper we present a deep
learning-based EEG decoding method to read mental states. We propose a novel 1D
convolutional neural network with different filter lengths to capture information
from different frequency bands. To improve the prediction accuracy, we also use a
ResNet-like structure to train a relatively deep convolutional neural network
to promote feature extraction. Compared with traditional prediction methods
such as KNN and SVM, we achieve a significantly better result with an accuracy
of 96.40%. Also, in contrast with several published open-source deep
neural network structures, our method achieves state-of-the-art prediction
accuracy on a mental state recognition dataset. Our results demonstrate that
1D convolutions alone can extract EEG features and show the possibility of
mental state prediction using portable EEG devices. | http://arxiv.org/abs/1911.05661v1 | http://arxiv.org/pdf/1911.05661v1.pdf | null | [] | [
"EEG",
"Eeg Decoding"
] | 1,573,430,400,000 | [] | 43,311 |
114,734 | https://paperswithcode.com/paper/online-pedestrian-group-walking-event | 1909.01258 | Online Pedestrian Group Walking Event Detection Using Spectral Analysis of Motion Similarity Graph | A method for online identification of group of moving objects in the video is proposed in this paper. This method at each frame identifies group of tracked objects with similar local instantaneous motion pattern using spectral clustering on motion similarity graph. Then, the output of the algorithm is used to detect the event of more than two object moving together as required by PETS2015 challenge. The performance of the algorithm is evaluated on the PETS2015 dataset. | https://arxiv.org/abs/1909.01258v1 | https://arxiv.org/pdf/1909.01258v1.pdf | null | [
"Vahid Bastani",
"Damian Campo",
"Lucio Marcenaro",
"Carlo S. Regazzoni"
] | [
"Event Detection"
] | 1,567,468,800,000 | [
{
"code_snippet_url": "",
"description": "Spectral clustering has attracted increasing attention due to\r\nthe promising ability in dealing with nonlinearly separable datasets [15], [16]. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) constructs the low dimensional embedded representation of the data based on the eigenvectors of the graph Laplacian, 2) applies k-means on the constructed low dimensional data to obtain the clustering result. Thus,",
"full_name": "Spectral Clustering",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Spectral Clustering",
"source_title": "A Tutorial on Spectral Clustering",
"source_url": "http://arxiv.org/abs/0711.0189v1"
}
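A minimal sketch of the two steps described above, using the symmetric normalized Laplacian variant and scikit-learn's KMeans for the second step; the toy affinity matrix and the function name are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    """Two-step spectral clustering on an affinity matrix W:
    1) embed the points with the bottom eigenvectors of the normalized Laplacian,
    2) run k-means on the embedding."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    U = eigvecs[:, :k]                                      # k smallest eigenvalues
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Toy affinity matrix with two obvious groups {0,1,2} and {3,4}.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(spectral_clustering(W, k=2))
```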
] | 176,571 |
100,302 | https://paperswithcode.com/paper/supervized-segmentation-with-graph-structured | 1905.04014 | Supervized Segmentation with Graph-Structured Deep Metric Learning | We present a fully-supervized method for learning to segment data structured by an adjacency graph. We introduce the graph-structured contrastive loss, a loss function structured by a ground truth segmentation. It promotes learning vertex embeddings which are homogeneous within desired segments, and have high contrast at their interface. Thus, computing a piecewise-constant approximation of such embeddings produces a graph-partition close to the objective segmentation. This loss is fully backpropagable, which allows us to learn vertex embeddings with deep learning algorithms. We evaluate our methods on a 3D point cloud oversegmentation task, defining a new state-of-the-art by a large margin. These results are based on the published work of Landrieu and Boussaha 2019. | https://arxiv.org/abs/1905.04014v2 | https://arxiv.org/pdf/1905.04014v2.pdf | null | [
"Loic Landrieu",
"Mohamed Boussaha"
] | [
"Metric Learning"
] | 1,557,446,400,000 | [] | 123,999 |
253,105 | https://paperswithcode.com/paper/correcting-the-misuse-a-method-for-the | null | Correcting the Misuse: A Method for the Chinese Idiom Cloze Test | The cloze test for Chinese idioms is a new challenge in machine reading comprehension: given a sentence with a blank, choosing a candidate Chinese idiom which matches the context. Chinese idiom is a type of Chinese idiomatic expression. The common misuse of Chinese idioms leads to error in corpus and causes error in the learned semantic representation of Chinese idioms. In this paper, we introduce the definition written by Chinese experts to correct the misuse. We propose a model for the Chinese idiom cloze test integrating various information effectively. We propose an attention mechanism called Attribute Attention to balance the weight of different attributes among different descriptions of the Chinese idiom. Besides the given candidates of every blank, we also try to choose the answer from all Chinese idioms that appear in the dataset as the extra loss due to the uniqueness and specificity of Chinese idioms. In experiments, our model outperforms the state-of-the-art model. | https://aclanthology.org/2020.deelio-1.1 | https://aclanthology.org/2020.deelio-1.1.pdf | EMNLP (DeeLIO) 2020 11 | [
"Xinyu Wang",
"Hongsheng Zhao",
"Tan Yang",
"Hongbo Wang"
] | [
"Cloze Test",
"Machine Reading Comprehension",
"Reading Comprehension"
] | 1,604,188,800,000 | [] | 86,305 |
281,475 | https://paperswithcode.com/paper/synthetic-defect-generation-for-display-front | 2203.03429 | Synthetic Defect Generation for Display Front-of-Screen Quality Inspection: A Survey | Display front-of-screen (FOS) quality inspection is essential for the mass production of displays in the manufacturing process. However, the severe imbalanced data, especially the limited number of defect samples, has been a long-standing problem that hinders the successful application of deep learning algorithms. Synthetic defect data generation can help address this issue. This paper reviews the state-of-the-art synthetic data generation methods and the evaluation metrics that can potentially be applied to display FOS quality inspection tasks. | https://arxiv.org/abs/2203.03429v1 | https://arxiv.org/pdf/2203.03429v1.pdf | null | [
"Shancong Mou",
"Meng Cao",
"Zhendong Hong",
"Ping Huang",
"Jiulong Shan",
"Jianjun Shi"
] | [
"Synthetic Data Generation"
] | 1,646,265,600,000 | [] | 2,240 |
206,410 | https://paperswithcode.com/paper/trainless-model-performance-estimation-for | 2103.08312 | Trainless Model Performance Estimation for Neural Architecture Search | Neural architecture search has become an indispensable part of the deep learning field. Modern methods allow to find one of the best performing architectures, or to build one from scratch, but they typically make decisions based on the trained accuracy information. In the present article we explore instead how the architectural component of a neural network affects its prediction power. We focus on relationships between the trained accuracy of an architecture and its accuracy prior to training, by considering statistics over multiple initialisations. We observe that minimising the coefficient of variation of the untrained accuracy, $CV_{U}$, consistently leads to better performing architectures. We test the $CV_{U}$ as a neural architecture search scoring metric using the NAS-Bench-201 database of trained neural architectures. The architectures with the lowest $CV_{U}$ value have on average an accuracy of $91.90 \pm 2.27$, $64.08 \pm 5.63$ and $38.76 \pm 6.62$ for CIFAR-10, CIFAR-100 and a downscaled version of ImageNet, respectively. Since these values are statistically above the random baseline, we make a conclusion that a good architecture should be stable against weights initialisations. It takes about $190$ s for CIFAR-10 and CIFAR-100 and $133.9$ s for ImageNet16-120 to process $100$ architectures, on a batch of $256$ images, with $100$ initialisations. | https://arxiv.org/abs/2103.08312v2 | https://arxiv.org/pdf/2103.08312v2.pdf | null | [
"Ekaterina Gracheva"
] | [
"Neural Architecture Search"
] | 1,615,334,400,000 | [] | 174,806 |
162,258 | https://paperswithcode.com/paper/private-weighted-random-walk-stochastic | 2009.01790 | Private Weighted Random Walk Stochastic Gradient Descent | We consider a decentralized learning setting in which data is distributed over nodes in a graph. The goal is to learn a global model on the distributed data without involving any central entity that needs to be trusted. While gossip-based stochastic gradient descent (SGD) can be used to achieve this learning objective, it incurs high communication and computation costs, since it has to wait for all the local models at all the nodes to converge. To speed up the convergence, we propose instead to study random walk based SGD in which a global model is updated based on a random walk on the graph. We propose two algorithms based on two types of random walks that achieve, in a decentralized way, uniform sampling and importance sampling of the data. We provide a non-asymptotic analysis on the rate of convergence, taking into account the constants related to the data and the graph. Our numerical results show that the weighted random walk based algorithm has a better performance for high-variance data. Moreover, we propose a privacy-preserving random walk algorithm that achieves local differential privacy based on a Gamma noise mechanism that we propose. We also give numerical results on the convergence of this algorithm and show that it outperforms additive Laplace-based privacy mechanisms. | https://arxiv.org/abs/2009.01790v2 | https://arxiv.org/pdf/2009.01790v2.pdf | null | [
"Ghadir Ayache",
"Salim El Rouayheb"
] | [
"Privacy Preserving"
] | 1,599,091,200,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
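A minimal NumPy sketch of minibatch SGD on a least-squares toy problem, illustrating the gradient-from-a-minibatch idea in the description; the problem, batch size, and learning rate are illustrative.

```python
import numpy as np

def sgd(X, y, lr=0.1, batch_size=32, epochs=50, seed=0):
    """Minibatch SGD for least-squares regression: each step uses the
    gradient estimated on a random minibatch rather than the full dataset."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.standard_normal(256)
print(sgd(X, y))  # approximately recovers true_w
```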
] | 81,118 |
244,987 | https://paperswithcode.com/paper/data-augmentation-approaches-in-natural | 2110.01852 | Data Augmentation Approaches in Natural Language Processing: A Survey | As an effective strategy, data augmentation (DA) alleviates data scarcity scenarios where deep learning techniques may fail. It is widely applied in computer vision then introduced to natural language processing and achieves improvements in many tasks. One of the main focuses of the DA methods is to improve the diversity of training data, thereby helping the model to better generalize to unseen testing data. In this survey, we frame DA methods into three categories based on the diversity of augmented data, including paraphrasing, noising, and sampling. Our paper sets out to analyze DA methods in detail according to the above categories. Further, we also introduce their applications in NLP tasks as well as the challenges. Some helpful resources are provided in the appendix. | https://arxiv.org/abs/2110.01852v3 | https://arxiv.org/pdf/2110.01852v3.pdf | null | [
"Bohan Li",
"Yutai Hou",
"Wanxiang Che"
] | [
"Data Augmentation"
] | 1,633,392,000,000 | [] | 82,498 |
307,261 | https://paperswithcode.com/paper/a-single-self-supervised-model-for-many | 2207.07036 | A Single Self-Supervised Model for Many Speech Modalities Enables Zero-Shot Modality Transfer | While audio-visual speech models can yield superior performance and robustness compared to audio-only models, their development and adoption are hindered by the lack of labeled and unlabeled audio-visual data and the cost to deploy one model per modality. In this paper, we present u-HuBERT, a self-supervised pre-training framework that can leverage both multimodal and unimodal speech with a unified masked cluster prediction objective. By utilizing modality dropout during pre-training, we demonstrate that a single fine-tuned model can achieve performance on par or better than the state-of-the-art modality-specific models. Moreover, our model fine-tuned only on audio can perform well with audio-visual and visual speech input, achieving zero-shot modality generalization for speech recognition and speaker verification. In particular, our single model yields 1.2%/1.4%/27.2% speech recognition word error rate on LRS3 with audio-visual/audio/visual input. | https://arxiv.org/abs/2207.07036v1 | https://arxiv.org/pdf/2207.07036v1.pdf | null | [
"Wei-Ning Hsu",
"Bowen Shi"
] | [
"Speaker Verification",
"Speech Recognition",
"Speech Recognition"
] | 1,657,756,800,000 | [
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
}
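A minimal NumPy sketch of dropout at training time; note it uses the common "inverted dropout" convention (rescale by $1/(1-p)$ during training) rather than scaling weights by $p$ at test time as stated above — the two are equivalent in expectation.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero units with probability p during training and
    rescale the survivors by 1/(1-p), so no scaling is needed at test time."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

h = np.ones((2, 8))
print(dropout(h, p=0.5))                   # about half the units zeroed, rest scaled to 2.0
print(dropout(h, p=0.5, training=False))   # identity at test time
```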
] | 132,951 |
12,597 | https://paperswithcode.com/paper/combining-weakly-and-webly-supervised | 1712.08730 | Combining Weakly and Webly Supervised Learning for Classifying Food Images | Food classification from images is a fine-grained classification problem.
Manual curation of food images is prohibitive in terms of cost, time and scalability. On
the other hand, web data is available freely but contains noise. In this paper,
we address the problem of classifying food images with minimal data curation.
We also tackle a key problem with food images from the web: they often
contain multiple co-occurring food types but are weakly labeled with a single label.
We first demonstrate that by sequentially adding a few manually curated samples
to a larger uncurated dataset from two web sources, the top-1 classification
accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we
augment the deep model with Weakly Supervised learning (WSL) that results in an
increase in performance to 76.2%. Finally, we show some qualitative results to
provide insights into the performance improvements using the proposed ideas. | http://arxiv.org/abs/1712.08730v1 | http://arxiv.org/pdf/1712.08730v1.pdf | null | [
"Parneet Kaur",
"Karan Sikka",
"Ajay Divakaran"
] | [
"Classification",
"Classification"
] | 1,513,987,200,000 | [] | 166,332 |
260,546 | https://paperswithcode.com/paper/tweetsumm-a-dialog-summarization-dataset-for-1 | 2111.11894 | TWEETSUMM -- A Dialog Summarization Dataset for Customer Service | In a typical customer service chat scenario, customers contact a support center to ask for help or raise complaints, and human agents try to solve the issues. In most cases, at the end of the conversation, agents are asked to write a short summary emphasizing the problem and the proposed solution, usually for the benefit of other agents that may have to deal with the same customer or issue. The goal of the present article is advancing the automation of this task. We introduce the first large scale, high quality, customer care dialog summarization dataset with close to 6500 human annotated summaries. The data is based on real-world customer support dialogs and includes both extractive and abstractive summaries. We also introduce a new unsupervised, extractive summarization method specific to dialogs. | https://arxiv.org/abs/2111.11894v1 | https://arxiv.org/pdf/2111.11894v1.pdf | null | [
"Guy Feigenblat",
"Chulaka Gunasekara",
"Benjamin Sznajder",
"Sachindra Joshi",
"David Konopnicki",
"Ranit Aharonov"
] | [
"Extractive Summarization",
"Unsupervised Extractive Summarization"
] | 1,637,625,600,000 | [] | 126,737 |
244,195 | https://paperswithcode.com/paper/tree-constrained-graph-neural-networks-for | 2110.00124 | Tree-Constrained Graph Neural Networks For Argument Mining | We propose a novel architecture for Graph Neural Networks that is inspired by the idea behind Tree Kernels of measuring similarity between trees by taking into account their common substructures, named fragments. By imposing a series of regularization constraints to the learning problem, we exploit a pooling mechanism that incorporates such notion of fragments within the node soft assignment function that produces the embeddings. We present an extensive experimental evaluation on a collection of sentence classification tasks conducted on several argument mining corpora, showing that the proposed approach performs well with respect to state-of-the-art techniques. | https://arxiv.org/abs/2110.00124v1 | https://arxiv.org/pdf/2110.00124v1.pdf | null | [
"Federico Ruggeri",
"Marco Lippi",
"Paolo Torroni"
] | [
"Argument Mining",
"Sentence Classification"
] | 1,630,540,800,000 | [] | 148,489 |
158,481 | https://paperswithcode.com/paper/select-extract-and-generate-neural-keyphrase | 2008.01739 | Select, Extract and Generate: Neural Keyphrase Generation with Layer-wise Coverage Attention | Natural language processing techniques have demonstrated promising results in keyphrase generation. However, one of the major challenges in \emph{neural} keyphrase generation is processing long documents using deep neural networks. Generally, documents are truncated before given as inputs to neural networks. Consequently, the models may miss essential points conveyed in the target document. To overcome this limitation, we propose \emph{SEG-Net}, a neural keyphrase generation model that is composed of two major components, (1) a selector that selects the salient sentences in a document and (2) an extractor-generator that jointly extracts and generates keyphrases from the selected sentences. SEG-Net uses Transformer, a self-attentive architecture, as the basic building block with a novel \emph{layer-wise} coverage attention to summarize most of the points discussed in the document. The experimental results on seven keyphrase generation benchmarks from scientific and web documents demonstrate that SEG-Net outperforms the state-of-the-art neural generative methods by a large margin. | https://arxiv.org/abs/2008.01739v2 | https://arxiv.org/pdf/2008.01739v2.pdf | ACL 2021 5 | [
"Wasi Uddin Ahmad",
"Xiao Bai",
"Soomin Lee",
"Kai-Wei Chang"
] | [
"Keyphrase Extraction",
"Keyphrase Generation"
] | 1,596,499,200,000 | [
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
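A minimal NumPy sketch of the sinusoidal formulas above, assuming an even embedding dimension; the returned matrix is what gets added to the token embeddings.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal encodings: PE[pos, 2i] = sin(pos / 10000^(2i/d_model)),
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model)). Assumes d_model is even."""
    pos = np.arange(max_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, (2 * i) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(max_len=50, d_model=16)
print(pe.shape)  # (50, 16), summed with the token embeddings
```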
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
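A minimal NumPy sketch of applying the same two dense layers at every position; the ReLU between them is an assumption for illustration, since the description above does not fix the nonlinearity.

```python
import numpy as np

def position_wise_ffn(X, W1, b1, W2, b2):
    """Apply the same two dense layers independently at every position.
    X: (seq, d_model); W1: (d_model, d_ff); W2: (d_ff, d_model)."""
    hidden = np.maximum(0.0, X @ W1 + b1)   # ReLU between the two dense layers
    return hidden @ W2 + b2

rng = np.random.default_rng(0)
seq, d_model, d_ff = 6, 16, 64
X = rng.standard_normal((seq, d_model))
W1, b1 = rng.standard_normal((d_model, d_ff)) * 0.1, np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)) * 0.1, np.zeros(d_model)
print(position_wise_ffn(X, W1, b1, W2, b2).shape)  # (6, 16)
```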
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
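A minimal NumPy sketch that builds the smoothed targets described above, with $1-\epsilon$ on the true class and $\epsilon/(k-1)$ on the rest; `smooth_labels` is an illustrative name.

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Replace hard one-hot targets with 1-eps for the true class and
    eps/(k-1) spread over the remaining classes."""
    targets = np.full((len(y), num_classes), eps / (num_classes - 1))
    targets[np.arange(len(y)), y] = 1.0 - eps
    return targets

print(smooth_labels(np.array([0, 2]), num_classes=4, eps=0.1))
# each row sums to exactly 1, e.g. [0.9, 0.0333, 0.0333, 0.0333]
```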
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
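A minimal NumPy sketch of the $\mathcal{F}(x)+x$ formulation above, with a toy dense-ReLU layer standing in for the stacked nonlinear layers.

```python
import numpy as np

def residual_block(x, layer):
    """Return layer(x) + x, so the stacked layers only have to fit the
    residual F(x) = H(x) - x."""
    return layer(x) + x

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1
dense_relu = lambda x: np.maximum(0.0, x @ W)
x = rng.standard_normal((4, 8))
print(residual_block(x, dense_relu).shape)  # (4, 8) -- identity path preserved
```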
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
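A minimal NumPy sketch of the softmax transformation, with the usual max-subtraction for numerical stability (an implementation detail not stated in the description).

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax: subtract the max before exponentiating."""
    shifted = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
print(probs, probs.sum())  # probabilities summing to 1
```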
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
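A minimal NumPy sketch of the statistics above computed over the hidden units of each example; the learnable gain `gamma` and bias `beta` are the commonly added parameters and are assumptions here, not part of the quoted equations.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each example over its hidden units (the last axis),
    then apply the learned scale gamma and shift beta."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return gamma * (x - mu) / (sigma + eps) + beta

H = 8
x = np.random.default_rng(0).standard_normal((4, H))
out = layer_norm(x, gamma=np.ones(H), beta=np.zeros(H))
print(out.mean(axis=-1).round(6), out.std(axis=-1).round(3))  # ~0 mean, ~1 std per row
```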
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
}
] | 62,016 |
173,006 | https://paperswithcode.com/paper/optimize-what-matters-training-dnn-hmm | 2011.01151 | Optimize what matters: Training DNN-HMM Keyword Spotting Model Using End Metric | Deep Neural Network--Hidden Markov Model (DNN-HMM) based methods have been successfully used for many always-on keyword spotting algorithms that detect a wake word to trigger a device. The DNN predicts the state probabilities of a given speech frame, while HMM decoder combines the DNN predictions of multiple speech frames to compute the keyword detection score. The DNN, in prior methods, is trained independent of the HMM parameters to minimize the cross-entropy loss between the predicted and the ground-truth state probabilities. The mis-match between the DNN training loss (cross-entropy) and the end metric (detection score) is the main source of sub-optimal performance for the keyword spotting task. We address this loss-metric mismatch with a novel end-to-end training strategy that learns the DNN parameters by optimizing for the detection score. To this end, we make the HMM decoder (dynamic programming) differentiable and back-propagate through it to maximize the score for the keyword and minimize the scores for non-keyword speech segments. Our method does not require any change in the model architecture or the inference framework; therefore, there is no overhead in run-time memory or compute requirements. Moreover, we show significant reduction in false rejection rate (FRR) at the same false trigger experience (> 70% over independent DNN training). | https://arxiv.org/abs/2011.01151v2 | https://arxiv.org/pdf/2011.01151v2.pdf | null | [
"Ashish Shrivastava",
"Arnav Kundu",
"Chandra Dhir",
"Devang Naik",
"Oncel Tuzel"
] | [
"Keyword Spotting"
] | 1,604,275,200,000 | [] | 71,067 |
300,790 | https://paperswithcode.com/paper/superresolution-and-segmentation-of-oct-scans | 2206.05277 | Superresolution and Segmentation of OCT scans using Multi-Stage adversarial Guided Attention Training | Optical coherence tomography (OCT) is one of the non-invasive and easy-to-acquire biomarkers (the thickness of the retinal layers, which is detectable within OCT scans) being investigated to diagnose Alzheimer's disease (AD). This work aims to segment the OCT images automatically; however, it is a challenging task due to various issues such as the speckle noise, small target region, and unfavorable imaging conditions. In our previous work, we have proposed the multi-stage & multi-discriminatory generative adversarial network (MultiSDGAN) to translate OCT scans in high-resolution segmentation labels. In this investigation, we aim to evaluate and compare various combinations of channel and spatial attention to the MultiSDGAN architecture to extract more powerful feature maps by capturing rich contextual relationships to improve segmentation performance. Moreover, we developed and evaluated a guided mutli-stage attention framework where we incorporated a guided attention mechanism by forcing an L-1 loss between a specifically designed binary mask and the generated attention maps. Our ablation study results on the WVU-OCT data-set in five-fold cross-validation (5-CV) suggest that the proposed MultiSDGAN with a serial attention module provides the most competitive performance, and guiding the spatial attention feature maps by binary masks further improves the performance in our proposed network. Comparing the baseline model with adding the guided-attention, our results demonstrated relative improvements of 21.44% and 19.45% on the Dice coefficient and SSIM, respectively. | https://arxiv.org/abs/2206.05277v1 | https://arxiv.org/pdf/2206.05277v1.pdf | null | [
"Paria Jeihouni",
"Omid Dehzangi",
"Annahita Amireskandari",
"Ali Dabouei",
"Ali Rezai",
"Nasser M. Nasrabadi"
] | [
"SSIM"
] | 1,654,819,200,000 | [] | 128,457 |
316,794 | https://paperswithcode.com/paper/integrating-form-and-meaning-a-multi-task | 2209.06633 | Integrating Form and Meaning: A Multi-Task Learning Model for Acoustic Word Embeddings | Models of acoustic word embeddings (AWEs) learn to map variable-length spoken word segments onto fixed-dimensionality vector representations such that different acoustic exemplars of the same word are projected nearby in the embedding space. In addition to their speech technology applications, AWE models have been shown to predict human performance on a variety of auditory lexical processing tasks. Current AWE models are based on neural networks and trained in a bottom-up approach that integrates acoustic cues to build up a word representation given an acoustic or symbolic supervision signal. Therefore, these models do not leverage or capture high-level lexical knowledge during the learning process. In this paper, we propose a multi-task learning model that incorporates top-down lexical knowledge into the training procedure of AWEs. Our model learns a mapping between the acoustic input and a lexical representation that encodes high-level information such as word semantics in addition to bottom-up form-based supervision. We experiment with three languages and demonstrate that incorporating lexical knowledge improves the embedding space discriminability and encourages the model to better separate lexical categories. | https://arxiv.org/abs/2209.06633v2 | https://arxiv.org/pdf/2209.06633v2.pdf | null | [
"Badr M. Abdullah",
"Bernd Möbius",
"Dietrich Klakow"
] | [
"Multi-Task Learning",
"Word Embeddings"
] | 1,663,113,600,000 | [] | 154,559 |
146,260 | https://paperswithcode.com/paper/automated-discovery-of-mathematical | null | Automated Discovery of Mathematical Definitions in Text | Automatic definition extraction from texts is an important task that has numerous applications in several natural language processing fields such as summarization, analysis of scientific texts, automatic taxonomy generation, ontology generation, concept identification, and question answering. For definitions that are contained within a single sentence, this problem can be viewed as a binary classification of sentences into definitions and non-definitions. Definitions in scientific literature can be generic (Wikipedia) or more formal (mathematical articles). In this paper, we focus on automatic detection of one-sentence definitions in mathematical texts, which are difficult to separate from surrounding text. We experiment with several data representations, which include sentence syntactic structure and word embeddings, and apply deep learning methods such as convolutional neural network (CNN) and recurrent neural network (RNN), in order to identify mathematical definitions. Our experiments demonstrate the superiority of CNN and its combination with RNN, applied on the syntactically-enriched input representation. We also present a new dataset for definition extraction from mathematical texts. We demonstrate that the use of this dataset for training learning models improves the quality of definition extraction when these models are then used for other definition datasets. Our experiments with different domains approve that mathematical definitions require special treatment, and that using cross-domain learning is inefficient. | https://aclanthology.org/2020.lrec-1.256 | https://aclanthology.org/2020.lrec-1.256.pdf | LREC 2020 5 | [
"Natalia Vanetik",
"Marina Litvak",
"Sergey Shevchuk",
"Lior Reznik"
] | [
"Definition Extraction",
"Question Answering",
"Word Embeddings"
] | 1,588,291,200,000 | [] | 5,914 |
283,740 | https://paperswithcode.com/paper/latency-optimization-for-blockchain-empowered | 2203.09670 | Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing | In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing. In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously. To assist the ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to one of the associated ESs. We then propose a new decentralized ML model aggregation solution at the edge layer based on a consensus mechanism to build a global ML model via peer-to-peer (P2P)-based blockchain communications. Blockchain builds trust among MDs and ESs to facilitate reliable ML model sharing and cooperative consensus formation, and enables rapid elimination of manipulated models caused by poisoning attacks. We formulate latency-aware BFL as an optimization aiming to minimize the system latency via joint consideration of the data offloading decisions, MDs' transmit power, channel bandwidth allocation for MDs' data offloading, MDs' computational allocation, and hash power allocation. Given the mixed action space of discrete offloading and continuous allocation variables, we propose a novel deep reinforcement learning scheme with a parameterized advantage actor critic algorithm. We theoretically characterize the convergence properties of BFL in terms of the aggregation delay, mini-batch size, and number of P2P communication rounds. Our numerical evaluation demonstrates the superiority of our proposed scheme over baselines in terms of model training efficiency, convergence rate, system latency, and robustness against model poisoning attacks. | https://arxiv.org/abs/2203.09670v2 | https://arxiv.org/pdf/2203.09670v2.pdf | null | [
"Dinh C. Nguyen",
"Seyyedali Hosseinalipour",
"David J. Love",
"Pubudu N. Pathirana",
"Christopher G. Brinton"
] | [
"Edge-computing",
"Federated Learning",
"Model Poisoning"
] | 1,647,561,600,000 | [] | 18,907 |
69,253 | https://paperswithcode.com/paper/generating-questions-and-multiple-choice | null | Generating Questions and Multiple-Choice Answers using Semantic Analysis of Texts | We present a novel approach to automated question generation that improves upon prior work both from a technology perspective and from an assessment perspective. Our system is aimed at engaging language learners by generating multiple-choice questions which utilize specific inference steps over multiple sentences, namely coreference resolution and paraphrase detection. The system also generates correct answers and semantically-motivated phrase-level distractors as answer choices. Evaluation by human annotators indicates that our approach requires a larger number of inference steps, which necessitate deeper semantic understanding of texts than a traditional single-sentence approach. | https://aclanthology.org/C16-1107 | https://aclanthology.org/C16-1107.pdf | COLING 2016 12 | [
"Jun Araki",
"Dheeraj Rajagopal",
"Sreecharan Sankaranarayanan",
"Susan Holm",
"Yukari Yamakawa",
"Teruko Mitamura"
] | [
"Coreference Resolution",
"Multiple-choice",
"Question Generation",
"Reading Comprehension"
] | 1,480,550,400,000 | [] | 26,834 |
246,598 | https://paperswithcode.com/paper/dont-throw-away-that-linear-head-few-shot | null | Don’t throw away that linear head: Few-shot protein fitness prediction with generative models | Predicting the fitness, i.e. functional value, of a protein sequence is an important and challenging task in biology, particularly due to the scarcity of assay-labeled data. Traditional approaches utilize transfer learning from evolutionary data, yet discard useful information from a generative model’s learned probability distribution. We propose generative fitness fine-tuning, termed gf-tuning, to utilize the generative model’s log probabilities as logits for a pairwise ranking loss---allowing for the full distribution learned in unsupervised training to be repurposed for fine-tuning on assay-labeled fitness data. We demonstrate that gf-tuning achieves better performance than existing baselines across a variety of few-shot fitness prediction settings, including both low homology and highly epistatic systems as well as generalizing from single to multiple mutations. Generative fitness finetuning offers an effective strategy for few-shot fitness prediction which could enable advances to better understand and engineer proteins. | https://openreview.net/forum?id=hHmtmT58pSL | https://openreview.net/pdf?id=hHmtmT58pSL | null | [
"Ben Krause",
"Nikhil Naik",
"Wenhao Liu",
"Ali Madani"
] | [
"Transfer Learning"
] | 1,632,873,600,000 | [] | 113,229 |
95,616 | https://paperswithcode.com/paper/emotion-action-detection-and-emotion | 1903.06901 | Emotion Action Detection and Emotion Inference: the Task and Dataset | Many Natural Language Processing works on emotion analysis only focus on
simple emotion classification without exploring the potentials of putting
emotion into "event context", and ignore the analysis of emotion-related
events. One main reason is the lack of this kind of corpus. Here we present
Cause-Emotion-Action Corpus, which manually annotates not only emotion, but
also cause events and action events. We propose two new tasks based on the
data-set: emotion causality and emotion inference. The first task is to extract
a triple (cause, emotion, action). The second task is to infer the probable
emotion. We are currently releasing the data-set with 10,603 samples and 15,892
events, basic statistical analysis, and baselines on both the emotion causality and
emotion inference tasks. Baseline performance demonstrates that there is much
room for both tasks to be improved. | http://arxiv.org/abs/1903.06901v1 | http://arxiv.org/pdf/1903.06901v1.pdf | null | [
"Pengyuan Liu",
"Chengyu Du",
"Shuofeng Zhao",
"Chenghao Zhu"
] | [
"Action Detection",
"Emotion Classification",
"Emotion Recognition",
"Classification"
] | 1,552,694,400,000 | [] | 177,734 |
74,675 | https://paperswithcode.com/paper/engan-latent-space-mcmc-and-maximum-entropy | null | EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models | Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones. Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution. Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection.
This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples. For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples. These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function. To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information. We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores.
| https://openreview.net/forum?id=HJlmhs05tm | https://openreview.net/pdf?id=HJlmhs05tm | ICLR 2019 5 | [
"Rithesh Kumar",
"Anirudh Goyal",
"Aaron Courville",
"Yoshua Bengio"
] | [
"Anomaly Detection"
] | 1,556,668,800,000 | [] | 36,332 |
59,745 | https://paperswithcode.com/paper/emergence-of-addictive-behaviors-in | 1811.05590 | Emergence of Addictive Behaviors in Reinforcement Learning Agents | This paper presents a novel approach to the technical analysis of wireheading
in intelligent agents. Inspired by the natural analogues of wireheading and
their prevalent manifestations, we propose the modeling of such a phenomenon in
Reinforcement Learning (RL) agents as psychological disorders. In a preliminary
step towards evaluating this proposal, we study the feasibility and dynamics of
emergent addictive policies in Q-learning agents in the tractable environment
of the game of Snake. We consider a slightly modified setting for this game,
in which the environment provides a "drug" seed alongside the original
"healthy" seed for the consumption of the snake. We adopt and extend an
RL-based model of natural addiction to Q-learning agents in this setting, and
derive sufficient parametric conditions for the emergence of addictive
behaviors in such agents. Furthermore, we evaluate our theoretical analysis
with three sets of simulation-based experiments. The results demonstrate the
feasibility of addictive wireheading in RL agents, and provide promising venues
of further research on the psychopathological modeling of complex AI safety
problems. | http://arxiv.org/abs/1811.05590v1 | http://arxiv.org/pdf/1811.05590v1.pdf | null | [
"Vahid Behzadan",
"Roman V. Yampolskiy",
"Arslan Munir"
] | [
"Q-Learning",
"reinforcement-learning"
] | 1,542,153,600,000 | [
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
}
] | 16,685 |
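The method entry for this record quotes the tabular Q-learning update rule. A minimal sketch of that update is given below; the environment interface (`reset`, `step`, `actions`) and all hyperparameter values are illustrative assumptions and are not taken from the paper or the dataset.

```python
# Minimal tabular Q-learning sketch matching the update rule quoted in the
# method entry above. The environment interface and hyperparameters are
# illustrative assumptions.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    def greedy(state):
        return max(env.actions, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy behaviour policy; the learning target is greedy (off-policy)
            action = random.choice(env.actions) if random.random() < epsilon else greedy(state)
            next_state, reward, done = env.step(action)
            # Q(S,A) <- Q(S,A) + alpha * [R + gamma * max_a Q(S', a) - Q(S,A)]
            target = reward + (0.0 if done else gamma * max(Q[(next_state, a)] for a in env.actions))
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```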
14,110 | https://paperswithcode.com/paper/sparse-inverse-covariance-estimation-for | 1711.09131 | Sparse Inverse Covariance Estimation for Chordal Structures | In this paper, we consider the Graphical Lasso (GL), a popular optimization
problem for learning the sparse representations of high-dimensional datasets,
which is well-known to be computationally expensive for large-scale problems.
Recently, we have shown that the sparsity pattern of the optimal solution of GL
is equivalent to the one obtained from simply thresholding the sample
covariance matrix, for sparse graphs under different conditions. We have also
derived a closed-form solution that is optimal when the thresholded sample
covariance matrix has an acyclic structure. As a major generalization of the
previous result, in this paper we derive a closed-form solution for the GL for
graphs with chordal structures. We show that the GL and thresholding
equivalence conditions can significantly be simplified and are expected to hold
for high-dimensional problems if the thresholded sample covariance matrix has a
chordal structure. We then show that the GL and thresholding equivalence is
enough to reduce the GL to a maximum determinant matrix completion problem and
derive a recursive closed-form solution for the GL when the thresholded sample
covariance matrix has a chordal structure. For large-scale problems with up to
450 million variables, the proposed method can solve the GL problem in less
than 2 minutes, while the state-of-the-art methods converge in more than 2
hours. | http://arxiv.org/abs/1711.09131v1 | http://arxiv.org/pdf/1711.09131v1.pdf | null | [
"Salar Fattahi",
"Richard Y. Zhang",
"Somayeh Sojoudi"
] | [
"Matrix Completion"
] | 1,511,481,600,000 | [] | 62,563 |
164,535 | https://paperswithcode.com/paper/sync-a-copula-based-framework-for-generating | 2009.09471 | SYNC: A Copula based Framework for Generating Synthetic Data from Aggregated Sources | A synthetic dataset is a data object that is generated programmatically, and it may be valuable to creating a single dataset from multiple sources when direct collection is difficult or costly. Although it is a fundamental step for many data science tasks, an efficient and standard framework is absent. In this paper, we study a specific synthetic data generation task called downscaling, a procedure to infer high-resolution, harder-to-collect information (e.g., individual level records) from many low-resolution, easy-to-collect sources, and propose a multi-stage framework called SYNC (Synthetic Data Generation via Gaussian Copula). For given low-resolution datasets, the central idea of SYNC is to fit Gaussian copula models to each of the low-resolution datasets in order to correctly capture dependencies and marginal distributions, and then sample from the fitted models to obtain the desired high-resolution subsets. Predictive models are then used to merge sampled subsets into one, and finally, sampled datasets are scaled according to low-resolution marginal constraints. We make four key contributions in this work: 1) propose a novel framework for generating individual level data from aggregated data sources by combining state-of-the-art machine learning and statistical techniques, 2) perform simulation studies to validate SYNC's performance as a synthetic data generation algorithm, 3) demonstrate its value as a feature engineering tool, as well as an alternative to data collection in situations where gathering is difficult through two real-world datasets, 4) release an easy-to-use framework implementation for reproducibility and scalability at the production level that easily incorporates new data. | https://arxiv.org/abs/2009.09471v1 | https://arxiv.org/pdf/2009.09471v1.pdf | null | [
"Zheng Li",
"Yue Zhao",
"Jialin Fu"
] | [
"Feature Engineering",
"Synthetic Data Generation"
] | 1,600,560,000,000 | [] | 30,470 |
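The SYNC abstract above describes fitting Gaussian copula models to low-resolution datasets and then sampling from them. The sketch below illustrates that copula step under simplifying assumptions (empirical marginals, a plain correlation estimate); it is not the authors' released implementation.

```python
# Illustrative Gaussian-copula fit-and-sample step: map observations to
# Gaussian scores via ranks, estimate a correlation matrix, sample from the
# fitted copula, and invert through the empirical marginals.
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    # X: (n_samples, n_features) array of low-resolution observations
    n, _ = X.shape
    U = (np.argsort(np.argsort(X, axis=0), axis=0) + 0.5) / n  # ranks -> (0, 1)
    Z = stats.norm.ppf(U)                                      # Gaussian scores
    return np.corrcoef(Z, rowvar=False)

def sample_gaussian_copula(X, corr, n_new):
    d = corr.shape[0]
    Z = np.random.multivariate_normal(np.zeros(d), corr, size=n_new)
    U = stats.norm.cdf(Z)
    # invert through the empirical marginals of the observed data
    return np.column_stack([np.quantile(X[:, j], U[:, j]) for j in range(d)])
```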
142,259 | https://paperswithcode.com/paper/a-session-based-song-recommendation-approach | 2004.13007 | A session-based song recommendation approach involving user characterization along the play power-law distribution | In recent years, streaming music platforms have become very popular mainly due to the huge number of songs these systems make available to users. This enormous availability means that recommendation mechanisms that help users to select the music they like need to be incorporated. However, developing reliable recommender systems in the music field involves dealing with many problems, some of which are generic and widely studied in the literature, while others are specific to this application domain and are therefore less well-known. This work is focused on two important issues that have not received much attention: managing gray-sheep users and obtaining implicit ratings. The first one is usually addressed by resorting to content information that is often difficult to obtain. The other drawback is related to the sparsity problem that arises when there are obstacles to gather explicit ratings. In this work, the referred shortcomings are addressed by means of a recommendation approach based on the users' streaming sessions. The method is aimed at managing the well-known power-law probability distribution representing the listening behavior of users. This proposal improves the recommendation reliability of collaborative filtering methods while reducing the complexity of the procedures used so far to deal with the gray-sheep problem. | https://arxiv.org/abs/2004.13007v1 | https://arxiv.org/pdf/2004.13007v1.pdf | null | [
"Diego Sánchez-Moreno",
"Vivian F. López Batista",
"M. Dolores Muñoz Vicente",
"Ana B. Gil González",
"María N. Moreno-García"
] | [
"Collaborative Filtering",
"Recommendation Systems"
] | 1,587,772,800,000 | [] | 32,410 |
30,863 | https://paperswithcode.com/paper/delaunay-triangulation-on-skeleton-of-flowers | 1609.01828 | Delaunay Triangulation on Skeleton of Flowers for Classification | In this work, we propose a Triangle based approach to classify flower images.
Initially, flowers are segmented using whorl based region merging segmentation.
Skeleton of a flower is obtained from the segmented flower using a skeleton
pruning method. The Delaunay triangulation is obtained from the endpoints and
junction points detected on the skeleton. The length and angle features are
extracted from the obtained Delaunay triangles and then aggregated and
represented in the form of interval-valued data. A suitable classifier has
been explored for the purpose of classification. To corroborate the efficacy of
the proposed method, an experiment is conducted on our own data set of 30
classes of flowers, containing 3000 samples. | http://arxiv.org/abs/1609.01828v1 | http://arxiv.org/pdf/1609.01828v1.pdf | null | [
"Y H Sharath Kumar",
"N. Vinay Kumar",
"D. S. Guru"
] | [
"Classification",
"Classification"
] | 1,473,120,000,000 | [] | 138,353 |
289,684 | https://paperswithcode.com/paper/canonical-variate-dissimilarity-analysis-for | null | Canonical Variate Dissimilarity Analysis for Process Incipient Fault Detection | Early detection of incipient faults in industrial processes is increasingly becoming important, as these faults can slowly develop into serious abnormal events, an emergency situation, or even failure of critical equipment. Multivariate statistical process monitoring methods are currently established for abrupt fault detection. Among these, the canonical variate analysis (CVA) was proven to be effective for dynamic process monitoring. However, the traditional CVA indices may not be sensitive enough for incipient faults. In this work, an extension of CVA, called the canonical variate dissimilarity analysis (CVDA), is proposed for process incipient fault detection in nonlinear dynamic processes under varying operating conditions. To handle the non-Gaussian distributed data, the kernel density estimation was used for computing detection limits. A CVA dissimilarity based index has been demonstrated to outperform traditional CVA indices and other dissimilarity-based indices, namely the dissimilarity analysis, recursive dynamic transformed component statistical analysis, and generalized canonical correlation analysis, in terms of sensitivity when tested on slowly developing multiplicative and additive faults in a continuous stirred-tank reactor under closed-loop control and varying operating conditions. | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8304817 | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8304817 | DV 2016 4 | [
"Karl Ezra Salgado Pilarioand Yi Cao"
] | [
"Fault Detection"
] | 1,460,419,200,000 | [] | 121,218 |
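The CVDA abstract above mentions kernel density estimation for computing detection limits on non-Gaussian monitoring statistics. Below is a small sketch of that idea: fit a KDE to the statistic under normal operation and read off the value at the desired confidence level. The chi-square placeholder data and the 99% level are assumptions for illustration.

```python
# KDE-based detection limit: numerically invert the estimated CDF of a
# monitoring statistic computed under normal operating conditions.
import numpy as np
from scipy.stats import gaussian_kde

def kde_detection_limit(stat_values, confidence=0.99, grid_size=2000):
    kde = gaussian_kde(stat_values)
    grid = np.linspace(stat_values.min(), stat_values.max() * 1.5, grid_size)
    pdf = kde(grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                          # normalize to a proper CDF on the grid
    return grid[np.searchsorted(cdf, confidence)]

# placeholder statistic values standing in for normal-operation data
limit = kde_detection_limit(np.random.chisquare(df=5, size=1000))
```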
228,544 | https://paperswithcode.com/paper/cloud-based-scalable-object-recognition-from | 2106.15329 | Cloud based Scalable Object Recognition from Video Streams using Orientation Fusion and Convolutional Neural Networks | Object recognition from live video streams comes with numerous challenges such as the variation in illumination conditions and poses. Convolutional neural networks (CNNs) have been widely used to perform intelligent visual object recognition. Yet, CNNs still suffer from severe accuracy degradation, particularly on illumination-variant datasets. To address this problem, we propose a new CNN method based on orientation fusion for visual object recognition. The proposed cloud-based video analytics system pioneers the use of bi-dimensional empirical mode decomposition to split a video frame into intrinsic mode functions (IMFs). We further propose these IMFs to endure Reisz transform to produce monogenic object components, which are in turn used for the training of CNNs. Past works have demonstrated how the object orientation component may be used to pursue accuracy levels as high as 93\%. Herein we demonstrate how a feature-fusion strategy of the orientation components leads to further improving visual recognition accuracy to 97\%. We also assess the scalability of our method, looking at both the number and the size of the video streams under scrutiny. We carry out extensive experimentation on the publicly available Yale dataset, including also a self generated video datasets, finding significant improvements (both in accuracy and scale), in comparison to AlexNet, LeNet and SE-ResNeXt, which are the three most commonly used deep learning models for visual object recognition and classification. | https://arxiv.org/abs/2106.15329v1 | https://arxiv.org/pdf/2106.15329v1.pdf | null | [
"Muhammad Usman Yaseen",
"Ashiq Anjum",
"Giancarlo Fortino",
"Antonio Liotta",
"Amir Hussain"
] | [
"Object Recognition"
] | 1,624,060,800,000 | [] | 75,506 |
216,401 | https://paperswithcode.com/paper/domain-adaptive-semantic-segmentation-with | 2104.13613 | Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation | Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain. Leveraging the supervision from auxiliary tasks~(such as depth estimation) has the potential to heal this shift because many visual tasks are closely related to each other. However, such a supervision is not always available. In this work, we leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap. On the one hand, we propose to explicitly learn the task feature correlation to strengthen the target semantic predictions with the help of target depth estimation. On the other hand, we use the depth prediction discrepancy from source and target depth decoders to approximate the pixel-wise adaptation difficulty. The adaptation difficulty, inferred from depth, is then used to refine the target semantic segmentation pseudo-labels. The proposed method can be easily implemented into existing segmentation frameworks. We demonstrate the effectiveness of our approach on the benchmark tasks SYNTHIA-to-Cityscapes and GTA-to-Cityscapes, on which we achieve the new state-of-the-art performance of $55.0\%$ and $56.6\%$, respectively. Our code is available at \url{https://qin.ee/corda}. | https://arxiv.org/abs/2104.13613v2 | https://arxiv.org/pdf/2104.13613v2.pdf | ICCV 2021 10 | [
"Qin Wang",
"Dengxin Dai",
"Lukas Hoyer",
"Luc van Gool",
"Olga Fink"
] | [
"Depth Estimation",
"Depth Prediction",
"Domain Adaptation",
"Semantic Segmentation",
"Synthetic-to-Real Translation"
] | 1,619,568,000,000 | [] | 84,371 |
275,383 | https://paperswithcode.com/paper/dynamic-virtual-network-embedding-algorithm | 2202.02140 | Dynamic Virtual Network Embedding Algorithm based on Graph Convolution Neural Network and Reinforcement Learning | Network virtualization (NV) is a technology with broad application prospects. Virtual network embedding (VNE) is the core orientation of VN, which aims to provide more flexible underlying physical resource allocation for user function requests. The classical VNE problem is usually solved by heuristic method, but this method often limits the flexibility of the algorithm and ignores the time limit. In addition, the partition autonomy of physical domain and the dynamic characteristics of virtual network request (VNR) also increase the difficulty of VNE. This paper proposed a new type of VNE algorithm, which applied reinforcement learning (RL) and graph neural network (GNN) theory to the algorithm, especially the combination of graph convolutional neural network (GCNN) and RL algorithm. Based on a self-defined fitness matrix and fitness value, we set up the objective function of the algorithm implementation, realized an efficient dynamic VNE algorithm, and effectively reduced the degree of resource fragmentation. Finally, we used comparison algorithms to evaluate the proposed method. Simulation experiments verified that the dynamic VNE algorithm based on RL and GCNN has good basic VNE characteristics. By changing the resource attributes of physical network and virtual network, it can be proved that the algorithm has good flexibility. | https://arxiv.org/abs/2202.02140v1 | https://arxiv.org/pdf/2202.02140v1.pdf | null | [
"Peiying Zhang",
"Chao Wang",
"Neeraj Kumar",
"Weishan Zhang",
"Lei Liu"
] | [
"Network Embedding"
] | 1,643,846,400,000 | [] | 96,541 |
97,024 | https://paperswithcode.com/paper/t-net-parametrizing-fully-convolutional-nets | 1904.02698 | T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor | Recent findings indicate that over-parametrization, while crucial for
successfully training deep neural networks, also introduces large amounts of
redundancy. Tensor methods have the potential to efficiently parametrize
over-complete representations by leveraging this redundancy. In this paper, we
propose to fully parametrize Convolutional Neural Networks (CNNs) with a single
high-order, low-rank tensor. Previous works on network tensorization have
focused on parametrizing individual layers (convolutional or fully connected)
only, and perform the tensorization layer-by-layer separately. In contrast, we
propose to jointly capture the full structure of a neural network by
parametrizing it with a single high-order tensor, the modes of which represent
each of the architectural design parameters of the network (e.g. number of
convolutional blocks, depth, number of stacks, input features, etc). This
parametrization allows us to regularize the whole network and drastically reduce
the number of parameters. Our model is end-to-end trainable and the low-rank
structure imposed on the weight tensor acts as an implicit regularization. We
study the case of networks with rich structure, namely Fully Convolutional
Networks (FCNs), which we propose to parametrize with a single 8th-order
tensor. We show that our approach can achieve superior performance with small
compression rates, and attain high compression rates with negligible drop in
accuracy for the challenging task of human pose estimation. | http://arxiv.org/abs/1904.02698v1 | http://arxiv.org/pdf/1904.02698v1.pdf | CVPR 2019 6 | [
"Jean Kossaifi",
"Adrian Bulat",
"Georgios Tzimiropoulos",
"Maja Pantic"
] | [
"Pose Estimation"
] | 1,554,336,000,000 | [] | 4,351 |
271,529 | https://paperswithcode.com/paper/stratified-graph-spectra | 2201.03696 | Stratified Graph Spectra | In classic graph signal processing, given a real-valued graph signal, its graph Fourier transform is typically defined as the series of inner products between the signal and each eigenvector of the graph Laplacian. Unfortunately, this definition is not mathematically valid in the cases of vector-valued graph signals which however are typical operands in the state-of-the-art graph learning modeling and analyses. Seeking a generalized transformation decoding the magnitudes of eigencomponents from vector-valued signals is thus the main objective of this paper. Several attempts are explored, and also it is found that performing the transformation at hierarchical levels of adjacency help profile the spectral characteristics of signals more insightfully. The proposed methods are introduced as a new tool assisting on diagnosing and profiling behaviors of graph learning models. | https://arxiv.org/abs/2201.03696v1 | https://arxiv.org/pdf/2201.03696v1.pdf | null | [
"Fanchao Meng",
"Mark Orr",
"Samarth Swarup"
] | [
"Graph Learning"
] | 1,641,772,800,000 | [] | 168,071 |
3,873 | https://paperswithcode.com/paper/learning-dual-convolutional-neural-networks | 1805.05020 | Learning Dual Convolutional Neural Networks for Low-Level Vision | In this paper, we propose a general dual convolutional neural network
(DualCNN) for low-level vision problems, e.g., super-resolution,
edge-preserving filtering, deraining and dehazing. These problems usually
involve the estimation of two components of the target signals: structures and
details. Motivated by this, our proposed DualCNN consists of two parallel
branches, which respectively recover the structures and details in an
end-to-end manner. The recovered structures and details can generate the target
signals according to the formation model for each particular application. The
DualCNN is a flexible framework for low-level vision tasks and can be easily
incorporated into existing CNNs. Experimental results show that the DualCNN can
be effectively applied to numerous low-level vision tasks with favorable
performance against the state-of-the-art methods. | http://arxiv.org/abs/1805.05020v1 | http://arxiv.org/pdf/1805.05020v1.pdf | CVPR 2018 6 | [
"Jinshan Pan",
"Sifei Liu",
"Deqing Sun",
"Jiawei Zhang",
"Yang Liu",
"Jimmy Ren",
"Zechao Li",
"Jinhui Tang",
"Huchuan Lu",
"Yu-Wing Tai",
"Ming-Hsuan Yang"
] | [
"Rain Removal",
"Super-Resolution"
] | 1,526,256,000,000 | [] | 165,251 |
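The DualCNN abstract above describes two parallel branches that recover structures and details and are then combined. A schematic PyTorch sketch of such a two-branch layout follows; the layer counts, widths, and additive combination are assumptions for illustration, not the authors' exact architecture.

```python
# Schematic two-branch network: a shallow branch for structures, a deeper
# branch for details, combined additively into the target signal.
import torch
import torch.nn as nn

class DualCNN(nn.Module):
    def __init__(self, channels=3, width=64):
        super().__init__()

        def branch(depth):
            layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
            for _ in range(depth):
                layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
            layers += [nn.Conv2d(width, channels, 3, padding=1)]
            return nn.Sequential(*layers)

        self.structure = branch(depth=2)   # shallow branch recovers structures
        self.detail = branch(depth=6)      # deeper branch recovers details

    def forward(self, x):
        return self.structure(x) + self.detail(x)

out = DualCNN()(torch.randn(1, 3, 64, 64))  # (1, 3, 64, 64)
```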
214,556 | https://paperswithcode.com/paper/data-generating-process-to-evaluate-causal | 2104.08043 | Data Generating Process to Evaluate Causal Discovery Techniques for Time Series Data | Going beyond correlations, the understanding and identification of causal relationships in observational time series, an important subfield of Causal Discovery, poses a major challenge. The lack of access to a well-defined ground truth for real-world data creates the need to rely on synthetic data for the evaluation of these methods. Existing benchmarks are limited in their scope, as they either are restricted to a "static" selection of data sets, or do not allow for a granular assessment of the methods' performance when commonly made assumptions are violated. We propose a flexible and simple to use framework for generating time series data, which is aimed at developing, evaluating, and benchmarking time series causal discovery methods. In particular, the framework can be used to fine tune novel methods on vast amounts of data, without "overfitting" them to a benchmark, but rather so they perform well in real-world use cases. Using our framework, we evaluate prominent time series causal discovery methods and demonstrate a notable degradation in performance when their assumptions are invalidated and their sensitivity to choice of hyperparameters. Finally, we propose future research directions and how our framework can support both researchers and practitioners. | https://arxiv.org/abs/2104.08043v1 | https://arxiv.org/pdf/2104.08043v1.pdf | null | [
"Andrew R. Lawrence",
"Marcus Kaiser",
"Rui Sampaio",
"Maksim Sipos"
] | [
"Causal Discovery",
"Time Series"
] | 1,618,531,200,000 | [] | 171,544 |
172,538 | https://paperswithcode.com/paper/embedding-meta-textual-information-for | 2010.16313 | Embedding Meta-Textual Information for Improved Learning to Rank | Neural approaches to learning term embeddings have led to improved computation of similarity and ranking in information retrieval (IR). So far neural representation learning has not been extended to meta-textual information that is readily available for many IR tasks, for example, patent classes in prior-art retrieval, topical information in Wikipedia articles, or product categories in e-commerce data. We present a framework that learns embeddings for meta-textual categories, and optimizes a pairwise ranking objective for improved matching based on combined embeddings of textual and meta-textual information. We show considerable gains in an experimental evaluation on cross-lingual retrieval in the Wikipedia domain for three language pairs, and in the Patent domain for one language pair. Our results emphasize that the mode of combining different types of information is crucial for model improvement. | https://arxiv.org/abs/2010.16313v1 | https://arxiv.org/pdf/2010.16313v1.pdf | COLING 2020 8 | [
"Toshitaka Kuwa",
"Shigehiko Schamoni",
"Stefan Riezler"
] | [
"Information Retrieval",
"Learning-To-Rank",
"Representation Learning"
] | 1,604,016,000,000 | [] | 41,975 |
19,660 | https://paperswithcode.com/paper/learning-to-teach-reinforcement-learning | 1707.09079 | Learning to Teach Reinforcement Learning Agents | In this article we study the transfer learning model of action advice under a
budget. We focus on reinforcement learning teachers providing action advice to
heterogeneous students playing the game of Pac-Man under a limited advice
budget. First, we examine several critical factors affecting advice quality in
this setting, such as the average performance of the teacher, its variance and
the importance of reward discounting in advising. The experiments show the
non-trivial importance of the coefficient of variation (CV) as a statistic for
choosing policies that generate advice. The CV statistic relates variance to
the corresponding mean. Second, the article studies policy learning for
distributing advice under a budget. Whereas most methods in the relevant
literature rely on heuristics for advice distribution, we formulate the problem
as a learning one and propose a novel RL algorithm capable of learning when to
advise, adapting to the student and the task at hand. Furthermore, we argue
that learning to advise under a budget is an instance of a more generic
learning problem: Constrained Exploitation Reinforcement Learning. | http://arxiv.org/abs/1707.09079v1 | http://arxiv.org/pdf/1707.09079v1.pdf | null | [
"Anestis Fachantidis",
"Matthew E. Taylor",
"Ioannis Vlahavas"
] | [
"reinforcement-learning",
"Transfer Learning"
] | 1,501,200,000,000 | [] | 121,366 |
285,972 | https://paperswithcode.com/paper/explainable-artificial-intelligence-in | 2203.16073 | Explainable Predictive Process Monitoring: Evaluation Metrics and Guidelines for Process Outcome Prediction | Although a recent shift has been made in the field of predictive process monitoring to use models from the explainable artificial intelligence field, the evaluation still occurs mainly through performance-based metrics, thus not accounting for the actionability and implications of the explanations. In this paper, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction. The introduced properties are analysed along the event, case, and control flow perspective which are typical for a process-based analysis. This allows comparing inherently created explanations with post-hoc explanations. We benchmark seven classifiers on thirteen real-life events logs, and these cover a range of transparent and non-transparent machine learning and deep learning models, further complemented with explainability techniques. Next, this paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications, by providing insight into how the varying preprocessing, model complexity and explainability techniques typical in process outcome prediction influence the explainability of the model. | https://arxiv.org/abs/2203.16073v3 | https://arxiv.org/pdf/2203.16073v3.pdf | null | [
"Alexander Stevens",
"Johannes De Smedt"
] | [
"Explainable artificial intelligence",
"Predictive Process Monitoring"
] | 1,648,598,400,000 | [
{
"code_snippet_url": null,
"description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)",
"full_name": "Logistic Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Logistic Regression",
"source_title": null,
"source_url": null
}
] | 60,376 |
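The method entry attached to this record describes logistic regression as a transparent baseline classifier. A minimal NumPy sketch of fitting such a model by gradient descent on synthetic data follows; the data, learning rate, and epoch count are illustrative assumptions.

```python
# Logistic regression fit by gradient descent on the binary cross-entropy loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, lr=0.1, epochs=1000):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted probabilities
        grad_w = X.T @ (p - y) / n      # gradient of the log-loss w.r.t. weights
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy usage on linearly separable synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = fit_logistic_regression(X, y)
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```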
104,339 | https://paperswithcode.com/paper/abaruah-at-semeval-2019-task-5-bi-directional | null | ABARUAH at SemEval-2019 Task 5 : Bi-directional LSTM for Hate Speech Detection | In this paper, we present the results obtained using bi-directional long short-term memory (BiLSTM) with and without attention and Logistic Regression (LR) models for SemEval-2019 Task 5 titled "HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter". This paper presents the results obtained for Subtask A for English language. The results of the BiLSTM and LR models are compared for two different types of preprocessing. One with no stemming performed and no stopwords removed. The other with stemming performed and stopwords removed. The BiLSTM model without attention performed the best for the first test, while the LR model with character n-grams performed the best for the second test. The BiLSTM model obtained an F1 score of 0.51 on the test set and obtained an official ranking of 8/71. | https://aclanthology.org/S19-2065 | https://aclanthology.org/S19-2065.pdf | SEMEVAL 2019 6 | [
"Arup Baruah",
"Ferdous Barbhuiya",
"Kuntal Dey"
] | [
"Hate Speech Detection"
] | 1,559,347,200,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)",
"full_name": "Logistic Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Logistic Regression",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Bidirectional LSTM**, or **biLSTM**, is a sequence processing model that consists of two LSTMs: one taking the input in a forward direction, and the other in a backwards direction. BiLSTMs effectively increase the amount of information available to the network, improving the context available to the algorithm (e.g. knowing what words immediately follow *and* precede a word in a sentence).\r\n\r\nImage Source: Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks, Cornegruta et al",
"full_name": "Bidirectional LSTM",
"introduced_year": 2001,
"main_collection": {
"area": "General",
"description": "Consists of tabular data learning approaches that use deep learning architectures for learning on tabular data. According to the taxonomy in [V.Borisov et al. (2021)](https://paperswithcode.com/paper/deep-neural-networks-and-tabular-data-a), deep learning approaches for tabular data can be categorized into:\r\n\r\n- **Regularization models**\r\n- **Transformer-based models**: [TabNet](/method/tabnet), [TabTransformer](/method/tabtransformer), [SAINT](/method/saint), [ARM-Net](/method/arm-net),...\r\n- **Hybrid models** (fully differentiable and partly differentiable): [Wide&Deep](/method/wide-deep), [TabNN](/method/tabnn), [NON](/method/non), [Boost-GNN](/method/boost-gnn), [NODE](/method/node),...\r\n- **Data encoding methods** (single-dimensional encoding and multi-dimensional encoding): [VIME](/method/vime), [SCARF](/method/scarf),...",
"name": "Deep Tabular Learning",
"parent": null
},
"name": "BiLSTM",
"source_title": null,
"source_url": null
}
] | 53,937 |
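This record's method entries describe a bidirectional LSTM used for hate speech detection. The PyTorch sketch below shows a BiLSTM classifier of that general shape (no attention); the vocabulary size, dimensions, and two-class head are assumptions, not the authors' configuration.

```python
# Bidirectional LSTM classifier: embed tokens, run a BiLSTM, and classify
# from the concatenated final hidden states of both directions.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)  # forward + backward states

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(x)                # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)   # concatenate both directions
        return self.fc(h)

logits = BiLSTMClassifier()(torch.randint(1, 10000, (4, 20)))  # (4, 2)
```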
4,209 | https://paperswithcode.com/paper/response-ranking-with-deep-matching-networks | 1805.00188 | Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems | Intelligent personal assistant systems with either text-based or voice-based
conversational interfaces are becoming increasingly popular around the world.
Retrieval-based conversation models have the advantages of returning fluent and
informative responses. Most existing studies in this area are on open domain
"chit-chat" conversations or task / transaction oriented conversations. More
research is needed for information-seeking conversations. There is also a lack
of modeling external knowledge beyond the dialog utterances among current
conversational models. In this paper, we propose a learning framework on the
top of deep neural matching networks that leverages external knowledge for
response ranking in information-seeking conversation systems. We incorporate
external knowledge into deep neural models with pseudo-relevance feedback and
QA correspondence knowledge distillation. Extensive experiments with three
information-seeking conversation data sets including both open benchmarks and
commercial data show that, our methods outperform various baseline methods
including several deep text matching models and the state-of-the-art method on
response selection in multi-turn conversations. We also perform analysis over
different response types, model variations and ranking examples. Our models and
research findings provide new insights on how to utilize external knowledge
with deep neural models for response selection and have implications for the
design of the next generation of information-seeking conversation systems. | http://arxiv.org/abs/1805.00188v3 | http://arxiv.org/pdf/1805.00188v3.pdf | null | [
"Liu Yang",
"Minghui Qiu",
"Chen Qu",
"Jiafeng Guo",
"Yongfeng Zhang",
"W. Bruce Croft",
"Jun Huang",
"Haiqing Chen"
] | [
"Knowledge Distillation",
"Text Matching"
] | 1,525,132,800,000 | [] | 83,946 |
269,494 | https://paperswithcode.com/paper/deconfounded-training-for-graph-neural | 2112.15089 | Causal Attention for Interpretable and Generalizable Graph Classification | In graph classification, attention and pooling-based graph neural networks (GNNs) prevail to extract the critical features from the input graph and support the prediction. They mostly follow the paradigm of learning to attend, which maximizes the mutual information between the attended graph and the ground-truth label. However, this paradigm makes GNN classifiers recklessly absorb all the statistical correlations between input features and labels in the training data, without distinguishing the causal and noncausal effects of features. Instead of underscoring the causal features, the attended graphs are prone to visit the noncausal features as the shortcut to predictions. Such shortcut features might easily change outside the training distribution, thereby making the GNN classifiers suffer from poor generalization. In this work, we take a causal look at the GNN modeling for graph classification. With our causal assumption, the shortcut feature serves as a confounder between the causal feature and prediction. It tricks the classifier to learn spurious correlations that facilitate the prediction in in-distribution (ID) test evaluation, while causing the performance drop in out-of-distribution (OOD) test data. To endow the classifier with better interpretation and generalization, we propose the Causal Attention Learning (CAL) strategy, which discovers the causal patterns and mitigates the confounding effect of shortcuts. Specifically, we employ attention modules to estimate the causal and shortcut features of the input graph. We then parameterize the backdoor adjustment of causal theory -- combine each causal feature with various shortcut features. It encourages the stable relationships between the causal estimation and prediction, regardless of the changes in shortcut parts and distributions. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of CAL. | https://arxiv.org/abs/2112.15089v2 | https://arxiv.org/pdf/2112.15089v2.pdf | null | [
"Yongduo Sui",
"Xiang Wang",
"Jiancan Wu",
"Min Lin",
"Xiangnan He",
"Tat-Seng Chua"
] | [
"Classification",
"Graph Attention",
"Graph Classification"
] | 1,640,822,400,000 | [] | 108,996 |
14,677 | https://paperswithcode.com/paper/remedies-against-the-vocabulary-gap-in | 1711.06004 | Remedies against the Vocabulary Gap in Information Retrieval | Search engines rely heavily on term-based approaches that represent queries
and documents as bags of words. Text---a document or a query---is represented
by a bag of its words that ignores grammar and word order, but retains word
frequency counts. When presented with a search query, the engine then ranks
documents according to their relevance scores by computing, among other things,
the matching degrees between query and document terms. While term-based
approaches are intuitive and effective in practice, they are based on the
hypothesis that documents that exactly contain the query terms are highly
relevant regardless of query semantics. Inversely, term-based approaches assume
documents that do not contain query terms as irrelevant. However, it is known
that a high matching degree at the term level does not necessarily mean high
relevance and, vice versa, documents that match no query terms may still be
relevant. Consequently, there exists a vocabulary gap between queries and
documents that occurs when both use different words to describe the same
concepts. It is the alleviation of the effect brought forward by this
vocabulary gap that is the topic of this dissertation. More specifically, we
propose (1) methods to formulate an effective query from complex textual
structures and (2) latent vector space models that circumvent the vocabulary
gap in information retrieval. | http://arxiv.org/abs/1711.06004v1 | http://arxiv.org/pdf/1711.06004v1.pdf | null | [
"Christophe Van Gysel"
] | [
"Information Retrieval"
] | 1,510,790,400,000 | [] | 151,688 |
170,563 | https://paperswithcode.com/paper/pushing-the-limits-of-semi-supervised | 2010.10504 | Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition | We employ a combination of recent developments in semi-supervised learning for automatic speech recognition to obtain state-of-the-art results on LibriSpeech utilizing the unlabeled audio of the Libri-Light dataset. More precisely, we carry out noisy student training with SpecAugment using giant Conformer models pre-trained using wav2vec 2.0 pre-training. By doing so, we are able to achieve word-error-rates (WERs) 1.4%/2.6% on the LibriSpeech test/test-other sets against the current state-of-the-art WERs 1.7%/3.3%. | https://arxiv.org/abs/2010.10504v2 | https://arxiv.org/pdf/2010.10504v2.pdf | null | [
"Yu Zhang",
"James Qin",
"Daniel S. Park",
"Wei Han",
"Chung-Cheng Chiu",
"Ruoming Pang",
"Quoc V. Le",
"Yonghui Wu"
] | [
"Automatic Speech Recognition",
"Speech Recognition",
"Speech Recognition"
] | 1,603,152,000,000 | [
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/69d63f99a4929d61a98e0975a4ab45b554835b9e/gluon/gluoncv2/models/resdropresnet_cifar.py#L17",
"description": "**Stochastic Depth** aims to shrink the depth of a network during training, while\r\nkeeping it unchanged during testing. This is achieved by randomly dropping entire [ResBlocks](https://paperswithcode.com/method/residual-block) during training and bypassing their transformations through skip connections. \r\n\r\nLet $b\\_{l} \\in$ {$0, 1$} denote a Bernoulli random variable, which indicates whether the $l$th ResBlock is active ($b\\_{l} = 1$) or inactive ($b\\_{l} = 0$). Further, let us denote the “survival” probability of ResBlock $l$ as $p\\_{l} = \\text{Pr}\\left(b\\_{l} = 1\\right)$. With this definition we can bypass the $l$th ResBlock by multiplying its function $f\\_{l}$ with $b\\_{l}$ and we extend the update rule to:\r\n\r\n$$ H\\_{l} = \\text{ReLU}\\left(b\\_{l}f\\_{l}\\left(H\\_{l-1}\\right) + \\text{id}\\left(H\\_{l-1}\\right)\\right) $$\r\n\r\nIf $b\\_{l} = 1$, this reduces to the original [ResNet](https://paperswithcode.com/method/resnet) update and this ResBlock remains unchanged. If $b\\_{l} = 0$, the ResBlock reduces to the identity function, $H\\_{l} = \\text{id}\\left((H\\_{l}−1\\right)$.",
"full_name": "Stochastic Depth",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Stochastic Depth",
"source_title": "Deep Networks with Stochastic Depth",
"source_url": "http://arxiv.org/abs/1603.09382v3"
},
{
"code_snippet_url": "https://github.com/ildoonet/pytorch-randaugment",
"description": "**RandAugment** is an automated data augmentation method. The search space for data augmentation has 2 interpretable hyperparameter $N$ and $M$. $N$ is the number of augmentation transformations to apply sequentially, and $M$ is the magnitude for all the transformations. To reduce the parameter space but still maintain image diversity, learned policies and probabilities for applying each transformation are replaced with a parameter-free procedure of always selecting a transformation with uniform probability $\\frac{1}{K}$. Here $K$ is the number of transformation options. So given $N$ transformations for a training image, RandAugment may thus express $KN$ potential policies.\r\n\r\nTransformations applied include identity transformation, autoContrast, equalize, rotation, solarixation, colorjittering, posterizing, changing contrast, changing brightness, changing sharpness, shear-x, shear-y, translate-x, translate-y.",
"full_name": "RandAugment",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.",
"name": "Image Data Augmentation",
"parent": null
},
"name": "RandAugment",
"source_title": "RandAugment: Practical automated data augmentation with a reduced search space",
"source_url": "https://arxiv.org/abs/1909.13719v2"
},
{
"code_snippet_url": "https://github.com/google-research/noisystudent",
"description": "**Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training\r\nand distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps: \r\n\r\n1. train a teacher model on labeled images\r\n2. use the teacher to generate pseudo labels on unlabeled images\r\n3. train a student model on the combination of labeled images and pseudo labeled images. \r\n\r\nThe algorithm is iterated a few times by treating the student as a teacher to relabel the unlabeled data and training a new student.\r\n\r\nNoisy Student Training seeks to improve on self-training and distillation in two ways. First, it makes the student larger than, or at least equal to, the teacher so the student can better learn from a larger dataset. Second, it adds noise to the student so the noised student is forced to learn harder from the pseudo labels. To noise the student, it uses input noise such as [RandAugment](https://paperswithcode.com/method/randaugment) data augmentation, and model noise such as [dropout](https://paperswithcode.com/method/dropout) and [stochastic depth](https://paperswithcode.com/method/stochastic-depth) during training.",
"full_name": "Noisy Student",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Semi-Supervised Learning** methods leverage unlabelled data as well as labelled data to increase performance on machine learning tasks. Below you can find a continuously updating list of semi-supervised learning methods (this may have overlap with self-supervised methods due to evaluation protocol similarity).\r\n\r\n",
"name": "Semi-Supervised Learning Methods",
"parent": null
},
"name": "Noisy Student",
"source_title": "Self-training with Noisy Student improves ImageNet classification",
"source_url": "https://arxiv.org/abs/1911.04252v4"
}
] | 43,953 |
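The Noisy Student method entry above outlines a three-step self-training loop: train a teacher on labeled data, pseudo-label the unlabeled data, and train a noised, equal-or-larger student, then iterate. A schematic sketch of that loop follows; `build_model`, `train`, `predict`, and `add_noise` are hypothetical helpers standing in for model-specific code.

```python
# Schematic Noisy Student self-training loop. The callables are placeholders
# for framework-specific training, inference, and noising (e.g. dropout,
# stochastic depth, data augmentation).
def noisy_student(labeled, unlabeled, build_model, train, predict, add_noise, rounds=3):
    teacher = train(build_model(), labeled)                        # step 1: supervised teacher
    for _ in range(rounds):
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]     # step 2: pseudo labels
        student = build_model()                                    # equal-or-larger student
        student = train(add_noise(student), labeled + pseudo)      # step 3: noised training
        teacher = student                                          # iterate: student becomes teacher
    return teacher
```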
252,096 | https://paperswithcode.com/paper/transductive-data-augmentation-with | 2111.00974 | Transductive Data Augmentation with Relational Path Rule Mining for Knowledge Graph Embedding | For knowledge graph completion, two major types of prediction models exist: one based on graph embeddings, and the other based on relation path rule induction. They have different advantages and disadvantages. To take advantage of both types, hybrid models have been proposed recently. One of the hybrid models, UniKER, alternately augments training data by relation path rules and trains an embedding model. Despite its high prediction accuracy, it does not take full advantage of relation path rules, as it disregards low-confidence rules in order to maintain the quality of augmented data. To mitigate this limitation, we propose transductive data augmentation by relation path rules and confidence-based weighting of augmented data. The results and analysis show that our proposed method effectively improves the performance of the embedding model by augmenting data that include true answers or entities similar to them. | https://arxiv.org/abs/2111.00974v1 | https://arxiv.org/pdf/2111.00974v1.pdf | null | [
"Yushi Hirose",
"Masashi Shimbo",
"Taro Watanabe"
] | [
"Data Augmentation",
"Graph Embedding",
"Knowledge Graph Completion",
"Knowledge Graph Embedding"
] | 1,635,724,800,000 | [] | 139,170 |
166,499 | https://paperswithcode.com/paper/trace-tensorizing-and-generalizing-supernets | null | TRACE: Tensorizing and Generalizing Supernets from Neural Architecture Search | Recently, a special kind of graph, i.e., supernet, which allows two nodes connected by multi-choice edges, has exhibited its power in neural architecture search (NAS) by searching better architectures for computer vision (CV) and natural language processing (NLP) tasks. In this paper, we discover that the design of such discrete architectures also appears in many other important learning tasks, e.g., logic chain inference in knowledge graphs (KGs) and meta-path discovery in heterogeneous information networks (HINs). Thus, we are motivated to generalize the supernet search problem on a broader horizon. However, none of the existing works are effective since the supernet's topology is highly task-dependent and diverse. To address this issue, we propose to tensorize the supernet, i.e., unify the above problems by a tensor formulation and encode the topology inside the supernet by a tensor network. We further propose an efficient algorithm that admits both stochastic and deterministic objectives to solve the search problem. Finally, we perform extensive experiments on diverse learning tasks, i.e., architecture design for CV, logic inference for KG, and meta-path discovery for HIN. Empirical results demonstrate that our method leads to better performance and architectures. | https://openreview.net/forum?id=Oi-Kh379U0 | https://openreview.net/pdf?id=Oi-Kh379U0 | null | [
"Hansi Yang",
"Quanming Yao"
] | [
"Knowledge Graphs",
"Neural Architecture Search"
] | 1,609,459,200,000 | [] | 86,039 |
293,312 | https://paperswithcode.com/paper/on-distributed-adaptive-optimization-with-1 | 2205.05632 | On Distributed Adaptive Optimization with Gradient Compression | We study COMP-AMS, a distributed optimization framework based on gradient averaging and adaptive AMSGrad algorithm. Gradient compression with error feedback is applied to reduce the communication cost in the gradient transmission process. Our convergence analysis of COMP-AMS shows that such compressed gradient averaging strategy yields same convergence rate as standard AMSGrad, and also exhibits the linear speedup effect w.r.t. the number of local workers. Compared with recently proposed protocols on distributed adaptive methods, COMP-AMS is simple and convenient. Numerical experiments are conducted to justify the theoretical findings, and demonstrate that the proposed method can achieve same test accuracy as the full-gradient AMSGrad with substantial communication savings. With its simplicity and efficiency, COMP-AMS can serve as a useful distributed training framework for adaptive gradient methods. | https://arxiv.org/abs/2205.05632v1 | https://arxiv.org/pdf/2205.05632v1.pdf | ICLR 2022 4 | [
"Xiaoyun Li",
"Belhal Karimi",
"Ping Li"
] | [
"Distributed Optimization"
] | 1,652,227,200,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**AMSGrad** is a stochastic optimization method that seeks to fix a convergence issue with [Adam](https://paperswithcode.com/method/adam) based optimizers. AMSGrad uses the maximum of past squared gradients \r\n$v\\_{t}$ rather than the exponential average to update the parameters:\r\n\r\n$$m\\_{t} = \\beta\\_{1}m\\_{t-1} + \\left(1-\\beta\\_{1}\\right)g\\_{t} $$\r\n\r\n$$v\\_{t} = \\beta\\_{2}v\\_{t-1} + \\left(1-\\beta\\_{2}\\right)g\\_{t}^{2}$$\r\n\r\n$$ \\hat{v}\\_{t} = \\max\\left(\\hat{v}\\_{t-1}, v\\_{t}\\right) $$\r\n\r\n$$\\theta\\_{t+1} = \\theta\\_{t} - \\frac{\\eta}{\\sqrt{\\hat{v}_{t}} + \\epsilon}m\\_{t}$$",
"full_name": "AMSGrad",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "AMSGrad",
"source_title": "On the Convergence of Adam and Beyond",
"source_url": "http://arxiv.org/abs/1904.09237v1"
}
] | 13,292 |
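The AMSGrad method description stored with the entry above already gives the update equations; the sketch below simply transcribes them into NumPy for readability. The toy quadratic objective, step count, and hyperparameter values are illustrative assumptions rather than settings from the paper or the dataset.

```python
# Plain-NumPy transcription of the AMSGrad update equations quoted in the
# method description above; the objective and hyperparameters are illustrative.
import numpy as np

def amsgrad(grad_fn, theta, steps=1000, lr=1e-2,
            beta1=0.9, beta2=0.999, eps=1e-8):
    m = np.zeros_like(theta)       # first-moment estimate
    v = np.zeros_like(theta)       # second-moment estimate
    v_hat = np.zeros_like(theta)   # running maximum of v (the AMSGrad fix)
    for _ in range(steps):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        v_hat = np.maximum(v_hat, v)                  # max of past v, not plain v
        theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta

# Toy usage: minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
x0 = np.zeros(4)
print(amsgrad(lambda x: 2 * (x - 3.0), x0))  # moves toward [3, 3, 3, 3]
```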
172,745 | https://paperswithcode.com/paper/diverse-image-captioning-with-context-object | 2011.00966 | Diverse Image Captioning with Context-Object Split Latent Spaces | Diverse image captioning models aim to learn one-to-many mappings that are innate to cross-domain datasets, such as those of images and texts. Current methods for this task are based on generative latent variable models, e.g. VAEs with structured latent spaces. Yet, the amount of multimodality captured by prior work is limited to that of the paired training data -- the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage the contextual descriptions in the dataset that explain similar contexts in different visual scenes. To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset. Our framework not only enables diverse captioning through context-based pseudo supervision, but also extends this to images with novel objects and without paired captions in the training data. We evaluate our COS-CVAE approach on the standard COCO dataset and on the held-out COCO dataset consisting of images with novel objects, showing significant gains in accuracy and diversity. | https://arxiv.org/abs/2011.00966v1 | https://arxiv.org/pdf/2011.00966v1.pdf | NeurIPS 2020 12 | [
"Shweta Mahajan",
"Stefan Roth"
] | [
"Image Captioning"
] | 1,604,275,200,000 | [] | 45,245 |
74,229 | https://paperswithcode.com/paper/model-level-dual-learning | null | Model-Level Dual Learning | Many artificial intelligence tasks appear in dual forms like English$\leftrightarrow$French translation and speech$\leftrightarrow$text transformation. Existing dual learning schemes, which are proposed to solve a pair of such dual tasks, explore how to leverage such dualities from the data level. In this work, we propose a new learning framework, model-level dual learning, which takes the duality of tasks into consideration while designing the architectures for the primal/dual models, and ties the model parameters that play similar roles in the two tasks. We study both symmetric and asymmetric model-level dual learning. Our algorithms achieve significant improvements on neural machine translation and sentiment analysis. | https://icml.cc/Conferences/2018/Schedule?showEvent=2172 | http://proceedings.mlr.press/v80/xia18a/xia18a.pdf | ICML 2018 7 | [
"Yingce Xia",
"Xu Tan",
"Fei Tian",
"Tao Qin",
"Nenghai Yu",
"Tie-Yan Liu"
] | [
"Machine Translation",
"Sentiment Analysis"
] | 1,530,403,200,000 | [] | 171,295 |
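The abstract above describes tying the model parameters that play similar roles in the primal and dual tasks. The fragment below is a hypothetical PyTorch illustration of that kind of sharing, assuming a toy GRU sequence-to-sequence layout; the module structure, vocabulary sizes, and the choice of shared embedding tables are assumptions for illustration, not the architecture proposed in the paper.

```python
# Hypothetical illustration of model-level parameter tying between a primal
# (en -> fr) and a dual (fr -> en) translation model: the embedding tables that
# play the same role in both directions are shared rather than duplicated.
import torch.nn as nn

EN_VOCAB, FR_VOCAB, DIM = 10_000, 12_000, 256

# One table per language, reused on both sides of the task pair.
en_embedding = nn.Embedding(EN_VOCAB, DIM)
fr_embedding = nn.Embedding(FR_VOCAB, DIM)

class Seq2Seq(nn.Module):
    def __init__(self, src_embedding, tgt_embedding):
        super().__init__()
        self.src_embedding = src_embedding            # shared module, not a copy
        self.tgt_embedding = tgt_embedding
        self.encoder = nn.GRU(DIM, DIM, batch_first=True)
        self.decoder = nn.GRU(DIM, DIM, batch_first=True)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.src_embedding(src_ids))
        out, _ = self.decoder(self.tgt_embedding(tgt_ids), h)
        return out

primal = Seq2Seq(en_embedding, fr_embedding)   # English -> French
dual = Seq2Seq(fr_embedding, en_embedding)     # French -> English

# Gradients from either direction update the same underlying parameter tensors.
assert primal.src_embedding.weight is dual.tgt_embedding.weight
```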