Given the following machine learning model name: Instance-Level Meta Normalization, provide a description of the model
**Instance-Level Meta Normalization** is a normalization method that addresses a learning-to-normalize problem. ILM-Norm learns to predict the normalization parameters via both the feature feed-forward and the gradient back-propagation paths. It uses an auto-encoder to predict the weights $\omega$ and bias $\beta$ as the rescaling parameters for recovering the distribution of the tensor $x$ of feature maps. Instead of using the entire feature tensor $x$ as the input for the auto-encoder, it uses the mean $\mu$ and variance $\sigma^{2}$ of $x$ to characterize its statistics.
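As a hedged sketch of the idea in plain NumPy (`ilm_norm_sketch` and the parameter-predicting lambda are invented stand-ins; the real method uses an auto-encoder to predict the rescaling parameters from the statistics):

```python
import numpy as np

def ilm_norm_sketch(x, predict_params):
    """Normalize x by its instance statistics, then rescale with (omega, beta)
    predicted from those same statistics rather than from the full tensor."""
    mu, var = x.mean(), x.var()
    omega, beta = predict_params(mu, var)      # stand-in for the auto-encoder
    x_hat = (x - mu) / np.sqrt(var + 1e-5)
    return omega * x_hat + beta

x = np.array([1.0, 2.0, 3.0, 4.0])
y = ilm_norm_sketch(x, predict_params=lambda mu, var: (1.0, 0.0))
assert np.isclose(y.mean(), 0.0, atol=1e-6)    # standardized before rescaling
```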
Given the following machine learning model name: Feature-Aligned Person Search Network, provide a description of the model
**AlignPS**, or **Feature-Aligned Person Search Network**, is an anchor-free framework for efficient person search. The model employs the typical architecture of an anchor-free detection model (i.e., [FCOS](https://paperswithcode.com/method/fcos)). An aligned feature aggregation (AFA) module is designed to make the model focus more on the re-id subtask. Specifically, AFA reshapes some building blocks of [FPN](https://paperswithcode.com/method/fpn) to overcome the issues of region and scale misalignment in re-id feature learning. A [deformable convolution](https://paperswithcode.com/method/deformable-convolution) is exploited to make the re-id embeddings adaptively aligned with the foreground regions. A feature fusion scheme is designed to better aggregate features from different FPN levels, which makes the re-id features more robust to scale variations. The training procedures of re-id and detection are also optimized to place more emphasis on generating robust re-id embeddings.
Given the following machine learning model name: Big-Little Module, provide a description of the model
**Big-Little Modules** are blocks for image models that have two branches: each of which represents a separate block from a deep model and a less deep counterpart. They were proposed as part of the [BigLittle-Net](https://paperswithcode.com/method/big-little-net) architecture. The two branches are fused with a linear combination and unit weights. These two branches are known as Big-Branch (more layers and channels at low resolutions) and Little-Branch (fewer layers and channels at high resolution).
Given the following machine learning model name: Runge Kutta optimization, provide a description of the model
The optimization field suffers from metaphor-based “pseudo-novel” or “fancy” optimizers. Most of these clichéd methods mimic animals' searching trends and contribute little to the optimization process itself. They suffer from locally efficient performance, biased verification on easy problems, and high similarity between their components' interactions. This study attempts to go beyond the traps of metaphors and introduces a metaphor-free population-based optimization method built on the mathematical foundations and ideas of the Runge Kutta (RK) method, well known in mathematics. The proposed RUNge Kutta optimizer (RUN) was developed to deal with various types of optimization problems. RUN utilizes the logic of slope variations computed by the RK method as a promising and logical search mechanism for global optimization. This search mechanism benefits from two active exploration and exploitation phases for exploring promising regions in the feature space and moving constructively toward the global best solution. Furthermore, an enhanced solution quality (ESQ) mechanism is employed to avoid local optima and increase convergence speed. RUN's efficiency was evaluated against other metaheuristic algorithms on 50 mathematical test functions and four real-world engineering problems. RUN provided very promising and competitive results, showing superior exploration and exploitation tendencies, a fast convergence rate, and local-optima avoidance. On the constrained engineering problems, the metaphor-free RUN also demonstrated suitable performance. The authors invite the community to evaluate this optimizer extensively as a promising tool for real-world optimization.
Given the following machine learning model name: Autoencoders, provide a description of the model
An **autoencoder** is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Extracted from: [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder) Image source: [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)
Given the following machine learning model name: Sparse Layer-wise Adaptive Moments optimizer for large Batch training, provide a description of the model
Given the following machine learning model name: Generalized State-Dependent Exploration, provide a description of the model
**Generalized State-Dependent Exploration**, or **gSDE**, is an exploration method for reinforcement learning that uses more general features and re-samples the noise periodically. State-Dependent Exploration (SDE) is an intermediate solution for exploration that consists in adding noise as a function of the state $s\_{t}$ to the deterministic action $\mu\left(\mathbf{s}\_{t}\right)$. At the beginning of an episode, the parameters $\theta\_{\epsilon}$ of that exploration function are drawn from a Gaussian distribution. The resulting action $\mathbf{a}\_{t}$ is as follows: $$ \mathbf{a}\_{t}=\mu\left(\mathbf{s}\_{t} ; \theta\_{\mu}\right)+\epsilon\left(\mathbf{s}\_{t} ; \theta\_{\epsilon}\right), \quad \theta\_{\epsilon} \sim \mathcal{N}\left(0, \sigma^{2}\right) $$ This episode-based exploration is smoother and more consistent than unstructured step-based exploration. Thus, during one episode, instead of oscillating around a mean value, the action $\mathbf{a}$ for a given state $\mathbf{s}$ will be the same. In the case of a linear exploration function $\epsilon\left(\mathbf{s} ; \theta\_{\epsilon}\right)=\theta\_{\epsilon} \mathbf{s}$, by operation on Gaussian distributions, Rückstieß et al. show that the action element $\mathbf{a}\_{j}$ is normally distributed: $$ \pi\_{j}\left(\mathbf{a}\_{j} \mid \mathbf{s}\right) \sim \mathcal{N}\left(\mu\_{j}(\mathbf{s}), \hat{\sigma}\_{j}^{2}\right) $$ where $\hat{\sigma}$ is a diagonal matrix with elements $\hat{\sigma}\_{j}=\sqrt{\sum\_{i}\left(\sigma\_{i j} \mathbf{s}\_{i}\right)^{2}}$. 
Because we know the policy distribution, we can obtain the derivative of the log-likelihood $\log \pi(\mathbf{a} \mid \mathbf{s})$ with respect to the variance $\sigma$: $$ \frac{\partial \log \pi(\mathbf{a} \mid \mathbf{s})}{\partial \sigma_{i j}}=\frac{\left(\mathbf{a}\_{j}-\mu\_{j}\right)^{2}-\hat{\sigma}\_{j}^{2}}{\hat{\sigma}\_{j}^{3}} \frac{\mathbf{s}\_{i}^{2} \sigma\_{i j}}{\hat{\sigma}\_{j}} $$ This can be easily plugged into the likelihood-ratio gradient estimator, which makes it possible to adapt $\sigma$ during training. SDE is therefore compatible with standard policy gradient methods, while addressing most shortcomings of unstructured exploration. For gSDE, two improvements are suggested: 1. We sample the parameters $\theta\_{\epsilon}$ of the exploration function every $n$ steps instead of every episode. 2. Instead of the state $\mathbf{s}$, we can in fact use any features. We choose policy features $\mathbf{z}\_{\mu}\left(\mathbf{s} ; \theta\_{\mathbf{z}\_{\mu}}\right)$ (the last layer before the deterministic output $\mu(\mathbf{s})=\theta\_{\mu} \mathbf{z}\_{\mu}\left(\mathbf{s} ; \theta\_{\mathbf{z}\_{\mu}}\right)$) as input to the noise function $\epsilon\left(\mathbf{s} ; \theta\_{\epsilon}\right)=\theta\_{\epsilon} \mathbf{z}\_{\mu}(\mathbf{s})$.
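The periodic re-sampling of $\theta\_{\epsilon}$ can be sketched as follows (plain NumPy; all dimensions and names are invented for illustration, and the policy features and deterministic output are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, act_dim, sigma = 3, 2, 0.1   # hypothetical sizes
n_resample = 4                          # gSDE: re-sample theta_eps every n steps

def gsde_action(mu, z, theta_eps):
    """a = mu(s) + eps(s) with linear exploration noise eps = theta_eps @ z."""
    return mu + theta_eps @ z

actions, theta_eps = [], None
for step in range(8):
    if step % n_resample == 0:                      # periodic re-sampling
        theta_eps = rng.normal(0.0, sigma, size=(act_dim, feat_dim))
    z = np.ones(feat_dim)                           # placeholder policy features z_mu(s)
    mu = np.zeros(act_dim)                          # placeholder deterministic output
    actions.append(gsde_action(mu, z, theta_eps))

# For a fixed state, the noise is constant within one re-sampling window...
assert np.allclose(actions[0], actions[3])
# ...and changes when theta_eps is re-drawn.
assert not np.allclose(actions[3], actions[4])
```

Setting `n_resample` to the episode length recovers plain SDE; smaller values trade smoothness for more frequent exploration.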
Given the following machine learning model name: Inpainting, provide a description of the model
**Inpainting** here refers to training a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.
Given the following machine learning model name: Convolutional GRU, provide a description of the model
A **Convolutional Gated Recurrent Unit** is a type of [GRU](https://paperswithcode.com/method/gru) that combines GRUs with the [convolution](https://paperswithcode.com/method/convolution) operation. The update rule for input $x\_{t}$ and the previous output $h\_{t-1}$ is given by the following: $$ r = \sigma\left(W\_{r} \star\_{n}\left[h\_{t-1};x\_{t}\right] + b\_{r}\right) $$ $$ u = \sigma\left(W\_{u} \star\_{n}\left[h\_{t-1};x\_{t}\right] + b\_{u} \right) $$ $$ c = \rho\left(W\_{c} \star\_{n}\left[x\_{t}; r \odot h\_{t-1}\right] + b\_{c} \right) $$ $$ h\_{t} = u \odot h\_{t-1} + \left(1-u\right) \odot c $$ In these equations $\sigma$ and $\rho$ are the elementwise sigmoid and [ReLU](https://paperswithcode.com/method/relu) functions respectively and the $\star\_{n}$ represents a convolution with a kernel of size $n \times n$. Brackets are used to represent a feature concatenation.
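A minimal sketch of the update rule, assuming $1 \times 1$ kernels so each convolution reduces to a per-pixel channel-mixing matmul (all names are invented; a real implementation would use $n \times n$ kernels):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_gru_step(x_t, h_prev, Wr, Wu, Wc, br, bu, bc):
    """One ConvGRU update; star_n is approximated by a 1x1 'convolution',
    i.e. a channel-mixing matmul applied at every spatial position."""
    xh = np.concatenate([h_prev, x_t], axis=0)               # [h_{t-1}; x_t]
    r = sigmoid(np.tensordot(Wr, xh, axes=1) + br)           # reset gate
    u = sigmoid(np.tensordot(Wu, xh, axes=1) + bu)           # update gate
    xrh = np.concatenate([x_t, r * h_prev], axis=0)          # [x_t; r ⊙ h_{t-1}]
    c = np.maximum(np.tensordot(Wc, xrh, axes=1) + bc, 0.0)  # ReLU candidate
    return u * h_prev + (1.0 - u) * c                        # h_t

C, H, W = 2, 4, 4
rng = np.random.default_rng(0)
Wr, Wu, Wc = (rng.normal(size=(C, 2 * C)) for _ in range(3))
br = bu = bc = np.zeros((C, 1, 1))
h = conv_gru_step(rng.normal(size=(C, H, W)), np.zeros((C, H, W)),
                  Wr, Wu, Wc, br, bu, bc)
assert h.shape == (C, H, W)
```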
Given the following machine learning model name: Gradient Harmonizing Mechanism C, provide a description of the model
**GHM-C** is a loss function designed to balance the gradient flow for anchor classification. The GHM first performs statistics on the number of examples with similar attributes w.r.t their gradient density and then attaches a harmonizing parameter to the gradient of each example according to the density. The modification of gradient can be equivalently implemented by reformulating the loss function. Embedding the GHM into the classification loss is denoted as GHM-C loss. Since the gradient density is a statistical variable depending on the examples distribution in a mini-batch, GHM-C is a dynamic loss that can adapt to the change of data distribution in each batch as well as to the updating of the model.
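A rough sketch of the density-based harmonization (plain NumPy; gradient density is approximated here by bin counts over gradient norms, and `ghm_weights` is an invented name, not the authors' API):

```python
import numpy as np

def ghm_weights(grad_norms, bins=10):
    """Count examples per gradient-norm bin (an approximation of gradient
    density) and weight each example by N / density, so gradient regions
    crowded with very easy or very hard examples are down-weighted."""
    g = np.clip(np.asarray(grad_norms, dtype=float), 0.0, 1.0 - 1e-6)
    idx = (g * bins).astype(int)               # which bin each example falls in
    counts = np.bincount(idx, minlength=bins)  # examples per bin
    density = counts[idx].astype(float)        # density at each example
    return len(g) / density

g = [0.01, 0.02, 0.03, 0.95]   # many easy examples, one hard outlier
w = ghm_weights(g)
assert w[3] > w[0]             # the rare example gets a larger weight
```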
Given the following machine learning model name: CBHG, provide a description of the model
**CBHG** is a building block used in the [Tacotron](https://paperswithcode.com/method/tacotron) text-to-speech model. It consists of a bank of 1-D convolutional filters, followed by highway networks and a bidirectional gated recurrent unit ([BiGRU](https://paperswithcode.com/method/bigru)). The module is used to extract representations from sequences. The input sequence is first convolved with $K$ sets of 1-D convolutional filters, where the $k$-th set contains $C\_{k}$ filters of width $k$ (i.e. $k = 1, 2, \dots , K$). These filters explicitly model local and contextual information (akin to modeling unigrams, bigrams, up to K-grams). The [convolution](https://paperswithcode.com/method/convolution) outputs are stacked together and further max pooled along time to increase local invariances. A stride of 1 is used to preserve the original time resolution. The processed sequence is further passed to a few fixed-width 1-D convolutions, whose outputs are added with the original input sequence via residual connections. [Batch normalization](https://paperswithcode.com/method/batch-normalization) is used for all convolutional layers. The convolution outputs are fed into a multi-layer [highway network](https://paperswithcode.com/method/highway-network) to extract high-level features. Finally, a bidirectional [GRU](https://paperswithcode.com/method/gru) RNN is stacked on top to extract sequential features from both forward and backward context.
Given the following machine learning model name: BinaryBERT, provide a description of the model
**BinaryBERT** is a [BERT](https://paperswithcode.com/method/bert)-variant that applies quantization in the form of weight binarization. Specifically, ternary weight splitting is proposed which initializes BinaryBERT by equivalently splitting from a half-sized ternary network. To obtain BinaryBERT, we first train a half-sized [ternary BERT](https://paperswithcode.com/method/ternarybert) model, and then apply a [ternary weight splitting](https://paperswithcode.com/method/ternary-weight-splitting) operator to obtain the latent full-precision and quantized weights as the initialization of the full-sized BinaryBERT. We then fine-tune BinaryBERT for further refinement.
Given the following machine learning model name: Big-Little Net, provide a description of the model
**Big-Little Net** is a convolutional neural network architecture for learning multi-scale feature representations. This is achieved by using a multi-branch network, which has different computational complexity at different branches with different resolutions. Through frequent merging of features from branches at distinct scales, the model obtains multi-scale features while using less computation. It consists of Big-Little Modules, which have two branches: each of which represents a separate block from a deep model and a less deep counterpart. The two branches are fused with linear combination + unit weights. These two branches are known as Big-Branch (more layers and channels at low resolutions) and Little-Branch (fewer layers and channels at high resolution).
Given the following machine learning model name: DeCLUTR, provide a description of the model
**DeCLUTR** is an approach for learning universal sentence embeddings that utilizes a self-supervised objective that does not require labelled training data. The objective learns universal sentence embeddings by training an encoder to minimize the distance between the embeddings of textual segments randomly sampled from nearby in the same document.
Given the following machine learning model name: Highway Layer, provide a description of the model
A **Highway Layer** contains an information highway to other layers that helps with information flow. It is characterised by the use of a gating unit to help this information flow. A plain feedforward neural network typically consists of $L$ layers where the $l$-th layer ($l \in \left\{1, 2, \dots, L\right\}$) applies a nonlinear transform $H$ (parameterized by $\mathbf{W\_{H,l}}$) on its input $\mathbf{x\_{l}}$ to produce its output $\mathbf{y\_{l}}$. Thus, $\mathbf{x\_{1}}$ is the input to the network and $\mathbf{y\_{L}}$ is the network’s output. Omitting the layer index and biases for clarity, $$ \mathbf{y} = H\left(\mathbf{x},\mathbf{W\_{H}}\right) $$ $H$ is usually an affine transform followed by a non-linear activation function, but in general it may take other forms. For a [highway network](https://paperswithcode.com/method/highway-network), we additionally define two nonlinear transforms $T\left(\mathbf{x},\mathbf{W\_{T}}\right)$ and $C\left(\mathbf{x},\mathbf{W\_{C}}\right)$ such that: $$ \mathbf{y} = H\left(\mathbf{x},\mathbf{W\_{H}}\right)\cdot T\left(\mathbf{x},\mathbf{W\_{T}}\right) + \mathbf{x}\cdot C\left(\mathbf{x},\mathbf{W\_{C}}\right)$$ We refer to $T$ as the transform gate and $C$ as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. In the original paper, the authors set $C = 1 - T$, giving: $$ \mathbf{y} = H\left(\mathbf{x},\mathbf{W\_{H}}\right)\cdot T\left(\mathbf{x},\mathbf{W\_{T}}\right) + \mathbf{x}\cdot\left(1-T\left(\mathbf{x},\mathbf{W\_{T}}\right)\right)$$ The authors set: $$ T\left(\mathbf{x}\right) = \sigma\left(\mathbf{W\_{T}}^{T}\mathbf{x} + \mathbf{b\_{T}}\right) $$ Image: [Sik-Ho Tsang](https://towardsdatascience.com/review-highway-networks-gating-function-to-highway-image-classification-5a33833797b5)
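The gating equations with $C = 1 - T$ can be sketched in plain NumPy (names invented; $H$ is taken as an affine map followed by tanh for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_H, b_H, W_T, b_T):
    """y = H(x) * T(x) + x * (1 - T(x)), i.e. the carry gate is C = 1 - T."""
    h = np.tanh(W_H @ x + b_H)      # transform H: affine + nonlinearity
    t = sigmoid(W_T @ x + b_T)      # transform gate T
    return h * t + x * (1.0 - t)

d = 4
x = np.ones(d)
# With a strongly negative gate bias, T ≈ 0 and the layer passes x through.
y = highway_layer(x, np.eye(d), np.zeros(d), np.eye(d), -20.0 * np.ones(d))
assert np.allclose(y, x, atol=1e-6)
```

The gate bias is typically initialized negative so early training favors the carry behavior, which is what makes very deep highway networks trainable.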
Given the following machine learning model name: Generative Emotion Estimator, provide a description of the model
Given the following machine learning model name: PolarMask, provide a description of the model
**PolarMask** is an anchor-box-free and single-shot instance segmentation method. Specifically, PolarMask takes an image as input, predicts the distance from a sampled positive location (i.e., a candidate object's center) to the object's contour at each angle, and then assembles the predicted points to produce the final mask. There are several benefits to the system: (1) the polar representation unifies instance segmentation (masks) and object detection (bounding boxes) into a single framework; (2) two modules (i.e., soft polar centerness and polar IoU loss) are designed to sample high-quality center examples and optimize polar contour regression, so that the performance of PolarMask does not depend on the bounding-box prediction results and training is more efficient; (3) PolarMask is fully convolutional and can be embedded into most off-the-shelf detection methods.
Given the following machine learning model name: KNN and IOU based verification, provide a description of the model
**KNN and IoU-based Verification** is used to verify detections and choose between multiple detections of the same underlying object. It was originally used within the context of blood cell counting in medical images. To avoid double counting, the KNN algorithm is applied to each platelet to determine its closest platelet, and the intersection over union (IoU) between the two platelets then measures their extent of overlap. The authors allow 10% overlap between a platelet and its closest platelet based on empirical observations. If the overlap is larger than that, the cell is ignored as a double count to get rid of spurious counting.
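A simplified sketch of the overlap check (pure Python; this replaces the explicit KNN step with a greedy pass over detections, and all names and the box format are invented for illustration):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_double_counts(boxes, max_overlap=0.10):
    """Keep a detection unless it overlaps an already-kept one by > max_overlap
    (the 10% threshold comes from the description above)."""
    kept = []
    for b in boxes:
        if all(iou(b, k) <= max_overlap for k in kept):
            kept.append(b)
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
assert len(filter_double_counts(boxes)) == 2   # the near-duplicate is dropped
```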
Given the following machine learning model name: ByteScheduler, provide a description of the model
**ByteScheduler** is a generic communication scheduler for distributed DNN training acceleration. It is based on the analysis that partitioning and rearranging tensor transmissions can yield optimal results in theory and good performance in real-world settings, even with scheduling overhead.
Given the following machine learning model name: Transductive Inference, provide a description of the model
**Transductive Inference** reasons from observed, specific (training) cases to specific (test) cases. In contrast to induction, which first learns a general rule from the training data and then applies it to unseen cases, transduction exploits the unlabeled test inputs themselves during learning.
Given the following machine learning model name: GAN Feature Matching, provide a description of the model
**Feature Matching** is a regularizing objective for a generator in [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks) that prevents it from overtraining on the current discriminator. Instead of directly maximizing the output of the discriminator, the new objective requires the generator to generate data that matches the statistics of the real data, where we use the discriminator only to specify the statistics that we think are worth matching. Specifically, we train the generator to match the expected value of the features on an intermediate layer of the discriminator. This is a natural choice of statistics for the generator to match, since by training the discriminator we ask it to find those features that are most discriminative of real data versus data generated by the current model. Letting $\mathbf{f}\left(\mathbf{x}\right)$ denote activations on an intermediate layer of the discriminator, our new objective for the generator is defined as: $ ||\mathbb{E}\_{x\sim p\_{data} } \mathbf{f}\left(\mathbf{x}\right) − \mathbb{E}\_{\mathbf{z}∼p\_{\mathbf{z}}\left(\mathbf{z}\right)}\mathbf{f}\left(G\left(\mathbf{z}\right)\right)||^{2}\_{2} $. The discriminator, and hence $\mathbf{f}\left(\mathbf{x}\right)$, are trained as with vanilla GANs. As with regular [GAN](https://paperswithcode.com/method/gan) training, the objective has a fixed point where G exactly matches the distribution of training data.
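The objective can be sketched in plain NumPy (the features would come from an intermediate discriminator layer; here they are faked with arrays so the names and shapes are purely illustrative):

```python
import numpy as np

def feature_matching_loss(f_real, f_fake):
    """|| E_x f(x) - E_z f(G(z)) ||_2^2 over batch-mean intermediate features."""
    return float(np.sum((f_real.mean(axis=0) - f_fake.mean(axis=0)) ** 2))

rng = np.random.default_rng(0)
f_real = rng.normal(size=(64, 16))   # discriminator features on real data
f_fake = f_real.copy()               # perfectly matched statistics
assert feature_matching_loss(f_real, f_fake) == 0.0
assert feature_matching_loss(f_real, f_fake + 1.0) > 0.0
```

Note the loss compares only the first moments of the feature distributions, which is exactly why it regularizes the generator rather than replacing the adversarial objective.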
Given the following machine learning model name: BiFPN, provide a description of the model
A **BiFPN**, or **Weighted Bi-directional Feature Pyramid Network**, is a type of feature pyramid network which allows easy and fast multi-scale feature fusion. It incorporates the multi-level feature fusion idea from [FPN](https://paperswithcode.com/method/fpn), [PANet](https://paperswithcode.com/method/panet) and [NAS-FPN](https://paperswithcode.com/method/nas-fpn) that enables information to flow in both the top-down and bottom-up directions, while using regular and efficient connections. It also utilizes a fast normalized fusion technique. Traditional approaches usually treat all features input to the FPN equally, even those with different resolutions. However, input features at different resolutions often have unequal contributions to the output features. Thus, the BiFPN adds an additional weight for each input feature, allowing the network to learn the importance of each. All regular convolutions are also replaced with less expensive depthwise separable convolutions. Compared with PANet, which added an extra bottom-up path for information flow at the expense of more computational cost, BiFPN optimizes these cross-scale connections by removing nodes with a single input edge, adding an extra edge from the original input to the output node if they are on the same level, and treating each bidirectional path as one feature network layer (repeating it several times for more high-level feature fusion).
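The fast normalized fusion mentioned above can be sketched in plain NumPy, assuming the published form $O = \sum_i \frac{w_i}{\epsilon + \sum_j w_j} I_i$ with weights kept non-negative (e.g. via ReLU); the function name is invented:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Weighted fusion with weights normalized to [0, 1] without a softmax,
    which is cheaper to compute on hardware than exponentials."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # keep w_i >= 0
    w = w / (eps + w.sum())                                # normalize
    return sum(wi * f for wi, f in zip(w, features))

a, b = np.ones((2, 2)), 3.0 * np.ones((2, 2))
fused = fast_normalized_fusion([a, b], [1.0, 1.0])
assert np.allclose(fused, 2.0, atol=1e-3)   # equal weights ≈ plain average
```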
Given the following machine learning model name: NeuroTactic, provide a description of the model
**NeuroTactic** is a model for theorem proving which leverages [graph neural networks](https://paperswithcode.com/methods/category/graph-models) to represent the theorem and premises, and applies graph contrastive learning for pre-training. Specifically, premise selection is designed as a pretext task for the graph contrastive learning approach. The learned representations are then used for the downstream task, tactic prediction.
Given the following machine learning model name: ENIGMA, provide a description of the model
**ENIGMA** is an evaluation framework for dialog systems based on Pearson and Spearman correlations between the estimated rewards and the true rewards. ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during evaluation, making automatic evaluation feasible. More importantly, ENIGMA is model-free and agnostic to the behavior policies used for collecting the experience data, which significantly alleviates the technical difficulties of modeling complex dialogue environments and human behaviors.
Given the following machine learning model name: Stable Rank Normalization, provide a description of the model
**Stable Rank Normalization (SRN)** is a weight-normalization scheme which minimizes the stable rank of a linear operator. It simultaneously controls the Lipschitz constant and the stable rank of a linear operator. Stable rank is a softer version of the rank operator and is defined as the squared ratio of the Frobenius norm to the spectral norm.
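The stable rank itself is straightforward to compute (plain NumPy sketch; SRN would go further and normalize the weight matrix to control this quantity):

```python
import numpy as np

def stable_rank(W):
    """Stable rank: squared Frobenius norm over squared spectral norm.
    Always between 1 and rank(W), and robust to tiny singular values."""
    fro2 = np.sum(W ** 2)
    spec = np.linalg.norm(W, ord=2)     # largest singular value
    return fro2 / spec ** 2

# A rank-1 matrix has stable rank exactly 1.
W = np.outer([1.0, 2.0], [3.0, 4.0])
assert np.isclose(stable_rank(W), 1.0)
# The identity has stable rank equal to its dimension.
assert np.isclose(stable_rank(np.eye(3)), 3.0)
```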
Given the following machine learning model name: Bidirectional GAN, provide a description of the model
A **BiGAN**, or **Bidirectional GAN**, is a type of generative adversarial network where the generator not only maps latent samples to generated data, but also has an inverse mapping from data to the latent representation. The motivation is to make a type of GAN that can learn rich representations for use in applications like unsupervised learning. In addition to the generator $G$ from the standard [GAN](https://paperswithcode.com/method/gan) framework, BiGAN includes an encoder $E$ which maps data $\mathbf{x}$ to latent representations $\mathbf{z}$. The BiGAN discriminator $D$ discriminates not only in data space ($\mathbf{x}$ versus $G\left(\mathbf{z}\right)$), but jointly in data and latent space (tuples $\left(\mathbf{x}, E\left(\mathbf{x}\right)\right)$ versus $\left(G\left(z\right), z\right)$), where the latent component is either an encoder output $E\left(\mathbf{x}\right)$ or a generator input $\mathbf{z}$.
Given the following machine learning model name: Factorized Random Synthesized Attention, provide a description of the model
**Factorized Random Synthesized Attention**, introduced with the [Synthesizer](https://paperswithcode.com/method/synthesizer) architecture, is similar to [factorized dense synthesized attention](https://paperswithcode.com/method/factorized-dense-synthesized-attention) but for random synthesizers. Letting $R$ be a randomly initialized matrix, we factorize $R$ into low-rank matrices $R\_{1}, R\_{2} \in \mathbb{R}^{l\text{ x}k}$ in the attention function: $$ Y = \text{Softmax}\left(R\_{1}R\_{2}^{T}\right)G\left(X\right) . $$ Here $G\left(.\right)$ is a parameterized function that is equivalent to $V$ in [Scaled Dot-Product Attention](https://paperswithcode.com/method/scaled). For each head, the factorization reduces the parameter costs from $l^{2}$ to $2\left(lk\right)$ where $k \ll l$ and hence helps prevent overfitting. In practice, a small value of $k = 8$ is used. The basic idea of a Random Synthesizer is not to rely on pairwise token interactions or any information from individual tokens, but rather to learn a task-specific alignment that works well globally across many samples.
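A minimal sketch of the attention computation (plain NumPy; $G$ is passed in as a function and taken as the identity here, and all names and sizes are invented):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factorized_random_attention(X, R1, R2, G):
    """Y = Softmax(R1 R2^T) G(X): the attention map comes from learned
    low-rank matrices, not from token-token interactions."""
    return softmax(R1 @ R2.T) @ G(X)

l, k, d = 6, 2, 4                    # sequence length l, rank k << l
rng = np.random.default_rng(0)
R1, R2 = rng.normal(size=(l, k)), rng.normal(size=(l, k))
X = rng.normal(size=(l, d))
Y = factorized_random_attention(X, R1, R2, G=lambda x: x)  # identity G for the sketch
assert Y.shape == (l, d)
# Each attention row is a distribution over the l positions.
assert np.allclose(softmax(R1 @ R2.T).sum(axis=1), 1.0)
```

Note that the attention map is independent of the input $X$, which is precisely the "random" part of the synthesizer.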
Given the following machine learning model name: PAFPN, provide a description of the model
**PAFPN** is a feature pyramid module used in Path Aggregation networks ([PANet](https://paperswithcode.com/method/panet)) that combines FPNs with [bottom-up path augmentation](https://paperswithcode.com/method/bottom-up-path-augmentation), which shortens the information path between lower layers and topmost feature.
Given the following machine learning model name: Go-Explore, provide a description of the model
**Go-Explore** is a family of algorithms aiming to tackle two challenges with effective exploration in reinforcement learning: algorithms forgetting how to reach previously visited states ("detachment") and failing to first return to a state before exploring from it ("derailment"). To avoid detachment, Go-Explore builds an archive of the different states it has visited in the environment, thus ensuring that states cannot be forgotten. Starting with an archive containing only the initial state, the archive is built iteratively. In Go-Explore we: (a) Probabilistically select a state from the archive, preferring states associated with promising cells. (b) Return to the selected state, such as by restoring simulator state or by running a goal-conditioned policy. (c) Explore from that state by taking random actions or sampling from a trained policy. (d) Map every state encountered during returning and exploring to a low-dimensional cell representation. (e) Add states that map to new cells to the archive and update other archive entries.
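The loop (a)-(e) can be sketched on a toy deterministic environment (pure Python; the environment, cell mapping, uniform archive selection, and all names are invented simplifications, and exploration is plain random actions):

```python
import random

def cell_of(state):
    """Map a state to a coarse, low-dimensional cell (trivially the state here)."""
    return state

def go_explore(step, start, n_iters=200, seed=0):
    """Minimal Go-Explore archive loop: select an archived state, return to it
    by restoring it directly, explore with random actions, and archive every
    state that reaches a new cell."""
    rng = random.Random(seed)
    archive = {cell_of(start): start}
    for _ in range(n_iters):
        state = rng.choice(list(archive.values()))   # (a) select
        for _ in range(5):                           # (b) return + (c) explore
            state = step(state, rng.choice([-1, +1]))
            c = cell_of(state)                       # (d) map to a cell
            if c not in archive:                     # (e) archive new cells
                archive[c] = state
    return archive

# Toy 1-D chain: position moves by the action, clipped to [0, 10].
step = lambda s, a: min(max(s + a, 0), 10)
archive = go_explore(step, start=0)
assert len(archive) >= 2   # exploration discovers states beyond the start
```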
Given the following machine learning model name: Prioritized Experience Replay, provide a description of the model
**Prioritized Experience Replay** is a type of [experience replay](https://paperswithcode.com/method/experience-replay) in reinforcement learning where we more frequently replay transitions with high expected learning progress, as measured by the magnitude of their temporal-difference (TD) error. This prioritization can lead to a loss of diversity, which is alleviated with stochastic prioritization, and introduce bias, which can be corrected with importance sampling. The stochastic sampling method interpolates between pure greedy prioritization and uniform random sampling. The probability of being sampled is ensured to be monotonic in a transition's priority, while guaranteeing a non-zero probability even for the lowest-priority transition. Concretely, define the probability of sampling transition $i$ as $$P(i) = \frac{p_i^{\alpha}}{\sum_k p_k^{\alpha}}$$ where $p_i > 0$ is the priority of transition $i$. The exponent $\alpha$ determines how much prioritization is used, with $\alpha=0$ corresponding to the uniform case. Prioritized replay introduces bias because it changes this distribution in an uncontrolled fashion, and therefore changes the solution that the estimates will converge to. We can correct this bias by using importance-sampling (IS) weights: $$ w\_{i} = \left(\frac{1}{N}\cdot\frac{1}{P\left(i\right)}\right)^{\beta} $$ that fully compensates for the non-uniform probabilities $P\left(i\right)$ if $\beta = 1$. These weights can be folded into the [Q-learning](https://paperswithcode.com/method/q-learning) update by using $w\_{i}\delta\_{i}$ instead of $\delta\_{i}$ - weighted IS rather than ordinary IS. For stability reasons, we always normalize weights by $1/\max\_{i}w\_{i}$ so that they only scale the update downwards. 
The two types of prioritization are proportional-based, where $p\_{i} = |\delta\_{i}| + \epsilon$, and rank-based, where $p\_{i} = \frac{1}{\text{rank}\left(i\right)}$ and $\text{rank}\left(i\right)$ is the rank of transition $i$ when the replay memory is sorted according to $|\delta\_{i}|$. For the proportional variant, the hyperparameters used were $\alpha = 0.7$, $\beta\_{0} = 0.5$; for the rank-based variant, $\alpha = 0.6$, $\beta\_{0} = 0.4$.
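A sketch of the proportional variant's sampling probabilities and IS weights (plain NumPy; function names and the example TD errors are invented):

```python
import numpy as np

def priorities_to_probs(td_errors, alpha=0.7, eps=1e-2):
    """Proportional prioritization: p_i = |delta_i| + eps, P(i) ∝ p_i^alpha."""
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

def is_weights(probs, beta=0.5):
    """Importance-sampling weights w_i = (1 / (N * P(i)))^beta,
    normalized by the max so they only scale updates downwards."""
    w = (1.0 / (len(probs) * probs)) ** beta
    return w / w.max()

delta = np.array([2.0, 1.0, 0.1, 0.1])
P = priorities_to_probs(delta)
w = is_weights(P)
assert np.isclose(P.sum(), 1.0)
assert P[0] > P[2]                     # larger TD error -> sampled more often
assert w[0] < w[2] and w.max() == 1.0  # ...and down-weighted in the update
```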
Given the following machine learning model name: Siamese Network, provide a description of the model
A **Siamese Network** consists of twin networks which accept distinct inputs but are joined by an energy function at the top. This function computes a metric between the highest-level feature representation on each side. The parameters between the twin networks are tied. [Weight tying](https://paperswithcode.com/method/weight-tying) guarantees that two extremely similar images are not mapped by each network to very different locations in feature space, because each network computes the same function. The network is symmetric, so that whenever we present two distinct images to the twin networks, the top conjoining layer will compute the same metric as if we were to present the same two images to the opposite twins. Intuitively, instead of trying to classify inputs, a siamese network learns to differentiate between inputs, learning their similarity. The loss function used is usually a form of contrastive loss. Source: [Koch et al](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf)
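A minimal sketch of weight tying and the energy function (plain NumPy; the single-layer embedding and L1 energy are illustrative choices, not the original architecture):

```python
import numpy as np

def embed(x, W):
    """Shared embedding: both twins apply the exact same weights W."""
    return np.tanh(W @ x)

def siamese_energy(x1, x2, W):
    """Energy between the twins' top-level features (L1 distance here)."""
    return float(np.sum(np.abs(embed(x1, W) - embed(x2, W))))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
a, b = rng.normal(size=4), rng.normal(size=4)
# Weight tying makes the metric symmetric: swapping inputs gives the same energy.
assert siamese_energy(a, b, W) == siamese_energy(b, a, W)
# Identical inputs map to identical features, so the energy is zero.
assert siamese_energy(a, a, W) == 0.0
```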
Given the following machine learning model name: Deep Residual Pansharpening Neural Network, provide a description of the model
In the field of fusing multi-spectral and panchromatic images (pan-sharpening), the impressive effectiveness of deep neural networks has recently been employed to overcome the drawbacks of traditional linear models and boost fusing accuracy. However, to the best of our knowledge, existing research works are mainly based on simple and flat networks with relatively shallow architectures, which severely limits their performance. In this paper, the concept of residual learning is introduced to form a very deep convolutional neural network that makes full use of the high non-linearity of deep learning models. Both quantitative and visual assessments on a large number of high-quality multi-spectral images from various sources support that the proposed model is superior to all mainstream algorithms included in the comparison, achieving the highest spatial-spectral unified accuracy.
Given the following machine learning model name: PipeDream-2BW, provide a description of the model
**PipeDream-2BW** is an asynchronous pipeline parallel method that supports memory-efficient pipeline parallelism, a hybrid form of parallelism that combines data and model parallelism with input pipelining. PipeDream-2BW uses a novel pipelining and weight gradient coalescing strategy, combined with the double buffering of weights, to ensure high throughput, low memory footprint, and weight update semantics similar to data parallelism. In addition, PipeDream-2BW automatically partitions the model over the available hardware resources, while respecting hardware constraints such as memory capacities of accelerators, and topologies and bandwidths of interconnects. PipeDream-2BW also determines when to employ existing memory-savings techniques, such as activation recomputation, that trade off extra computation for lower memory footprint. Its two main features, a double-buffered weight update (2BW) and a flush mechanism, ensure high throughput. PipeDream-2BW splits models into stages over multiple workers, and each stage is replicated an equal number of times (with data-parallel updates across replicas of the same stage). Such parallel pipelines work well for models where each layer is repeated a fixed number of times (e.g., [transformer](https://paperswithcode.com/method/transformer) models).
Given the following machine learning model name: XLSR, provide a description of the model
**XLSR** is a multilingual speech recognition model built on wav2vec 2.0, which is trained by solving a contrastive task over masked latent speech representations while jointly learning a quantization of the latents shared across languages. The model is fine-tuned on labeled data, and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. A shared quantization module over feature encoder representations produces multilingual quantized speech units whose embeddings are then used as targets for a [Transformer](https://paperswithcode.com/method/transformer) trained by contrastive learning. The model learns to share discrete tokens across languages, creating bridges across languages.
Given the following machine learning model name: Prescribed Generative Adversarial Network, provide a description of the model
**Prescribed GANs** add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data distribution. Fitting PresGANs involves computing the intractable gradients of the [entropy regularization](https://paperswithcode.com/method/entropy-regularization) term; PresGANs sidestep this intractability using unbiased stochastic estimates.
Given the following machine learning model name: Closed-loop Weighted Empirical Risk Minimization, provide a description of the model
A closed-loop evaluation procedure is first used in a simulator to identify training data samples that are important for practical driving performance; these samples are then used to help debias the policy network.
Given the following machine learning model name: Feature Pyramid Network, provide a description of the model
A **Feature Pyramid Network**, or **FPN**, is a feature extractor that takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps at multiple levels, in a fully convolutional fashion. This process is independent of the backbone convolutional architectures. It therefore acts as a generic solution for building feature pyramids inside deep convolutional networks to be used in tasks like object detection. The construction of the pyramid involves a bottom-up pathway and a top-down pathway. The bottom-up pathway is the feedforward computation of the backbone ConvNet, which computes a feature hierarchy consisting of feature maps at several scales with a scaling step of 2. For the feature pyramid, one pyramid level is defined for each stage. The output of the last layer of each stage is used as a reference set of feature maps. For [ResNets](https://paperswithcode.com/method/resnet) we use the feature activations output by each stage’s last [residual block](https://paperswithcode.com/method/residual-block). The top-down pathway hallucinates higher resolution features by upsampling spatially coarser, but semantically stronger, feature maps from higher pyramid levels. These features are then enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges feature maps of the same spatial size from the bottom-up pathway and the top-down pathway. The bottom-up feature map is of lower-level semantics, but its activations are more accurately localized as it was subsampled fewer times.
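The top-down pathway and lateral merging described above can be sketched in NumPy, using nearest-neighbour upsampling and plain matrix multiplications as stand-ins for the 1x1 lateral convolutions; the channel counts and spatial sizes below are purely illustrative.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_top_down(bottom_up, lateral_proj):
    """Top-down pathway sketch: start from the coarsest map, upsample by 2 at
    each level, and merge (elementwise add) with the projected bottom-up map.
    `lateral_proj[i]` is a (d, C_i) matrix standing in for the 1x1 conv."""
    laterals = [np.einsum('dc,chw->dhw', W, f)
                for W, f in zip(lateral_proj, bottom_up)]
    out = [laterals[-1]]                       # coarsest pyramid level
    for lat in reversed(laterals[:-1]):
        out.append(upsample2x(out[-1]) + lat)  # lateral connection merge
    return out[::-1]                           # finest-to-coarsest order

rng = np.random.default_rng(0)
# bottom-up feature maps C3, C4, C5 (channels double, resolution halves)
c3 = rng.normal(size=(8, 8, 8))
c4 = rng.normal(size=(16, 4, 4))
c5 = rng.normal(size=(32, 2, 2))
# lateral projections to a common channel dimension d = 4
w3, w4, w5 = (rng.normal(size=(4, 8)), rng.normal(size=(4, 16)),
              rng.normal(size=(4, 32)))
p3, p4, p5 = fpn_top_down([c3, c4, c5], [w3, w4, w5])
```

Note that every output level shares the same channel dimension while keeping its own spatial resolution, which is what lets downstream heads operate uniformly across scales.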
Given the following machine learning model name: Accumulating Eligibility Trace, provide a description of the model
An **Accumulating Eligibility Trace** is a type of [eligibility trace](https://paperswithcode.com/method/eligibility-trace) where the trace increments in an accumulative way. For the memory vector $\textbf{e}\_{t} \in \mathbb{R}^{b}$ with $\textbf{e}\_{t} \geq \textbf{0}$: $$\mathbf{e\_{0}} = \textbf{0}$$ $$\textbf{e}\_{t} = \nabla{\hat{v}}\left(S\_{t}, \mathbf{\theta}\_{t}\right) + \gamma\lambda\textbf{e}\_{t-1}$$
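The recursion above is straightforward to sketch; the gradient vector below is an assumed constant stand-in for $\nabla\hat{v}(S_t, \theta_t)$, and $\gamma$, $\lambda$ are toy values.

```python
import numpy as np

def accumulating_trace_update(trace, grad_v, gamma, lam):
    # e_t = gamma * lambda * e_{t-1} + gradient of v_hat at (S_t, theta_t)
    return gamma * lam * trace + grad_v

e = np.zeros(3)                   # e_0 = 0
g = np.array([1.0, 0.0, 2.0])     # stand-in gradient, assumed constant here
e = accumulating_trace_update(e, g, gamma=0.9, lam=0.8)  # e_1 = g
e = accumulating_trace_update(e, g, gamma=0.9, lam=0.8)  # e_2 = 0.72*g + g
```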
Given the following machine learning model name: classifier-guidance, provide a description of the model
Given the following machine learning model name: mBART, provide a description of the model
**mBART** is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the [BART objective](https://paperswithcode.com/method/bart). The input texts are noised by masking phrases and permuting sentences, and a single [Transformer model](https://paperswithcode.com/method/transformer) is learned to recover the texts. Different from other pre-training approaches for machine translation, mBART pre-trains a complete autoregressive [Seq2Seq](https://paperswithcode.com/method/seq2seq) model. mBART is trained once for all languages, providing a set of parameters that can be fine-tuned for any of the language pairs in both supervised and unsupervised settings, without any task-specific or language-specific modifications or initialization schemes.
Given the following machine learning model name: SC-GPT, provide a description of the model
**SC-GPT** is a multi-layer [Transformer](http://paperswithcode.com/method/transformer) neural language model, trained in three steps: (i) Pre-trained on plain text, similar to [GPT-2](http://paperswithcode.com/method/gpt-2); (ii) Continuously pretrained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; (iii) Fine-tuned for a target domain using very limited amounts of domain labels. Unlike [GPT-2](http://paperswithcode.com/method/gpt-2), SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-[LSTM](https://paperswithcode.com/method/lstm) but requiring much less domain labels to generalize to new domains. It is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability, and fine-tuned with only a few domain-specific labels to adapt to new domains.
Given the following machine learning model name: HyperNetwork, provide a description of the model
A **HyperNetwork** is a network that generates weights for a main network. The behavior of the main network is the same as that of any usual neural network: it learns to map some raw inputs to their desired targets; whereas the hypernetwork takes a set of inputs that contain information about the structure of the weights and generates the weights for that layer.
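A minimal sketch of the idea: a linear hypernetwork maps a small layer embedding to the full weight matrix of one layer of the main network. The embedding size, layer dimensions, and the choice of a purely linear hypernetwork are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

in_dim, out_dim, z_dim = 4, 3, 2
# hypernetwork parameters: map a z_dim embedding to out_dim*in_dim weights
H = rng.normal(size=(out_dim * in_dim, z_dim)) * 0.1
z = rng.normal(size=z_dim)            # embedding describing the target layer

W = (H @ z).reshape(out_dim, in_dim)  # weights generated for the main layer

x = rng.normal(size=in_dim)
y = W @ x                             # main-network forward pass
```

In training, gradients would flow through `W` back into `H` and `z`, so only the (much smaller) hypernetwork parameters are learned directly.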
Given the following machine learning model name: Proximity Regularization, provide a description of the model
Given the following machine learning model name: LR-Net, provide a description of the model
An **LR-Net** is a type of non-convolutional neural network that utilises local relation layers instead of convolutions for image feature extraction. Otherwise, the architecture follows the same design as a [ResNet](https://paperswithcode.com/method/resnet).
Given the following machine learning model name: Contextual Graph Markov Model, provide a description of the model
Contextual Graph Markov Model (CGMM) is an approach combining ideas from generative models and neural networks for the processing of graph data. It is founded on a constructive methodology that builds a deep architecture comprising layers of probabilistic models which learn to encode the structured information in an incremental fashion. Context is diffused in an efficient and scalable way across the graph vertices and edges. The resulting graph encoding is used in combination with discriminative models to address structure classification benchmarks. Description and image from: [Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing](https://arxiv.org/pdf/1805.10636.pdf)
Given the following machine learning model name: Elastic Dense Block, provide a description of the model
**Elastic Dense Block** is a skip connection block that modifies the [Dense Block](https://paperswithcode.com/method/dense-block) with downsamplings and upsamplings in parallel branches at each layer to let the network learn from a data scaling policy in which inputs are processed at different resolutions in each layer. It is called "elastic" because each layer in the network is flexible in terms of choosing the best scale by a soft policy.
Given the following machine learning model name: Internet Explorer, provide a description of the model
Internet Explorer explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desired target dataset. It cycles between searching for images on the Internet with text queries, self-supervised training on downloaded images, determining which images were useful, and prioritizing what to search for next.
Given the following machine learning model name: WordPiece, provide a description of the model
**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is: 1. Initialize the word unit inventory with all the characters in the text. 2. Build a language model on the training data using the inventory from 1. 3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model. 4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold. Text: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944) Image: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)
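Once the vocabulary has been built by the procedure above, it is typically applied at inference time with greedy longest-match segmentation, as in BERT's tokenizer (continuation pieces carry a `##` prefix). The sketch below assumes a toy vocabulary.

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-match segmentation with a fixed WordPiece vocabulary.
    Pieces that do not start the word are looked up with a '##' prefix."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:          # longest piece in vocab wins
                piece = sub
                break
            end -= 1
        if piece is None:             # no piece matches: emit unknown token
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

vocab = {"un", "aff", "##aff", "##able", "##ab"}   # toy vocabulary
pieces = wordpiece_tokenize("unaffable", vocab)     # ['un', '##aff', '##able']
```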
Given the following machine learning model name: Wavelet Distributed Training, provide a description of the model
**Wavelet** is an asynchronous data parallel approach that interleaves waves of training tasks on the same group of GPUs, so that tasks belonging to one wave can leverage on-device memory from tasks in another wave during their memory valley period, thus boosting training throughput. As shown in the Figure, Wavelet divides data-parallel training tasks into two waves, namely tick-wave and tock-wave. The task launching offset is achieved by delaying the launch time of tock-wave tasks by half of a whole forward-backward training cycle. Therefore, the tock-wave tasks can directly leverage the GPU memory valley period of tick-wave tasks (e.g. 0.4s-0.6s in Figure 2(a)), since backward propagation of tick-wave tasks is compute-heavy but memory is often unused. Similarly, tick-wave tasks can leverage the memory valley period of tock-wave tasks in the same way.
Given the following machine learning model name: Transformer in Transformer, provide a description of the model
[Transformer](https://paperswithcode.com/method/transformer) is a type of self-attention-based neural network originally applied to NLP tasks. Recently, pure transformer-based models have been proposed to solve computer vision problems. These visual transformers usually view an image as a sequence of patches, while they ignore the intrinsic structure information inside each patch. In this paper, we propose a novel Transformer-iN-Transformer (TNT) model for modeling both patch-level and pixel-level representations. In each TNT block, an outer transformer block is utilized to process patch embeddings, and an inner transformer block extracts local features from pixel embeddings. The pixel-level feature is projected to the space of patch embeddings by a linear transformation layer and then added into the patch embedding. By stacking the TNT blocks, we build the TNT model for image recognition. Image source: [Han et al.](https://arxiv.org/pdf/2103.00112v1.pdf)
Given the following machine learning model name: ConvBERT, provide a description of the model
**ConvBERT** is a modification on the [BERT](https://paperswithcode.com/method/bert) architecture which uses a [span-based dynamic convolution](https://paperswithcode.com/method/span-based-dynamic-convolution) to replace self-attention heads to directly model local dependencies. Specifically a new [mixed attention module](https://paperswithcode.com/method/mixed-attention-block) replaces the [self-attention modules](https://paperswithcode.com/method/scaled) in BERT, which leverages the advantages of [convolution](https://paperswithcode.com/method/convolution) to better capture local dependency. Additionally, a new span-based dynamic convolution operation is used to utilize multiple input tokens to dynamically generate the convolution kernel. Lastly, ConvBERT also incorporates some new model designs including the bottleneck attention and grouped linear operator for the feed-forward module (reducing the number of parameters).
Given the following machine learning model name: EdgeFlow, provide a description of the model
**EdgeFlow** is an interactive segmentation architecture that fully utilizes interactive information of user clicks with edge-guided flow. Edge guidance is the idea that interactive segmentation improves segmentation masks progressively with user clicks. Based on user clicks, an edge mask scheme is used, which takes the object edges estimated from the previous iteration as prior information, instead of direct mask estimation (if the previous mask is used as input, poor segmentation results could result). The architecture consists of a coarse-to-fine network including CoarseNet and FineNet. For CoarseNet, [HRNet](https://paperswithcode.com/method/hrnet)-18+OCR is utilized as the base segmentation model and the edge-guided flow is appended to deal with interactive information. For FineNet, three [atrous convolution](https://paperswithcode.com/method/dilated-convolution) blocks are utilized to refine the coarse masks.
Given the following machine learning model name: Efficient Channel Attention, provide a description of the model
**Efficient Channel Attention** is an architectural unit based on [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) blocks that reduces model complexity without dimensionality reduction. It was proposed as part of the [ECA-Net](https://paperswithcode.com/method/eca-net) CNN architecture. After channel-wise [global average pooling](https://paperswithcode.com/method/global-average-pooling) without dimensionality reduction, the ECA captures local cross-channel interaction by considering every channel and its $k$ neighbors. The ECA can be efficiently implemented by fast $1D$ [convolution](https://paperswithcode.com/method/convolution) of size $k$, where kernel size $k$ represents the coverage of local cross-channel interaction, i.e., how many neighbors participate in attention prediction of one channel.
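The pooling, size-$k$ 1D convolution, and gating steps can be sketched in NumPy. The shared convolution weights below are a toy uniform average; in the real module they are learned, and the feature sizes are illustrative.

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient Channel Attention sketch: global average pool over space,
    1D conv of size k across channels (no dimensionality reduction), then
    sigmoid gating of the original feature map."""
    C, H, W = feature_map.shape
    y = feature_map.mean(axis=(1, 2))           # channel descriptor, shape (C,)
    kernel = np.full(k, 1.0 / k)                # toy shared 1D conv weights
    pad = k // 2
    y_pad = np.pad(y, pad, mode="edge")
    conv = np.array([y_pad[i:i + k] @ kernel for i in range(C)])
    attn = 1.0 / (1.0 + np.exp(-conv))          # sigmoid gate per channel
    return feature_map * attn[:, None, None]

x = np.random.default_rng(0).normal(size=(8, 4, 4))
out = eca(x)
```

Each channel's gate depends only on itself and its $k-1$ neighbours, which is what keeps the interaction local and the parameter count tiny ($k$ weights) compared with a squeeze-and-excitation bottleneck.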
Given the following machine learning model name: Invertible NxN Convolution, provide a description of the model
Given the following machine learning model name: Approximate Bayesian Computation, provide a description of the model
Class of methods in Bayesian statistics where the posterior distribution is approximated via a rejection scheme over simulations, because the likelihood function is intractable. Different parameter values are sampled and used to simulate data. A distance function then measures the quality of each simulation compared to data from real observations; only simulations whose distance falls below a certain threshold are accepted. Image source: [Kulkarni et al.](https://www.umass.edu/nanofabrics/sites/default/files/PDF_0.pdf)
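The sample-simulate-compare-accept loop can be sketched as plain rejection ABC; the toy problem below (inferring a Gaussian mean from a sample mean, with assumed prior range, tolerance, and sample sizes) is purely illustrative.

```python
import numpy as np

def abc_rejection(observed_stat, prior_sample, simulate, distance,
                  eps, n=1000, rng=None):
    """ABC rejection sketch: draw theta from the prior, simulate data, and
    keep theta whenever the distance to the observed statistic is below eps."""
    rng = rng or np.random.default_rng()
    accepted = []
    for _ in range(n):
        theta = prior_sample(rng)
        sim = simulate(theta, rng)
        if distance(sim, observed_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# toy example: infer the mean of a Gaussian from its observed sample mean
obs_mean = 2.0
post = abc_rejection(
    obs_mean,
    prior_sample=lambda rng: rng.uniform(-5, 5),
    simulate=lambda th, rng: rng.normal(th, 1.0, size=50).mean(),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
    n=2000,
    rng=np.random.default_rng(0),
)
```

The accepted parameter values form an approximate posterior sample; shrinking `eps` tightens the approximation at the cost of a lower acceptance rate.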
Given the following machine learning model name: Positional Encoding Generator, provide a description of the model
**Positional Encoding Generator**, or **PEG**, is a module used in [Conditional Positional Encoding](https://paperswithcode.com/method/conditional-positional-encoding) position embeddings. It dynamically produces the positional encodings conditioned on the local neighborhood of an input token. To condition on the local neighbors, the flattened input sequence $X \in \mathbb{R}^{B \times N \times C}$ of DeiT is first reshaped back to $X^{\prime} \in \mathbb{R}^{B \times H \times W \times C}$ in the 2-D image space. Then, a function (denoted by $\mathcal{F}$ in the Figure) is repeatedly applied to the local patch in $X^{\prime}$ to produce the conditional positional encodings $E \in \mathbb{R}^{B \times H \times W \times C}$. PEG can be efficiently implemented with a 2-D convolution with kernel $k$ $(k \geq 3)$ and $\frac{k-1}{2}$ zero paddings. Note that the zero paddings here are important to make the model aware of the absolute positions, and $\mathcal{F}$ can be of various forms such as separable convolutions and many others.
Given the following machine learning model name: Hierarchical Softmax, provide a description of the model
**Hierarchical Softmax** is an alternative to [softmax](https://paperswithcode.com/method/softmax) that is faster to evaluate: it is $O\left(\log{n}\right)$ time to evaluate compared to $O\left(n\right)$ for softmax. It utilises a multi-layer binary tree, where the probability of a word is calculated through the product of probabilities on each edge on the path to that node. See the Figure to the right for an example of where the product calculation would occur for the word "I'm". (Introduced by Morin and Bengio) Image Credit: [Steven Schmatz](https://www.quora.com/profile/Steven-Schmatz)
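The product-of-edge-probabilities computation can be sketched for a toy depth-2 tree; the per-node parameter vectors and the sign convention for left/right branching below are illustrative modeling choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hsoftmax_prob(h, path_nodes, path_signs):
    # P(word | h) = product over internal nodes on the path of
    # sigmoid(sign * v_node . h); the sign encodes the branch taken
    p = 1.0
    for v, s in zip(path_nodes, path_signs):
        p *= sigmoid(s * (v @ h))
    return p

h = np.array([0.5, -0.2])       # hidden representation (toy values)
root = np.array([1.0, 0.0])     # parameter vector of the root node (toy)
left = np.array([0.0, 1.0])     # left child's parameters
right = np.array([0.3, 0.7])    # right child's parameters

p_word = hsoftmax_prob(h, [root, left], [+1, -1])

# sanity check: the four leaf probabilities of this depth-2 tree sum to 1
total = sum(hsoftmax_prob(h, [root, left if s1 > 0 else right], [s1, s2])
            for s1 in (+1, -1) for s2 in (+1, -1))
```

Because $\sigma(x) + \sigma(-x) = 1$ at every internal node, the leaf probabilities always form a valid distribution, while evaluating one word touches only $O(\log n)$ nodes.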
Given the following machine learning model name: Differentiable Neural Architecture Search, provide a description of the model
**DNAS**, or **Differentiable Neural Architecture Search**, uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. DNAS allows us to explore a layer-wise search space where we can choose a different block for each layer of the network. DNAS represents the search space by a super net whose operators execute stochastically. It relaxes the problem of finding the optimal architecture to find a distribution that yields the optimal architecture. By using the [Gumbel Softmax](https://paperswithcode.com/method/gumbel-softmax) technique, it is possible to directly train the architecture distribution using gradient-based optimization such as [SGD](https://paperswithcode.com/method/sgd). The loss used to train the stochastic super net consists of both the cross-entropy loss that leads to better accuracy and the latency loss that penalizes the network's latency on a target device. To estimate the latency of an architecture, the latency of each operator in the search space is measured and a lookup table model is used to compute the overall latency by adding up the latency of each operator. Using this model allows for estimation of the latency of architectures in an enormous search space. More importantly, it makes the latency differentiable with respect to layer-wise block choices.
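The Gumbel Softmax relaxation and the lookup-table latency estimate can be sketched as below; the logits, temperature, and per-block latencies are illustrative numbers, not values from the paper.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of sampling one block per layer:
    returns soft one-hot weights over the candidate blocks."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                               # stable softmax
    return y / y.sum()

# architecture parameters for 3 candidate blocks in one layer (toy values)
logits = np.array([0.2, 1.5, -0.3])
# measured per-operator latencies from the lookup table (toy values, ms)
latency_table = np.array([4.0, 9.0, 2.5])

w = gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0))
expected_latency = w @ latency_table   # differentiable layer latency estimate
```

Because `expected_latency` is a weighted sum of table entries, it is differentiable with respect to the architecture logits, which is exactly what allows the latency penalty to be trained jointly with the cross-entropy loss.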
Given the following machine learning model name: Animatable Reconstruction of Clothed Humans, provide a description of the model
**Animatable Reconstruction of Clothed Humans** is an end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. A Semantic Space and a Semantic Deformation Field are created using a parametric 3D body estimator. They allow the transformation of 2D/3D clothed humans into a canonical space, reducing ambiguities in geometry caused by pose variations and occlusions in training data. Detailed surface geometry and appearance are learned using an implicit function representation with spatial local features.
Given the following machine learning model name: VQ-VAE-2, provide a description of the model
**VQ-VAE-2** is a type of variational autoencoder that combines a two-level hierarchical VQ-[VAE](https://paperswithcode.com/method/vae) with a self-attention autoregressive model ([PixelCNN](https://paperswithcode.com/method/pixelcnn)) as a prior. The encoder and decoder architectures are kept simple and light-weight as in the original [VQ-VAE](https://paperswithcode.com/method/vq-vae), with the only difference that hierarchical multi-scale latent maps are used for increased resolution.
Given the following machine learning model name: Levenshtein Transformer, provide a description of the model
The **Levenshtein Transformer** (LevT) is a type of [transformer](https://paperswithcode.com/method/transformer) that aims to address the lack of flexibility of previous decoding models. Notably, in previous frameworks, the length of generated sequences is either fixed or monotonically increased as the decoding proceeds. The authors argue this is incompatible with human-level intelligence, where humans can revise, replace, revoke or delete any part of their generated text. Hence, LevT is proposed to bridge this gap by breaking the hitherto standardized decoding mechanism and replacing it with two basic operations — insertion and deletion. LevT is trained using imitation learning. The resulting model contains two policies which are executed in an alternate manner. The authors argue that with this model decoding becomes more flexible. For example, when the decoder is given an empty token, it falls back to a normal sequence generation model. On the other hand, the decoder acts as a refinement model when the initial state is a low-quality generated sequence. One crucial component in the LevT framework is the learning algorithm. The authors leverage the characteristics of insertion and deletion — they are complementary but also adversarial. The algorithm they propose is called "dual policy learning". The idea is that when training one policy (insertion or deletion), we use the output from its adversary at the previous iteration as input. An expert policy, on the other hand, is drawn to provide a correction signal.
Given the following machine learning model name: Contour Stochastic Gradient Langevin Dynamics, provide a description of the model
Simulations of multi-modal distributions can be very costly and often lead to unreliable predictions. To accelerate the computations, we propose to sample from a flattened distribution, and to estimate the importance weights between the original distribution and the flattened distribution to ensure the correctness of the resulting distribution.
Given the following machine learning model name: Voxel R-CNN, provide a description of the model
**Voxel R-CNN** is a voxel-based two stage framework for 3D object detection. It consists of a 3D backbone network, a 2D bird-eye-view (BEV) Region Proposal Network and a detect head. Voxel RoI Pooling is devised to extract RoI features directly from raw features for further refinement. End-to-end, the point clouds are first divided into regular voxels and fed into the 3D backbone network for feature extraction. Then, the 3D feature volumes are converted into BEV representation, on which the 2D backbone and [RPN](https://paperswithcode.com/method/rpn) are applied for region proposal generation. Subsequently, [Voxel RoI Pooling](https://paperswithcode.com/method/voxel-roi-pooling) directly extracts RoI features from the 3D feature volumes. Finally the RoI features are exploited in the detect head for further box refinement.
Given the following machine learning model name: Adaptive Richard's Curve Weighted Activation, provide a description of the model
This work introduces a novel activation unit that can be efficiently employed in deep neural nets (DNNs) and performs significantly better than the traditional Rectified Linear Units ([ReLU](https://paperswithcode.com/method/relu)). The function developed is a two-parameter version of the specialized Richard's Curve, which we call the Adaptive Richard's Curve weighted Activation (ARiA). This function is non-monotonous, analogous to the newly introduced [Swish](https://paperswithcode.com/method/swish), but allows precise control over its non-monotonous convexity by varying the hyper-parameters. We first demonstrate the mathematical significance of the two-parameter ARiA, followed by its application to benchmark problems such as MNIST, CIFAR-10 and CIFAR-100, where we compare the performance with ReLU and Swish units. Our results illustrate a significantly superior performance on all these datasets, making ARiA a potential replacement for ReLU and other activations in DNNs.
Given the following machine learning model name: FeatureNMS, provide a description of the model
**Feature Non-Maximum Suppression**, or **FeatureNMS**, is a post-processing step for object detection models that removes duplicates where there are multiple detections outputted per object. FeatureNMS recognizes duplicates not only based on the intersection over union between the bounding boxes, but also based on the difference of feature vectors. These feature vectors can encode more information like visual appearance.
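A simplified greedy sketch of the idea: a detection is suppressed only when it both overlaps a kept box and has a nearby embedding. The L2 distance test and both thresholds below are illustrative simplifications, not the exact decision rule from the paper.

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def feature_nms(boxes, scores, feats, iou_thr=0.5, feat_thr=0.5):
    """Greedy NMS that suppresses box i only if it overlaps a kept box j
    (IoU > iou_thr) AND its embedding is close to j's (L2 < feat_thr)."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    for i in order:
        dup = any(iou(boxes[i], boxes[j]) > iou_thr and
                  np.linalg.norm(feats[i] - feats[j]) < feat_thr
                  for j in keep)
        if not dup:
            keep.append(i)
    return keep

# two boxes on the same object (close feats) + one on a different, occluding
# object at almost the same location (distant feats)
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [1, 1, 10, 10.0]])
scores = np.array([0.9, 0.8, 0.7])
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
kept = feature_nms(boxes, scores, feats)      # third box survives
```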
Given the following machine learning model name: Octave Convolution, provide a description of the model
An **Octave Convolution (OctConv)** stores and processes feature maps that vary spatially "slower" at a lower spatial resolution, reducing both memory and computation cost. It takes in feature maps containing tensors of two frequencies one octave apart, and extracts information directly from the low-frequency maps without the need to decode them back to the high frequency. The motivation is that in natural images, information is conveyed at different frequencies, where higher frequencies are usually encoded with fine details and lower frequencies are usually encoded with global structures.
Given the following machine learning model name: Fishr, provide a description of the model
**Fishr** is a learning scheme to enforce domain invariance in the space of the gradients of the loss function: specifically, it introduces a regularization term that matches the domain-level variances of gradients across training domains. Critically, the strategy exhibits close relations with the Fisher Information and the Hessian of the loss. Forcing domain-level gradient covariances to be similar during the learning procedure eventually aligns the domain-level loss landscapes locally around the final weights.
Given the following machine learning model name: Context Enhancement Module, provide a description of the model
**Context Enhancement Module (CEM)** is a feature extraction module used in object detection (specifically, [ThunderNet](https://paperswithcode.com/method/thundernet)) which aims to enlarge the receptive field. The key idea of CEM is to aggregate multi-scale local context information and global context information to generate more discriminative features. In CEM, the feature maps from three scales are merged: $C\_{4}$, $C\_{5}$ and $C\_{glb}$. $C\_{glb}$ is the global context feature vector obtained by applying a [global average pooling](https://paperswithcode.com/method/global-average-pooling) on $C\_{5}$. We then apply a 1 × 1 [convolution](https://paperswithcode.com/method/convolution) on each feature map to squeeze the number of channels to $\alpha \times p \times p = 245$. Afterwards, $C\_{5}$ is upsampled by 2× and $C\_{glb}$ is broadcast so that the spatial dimensions of the three feature maps are equal. At last, the three generated feature maps are aggregated. By leveraging both local and global context, CEM effectively enlarges the receptive field and refines the representation ability of the thin feature map. Compared with prior [FPN](https://paperswithcode.com/method/fpn) structures, CEM involves only two 1×1 convolutions and a fc layer.
Given the following machine learning model name: Rectified Linear Units, provide a description of the model
**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero. $$ f\left(x\right) = \max\left(0, x\right) $$
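The function is a one-liner; applied element-wise it zeroes every negative entry and passes positives through unchanged.

```python
import numpy as np

def relu(x):
    """f(x) = max(0, x), applied element-wise."""
    return np.maximum(0, x)

out = relu(np.array([-2.0, -0.5, 0.0, 3.0]))  # -> [0., 0., 0., 3.]
```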
Given the following machine learning model name: DU-GAN, provide a description of the model
**DU-GAN** is a [generative adversarial network](https://www.paperswithcode.com/methods/category/generative-adversarial-networks) for LDCT denoising in medical imaging. The generator produces denoised LDCT images, and two independent branches with [U-Net](https://paperswithcode.com/method/u-net) based discriminators operate in the image and gradient domains. The U-Net based discriminator provides both global structure and local per-pixel feedback to the generator. Furthermore, the image discriminator encourages the generator to produce photo-realistic CT images, while the gradient discriminator is utilized for better edges and for alleviating streak artifacts caused by photon starvation.
Given the following machine learning model name: Deformable Convolution, provide a description of the model
**Deformable convolutions** add 2D offsets to the regular grid sampling locations in the standard [convolution](https://paperswithcode.com/method/convolution). It enables free form deformation of the sampling grid. The offsets are learned from the preceding feature maps, via additional convolutional layers. Thus, the deformation is conditioned on the input features in a local, dense, and adaptive manner.
Given the following machine learning model name: PSANet, provide a description of the model
**PSANet** is a semantic segmentation architecture that utilizes a [Point-wise Spatial Attention](https://paperswithcode.com/method/point-wise-spatial-attention) (PSA) module to aggregate long-range contextual information in a flexible and adaptive manner. Each position in the feature map is connected with all other ones through self-adaptively predicted attention maps, thus harvesting various information nearby and far away. Furthermore, the authors design the bi-directional information propagation path for a comprehensive understanding of complex scenes. Each position collects information from all others to help the prediction of itself and vice versa, the information at each position can be distributed globally, assisting the prediction of all other positions. Finally, the bi-directionally aggregated contextual information is fused with local features to form the final representation of complex scenes. The authors use [ResNet](https://paperswithcode.com/method/resnet) as an [FCN](https://paperswithcode.com/method/fcn) backbone for PSANet, as the Figure to the right illustrates. The proposed PSA module is then used to aggregate long-range contextual information from the local representation. It follows stage-5 in ResNet, which is the final stage of the FCN backbone. Features in stage-5 are semantically stronger. Aggregating them together leads to a more comprehensive representation of long-range context. Moreover, the spatial size of the feature map at stage-5 is smaller and can reduce computation overhead and memory consumption. An auxiliary loss branch is applied apart from the main loss.
Given the following machine learning model name: Longformer, provide a description of the model
**Longformer** is a modified [Transformer](https://paperswithcode.com/method/transformer) architecture. Traditional [Transformer-based models](https://paperswithcode.com/methods/category/transformers) are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this, **Longformer** uses an attention pattern that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. The attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. The attention patterns utilised include: [sliding window attention](https://paperswithcode.com/method/sliding-window-attention), [dilated sliding window attention](https://paperswithcode.com/method/dilated-sliding-window-attention) and global + sliding window. These can be viewed in the components section of this page.
Given the following machine learning model name: Stein Variational Policy Gradient, provide a description of the model
**Stein Variational Policy Gradient**, or **SVPG**, is a policy gradient based method in reinforcement learning that uses Stein Variational Gradient Descent to allow simultaneous exploitation and exploration of multiple policies. Unlike traditional policy optimization, which attempts to learn a single policy, SVPG models a distribution of policy parameters, where samples from this distribution will represent strong policies. SVPG optimizes this distribution of policy parameters with (relative) [entropy regularization](https://paperswithcode.com/method/entropy-regularization). The (relative) entropy term explicitly encourages exploration in the parameter space while also optimizing the expected utility of policies drawn from this distribution. Stein variational gradient descent (SVGD) is then used to optimize this distribution. SVGD leverages efficient deterministic dynamics to transport a set of particles to approximate given target posterior distributions. The update takes the form: $$ \nabla\theta\_i = \frac{1}{n}\sum\_{j=1}^n \left[\nabla\_{\theta\_{j}} \left(\frac{1}{\alpha} J(\theta\_{j}) + \log q\_0(\theta\_j)\right)k(\theta\_j, \theta\_i) + \nabla\_{\theta\_j} k(\theta\_j, \theta\_i)\right]$$ Note that here the magnitude of $\alpha$ adjusts the relative importance between the policy gradient and the prior term $\nabla_{\theta_j} \left(\frac{1}{\alpha} J(\theta_j) + \log q_0(\theta_j)\right)k(\theta_j, \theta_i)$ and the repulsive term $\nabla_{\theta_j} k(\theta_j, \theta_i)$. The repulsive functional is used to diversify particles to enable parameter exploration. A suitable $\alpha$ provides a good trade-off between exploitation and exploration. If $\alpha$ is too large, the Stein gradient would only drive the particles to be consistent with the prior $q_0$. As $\alpha \to 0$, this algorithm is reduced to running $n$ copies of independent policy gradient algorithms, if $\{\theta_i\}$ are initialized very differently.
A careful annealing scheme of $\alpha$ allows efficient exploration in the beginning of training and later focuses on exploitation towards the end of training.
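The particle update above can be sketched with an RBF kernel (a common SVGD choice; the kernel, bandwidth $h$, and the precomputed log-density gradients are assumptions of this sketch, and the policy-gradient estimation of $J$ is omitted):

```python
import numpy as np

def svgd_step(theta, grad_logp, h=1.0, lr=0.01):
    """One SVGD step over n policy-parameter particles theta of shape (n, d).

    grad_logp[i] holds the gradient of J(theta_i)/alpha + log q0(theta_i)
    for particle i. The RBF kernel k(x, y) = exp(-||x - y||^2 / h) supplies
    both the kernel-weighted driving term and the repulsive term.
    """
    diff = theta[:, None, :] - theta[None, :, :]        # (n, n, d)
    k = np.exp(-(diff ** 2).sum(-1) / h)                # kernel matrix (n, n)
    # Driving term: kernel-weighted average of log-density gradients.
    drive = k @ grad_logp
    # Repulsive term: sum_j grad_{theta_j} k(theta_j, theta_i),
    # which pushes particles apart to maintain diversity.
    repulse = (k[:, :, None] * (2.0 / h) * diff).sum(axis=1)
    n = theta.shape[0]
    return theta + lr * (drive + repulse) / n
```

With zero log-density gradients, only the repulsive term acts and the particles spread apart, which is exactly the exploration behaviour the entropy term encourages.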
Given the following machine learning model name: AdaBound, provide a description of the model
**AdaBound** is a variant of the [Adam](https://paperswithcode.com/method/adam) stochastic optimizer which is designed to be more robust to extreme learning rates. Dynamic bounds are employed on learning rates, where the lower and upper bound are initialized as zero and infinity respectively, and they both smoothly converge to a constant final step size. AdaBound can be regarded as an adaptive method at the beginning of training, and thereafter it gradually and smoothly transforms to [SGD](https://paperswithcode.com/method/sgd) (or with momentum) as the time step increases. $$ g\_{t} = \nabla{f}\_{t}\left(x\_{t}\right) $$ $$ m\_{t} = \beta\_{1t}m\_{t-1} + \left(1-\beta\_{1t}\right)g\_{t} $$ $$ v\_{t} = \beta\_{2}v\_{t-1} + \left(1-\beta\_{2}\right)g\_{t}^{2} \text{ and } V\_{t} = \text{diag}\left(v\_{t}\right) $$ $$ \hat{\eta}\_{t} = \text{Clip}\left(\alpha/\sqrt{V\_{t}}, \eta\_{l}\left(t\right), \eta\_{u}\left(t\right)\right) \text{ and } \eta\_{t} = \hat{\eta}\_{t}/\sqrt{t} $$ $$ x\_{t+1} = \Pi\_{\mathcal{F}, \text{diag}\left(\eta\_{t}^{-1}\right)}\left(x\_{t} - \eta\_{t} \odot m\_{t} \right) $$ Where $\alpha$ is the initial step size, and $\eta_{l}$ and $\eta_{u}$ are the lower and upper bound functions respectively.
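The clipped update can be sketched as follows (a minimal sketch: the particular bound schedules, controlled here by a hypothetical `gamma` convergence-speed parameter, and the omission of the projection $\Pi$ are assumptions of this illustration):

```python
import numpy as np

def adabound_step(x, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999,
                  final_lr=0.1, gamma=1e-3, eps=1e-8):
    """One AdaBound update on parameter vector x (t starts at 1).

    The per-coordinate step size alpha / sqrt(v) is clipped into
    [eta_l(t), eta_u(t)]. The bounds start at 0 and infinity and both
    converge to final_lr, so the method behaves like Adam early in
    training and like SGD late in training.
    """
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    eta_l = final_lr * (1 - 1 / (gamma * t + 1))   # rises from 0 to final_lr
    eta_u = final_lr * (1 + 1 / (gamma * t))       # falls from inf to final_lr
    eta = np.clip(alpha / (np.sqrt(v) + eps), eta_l, eta_u) / np.sqrt(t)
    return x - eta * m, m, v
```

Running the step on a simple quadratic shows the parameter shrinking toward the minimum while the effective learning rate stays inside the bounds.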
Given the following machine learning model name: Uncertainty Class Activation Map (U-CAM) Using Gradient Certainty Method, provide a description of the model
**U-CAM** is a method that obtains gradient-based certainty estimates that also provide [visual attention](https://paperswithcode.com/method/visual-attention) maps, applied in particular to the visual question answering task. It incorporates modern probabilistic deep learning methods, which are further improved by using the gradients for these estimates. This has two-fold benefits: a) improvement in obtaining certainty estimates that correlate better with misclassified samples, and b) improved attention maps that provide state-of-the-art results in terms of correlation with human attention regions. The improved attention maps yield consistent improvement across various visual question answering methods. The technique can therefore be thought of as a tool for obtaining improved certainty estimates and explanations for deep learning models.
Given the following machine learning model name: Twin Delayed Deep Deterministic, provide a description of the model
**TD3** builds on the [DDPG](https://paperswithcode.com/method/ddpg) algorithm for reinforcement learning, with a couple of modifications aimed at tackling overestimation bias in the value function. In particular, it utilises [clipped double Q-learning](https://paperswithcode.com/method/clipped-double-q-learning), delayed updates of the target and policy networks, and [target policy smoothing](https://paperswithcode.com/method/target-policy-smoothing) (which resembles a [SARSA](https://paperswithcode.com/method/sarsa)-based update and is safer, as it assigns higher value to actions that are resistant to perturbations).
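Two of these modifications can be sketched directly (a minimal sketch assuming the actor and target critics exist elsewhere; the default hyperparameters shown are common choices, not prescribed here):

```python
import numpy as np

def td3_target(r, done, next_q1, next_q2, gamma=0.99):
    """Clipped double-Q target: taking the minimum of the two target
    critics' estimates at the (smoothed) next action curbs the
    overestimation bias that a single critic accumulates."""
    return r + gamma * (1.0 - done) * np.minimum(next_q1, next_q2)

def smooth_target_action(actor_out, noise_std=0.2, noise_clip=0.5,
                         low=-1.0, high=1.0):
    """Target policy smoothing: clipped Gaussian noise is added to the
    target action, so the value estimate is averaged over a small
    neighbourhood of actions rather than a single point."""
    noise = np.clip(np.random.normal(0.0, noise_std, actor_out.shape),
                    -noise_clip, noise_clip)
    return np.clip(actor_out + noise, low, high)
```

The third modification, delayed updates, simply means the actor and target networks are updated once every few critic updates.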
Given the following machine learning model name: Feature Fusion Module v2, provide a description of the model
**Feature Fusion Module v2** is a feature fusion module from the [M2Det](https://paperswithcode.com/method/m2det) object detection model, and is crucial for constructing the final multi-level feature pyramid. It uses [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) layers to compress the channels of the input features and a concatenation operation to aggregate these feature maps. FFMv2 takes the base feature and the largest output feature map of the previous [Thinned U-Shape Module](https://paperswithcode.com/method/tum) (TUM) – these two are of the same scale – as input, and produces the fused feature for the next TUM.
Given the following machine learning model name: Graph Network-based Simulators, provide a description of the model
**Graph Network-Based Simulators** is a type of graph neural network that represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing.
Given the following machine learning model name: Confidence Intervals for Diffusion Models, provide a description of the model
Given a corrupted input image, **Con*ffusion*** repurposes a pretrained diffusion model to generate lower and upper bounds around each reconstructed pixel, such that the true pixel value is guaranteed to fall within these bounds with probability $p$.
Given the following machine learning model name: StreaMRAK, provide a description of the model
**StreaMRAK** is a streaming version of kernel ridge regression. It divides the problem into several levels of resolution, which allows continual refinement of the predictions.
Given the following machine learning model name: Sparse Switchable Normalization, provide a description of the model
**Sparse Switchable Normalization (SSN)** is a variant on [Switchable Normalization](https://paperswithcode.com/method/switchable-normalization) where the importance ratios are constrained to be sparse. Unlike $\ell_1$ and $\ell_0$ constraints that impose difficulties in optimization, the constrained optimization problem is turned into feed-forward computation through [SparseMax](https://paperswithcode.com/method/sparsemax), which is a sparse version of [softmax](https://paperswithcode.com/method/softmax).
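The sparsemax projection that makes the importance ratios sparse can be sketched as follows (a standard sparsemax implementation, shown here only to illustrate why exact zeros arise; how SSN wires it into the normalizer selection is not shown):

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.

    Unlike softmax, sparsemax can return exact zeros: coordinates below
    the threshold tau are cut off entirely. In SSN this is what lets each
    layer select a sparse subset of normalizers in a single feed-forward
    computation, avoiding explicit l0/l1 constrained optimization.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cum       # coordinates kept in the support
    k_z = k[support][-1]                   # size of the support
    tau = (cum[k_z - 1] - 1) / k_z         # threshold
    return np.maximum(z - tau, 0.0)
```

For a strongly peaked input the output collapses to a one-hot vector, whereas softmax would still assign every entry non-zero mass.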
Given the following machine learning model name: YOLOP, provide a description of the model
**YOLOP** is a panoptic driving perception network for handling traffic object detection, drivable area segmentation and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders to handle the specific tasks, and can be thought of as a lightweight version of Tesla's HydraNet model for self-driving cars. A lightweight CNN, from Scaled-YOLOv4, is used as the encoder to extract features from the image. These feature maps are then fed to three decoders to complete their respective tasks. The detection decoder is based on the current best-performing single-stage detection network, [YOLOv4](https://paperswithcode.com/method/yolov4), for two main reasons: (1) the single-stage detection network is faster than the two-stage detection network; (2) the grid-based prediction mechanism of the single-stage detector is more related to the other two semantic segmentation tasks, whereas instance segmentation is usually combined with a region-based detector, as in [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn). The feature map output by the encoder incorporates semantic features of different levels and scales, and the segmentation branch can use these feature maps to complete pixel-wise semantic prediction.
Given the following machine learning model name: Pointer Network, provide a description of the model
**Pointer Networks** tackle problems where the input and output are both sequences, but which cannot be solved by seq2seq models because the discrete categories of output elements depend on the variable input size (and are not decided in advance). A Pointer Network learns the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. It solves the problem of variable-size output dictionaries using [additive attention](https://paperswithcode.com/method/additive-attention): instead of using attention to blend hidden units of an encoder into a context vector at each decoder step, Pointer Networks use attention as a pointer to select a member of the input sequence as the output. Pointer-Nets can be used to learn approximate solutions to challenging geometric problems such as finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem.
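One decoder step of the pointer mechanism can be sketched as follows (a minimal sketch; the weight names `W1`, `W2`, `v` follow the usual additive-attention parameterization and the encoder/decoder recurrences are assumed to exist outside):

```python
import numpy as np

def pointer_attention(decoder_state, encoder_states, W1, W2, v):
    """Additive attention used as a pointer.

    The softmax over encoder positions IS the output distribution:
    nothing is blended into a context vector, so the output vocabulary
    automatically has the same size as the (variable-length) input.
    Returns the distribution over input positions and its argmax.
    """
    # scores_j = v^T tanh(W1 e_j + W2 d), one score per input position
    scores = v @ np.tanh(W1 @ encoder_states.T
                         + (W2 @ decoder_state)[:, None])   # shape (n,)
    e = np.exp(scores - scores.max())                       # stable softmax
    probs = e / e.sum()
    return probs, int(np.argmax(probs))
```

At each decoder step the selected input element (or, during training, the ground-truth one) is fed back in, so the network emits a sequence of input indices such as the vertices of a convex hull.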
Given the following machine learning model name: NADAM, provide a description of the model
**NADAM**, or **Nesterov-accelerated Adaptive Moment Estimation**, combines [Adam](https://paperswithcode.com/method/adam) and [Nesterov Momentum](https://paperswithcode.com/method/nesterov-accelerated-gradient). The update rule is of the form: $$ \theta\_{t+1} = \theta\_{t} - \frac{\eta}{\sqrt{\hat{v}\_{t}}+\epsilon}\left(\beta\_{1}\hat{m}\_{t} + \frac{(1-\beta\_{1})g\_{t}}{1-\beta^{t}\_{1}}\right)$$ Image Source: [Incorporating Nesterov Momentum into Adam](http://cs229.stanford.edu/proj2015/054_report.pdf)
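The update can be sketched as follows (a minimal sketch; the look-ahead bias correction on $\hat{m}_t$ is one common variant of the method, and the default hyperparameters are illustrative):

```python
import numpy as np

def nadam_step(x, g, m, v, t, eta=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """One NADAM update on parameter vector x (t starts at 1).

    Adam's bias-corrected first and second moments are combined with a
    Nesterov-style look-ahead: the current gradient g is mixed directly
    into the momentum term instead of waiting for the next step.
    """
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** (t + 1))   # look-ahead bias correction
    v_hat = v / (1 - beta2 ** t)
    x = x - eta / (np.sqrt(v_hat) + eps) * (
        beta1 * m_hat + (1 - beta1) * g / (1 - beta1 ** t))
    return x, m, v
```

On a simple quadratic the iterate steadily contracts toward the minimum, as with Adam, but with slightly more responsive momentum.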
Given the following machine learning model name: ProxylessNet-GPU, provide a description of the model
**ProxylessNet-GPU** is a convolutional neural network architecture learnt with the [ProxylessNAS](https://paperswithcode.com/method/proxylessnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) algorithm that is optimized for GPU devices. It uses inverted residual blocks (MBConvs) from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2) as its basic building block.
Given the following machine learning model name: LOGAN, provide a description of the model
**LOGAN** is a generative adversarial network that uses a latent optimization approach based on [natural gradient descent](https://paperswithcode.com/method/natural-gradient-descent) (NGD). For the Fisher matrix in NGD, the authors use the empirical Fisher $F'$ with Tikhonov damping: $$ F' = g \cdot g^{T} + \beta{I} $$ They also use Euclidean norm regularization for the optimization step. For LOGAN's base architecture, [BigGAN-deep](https://paperswithcode.com/method/biggan-deep) is used with a few modifications: (1) increasing the size of the latent source from $186$ to $256$, to compensate for the randomness of the source lost when optimising $z$; (2) using the uniform distribution $U\left(-1, 1\right)$ instead of the standard normal distribution $N\left(0, 1\right)$ for $p\left(z\right)$, to be consistent with the clipping operation; (3) using leaky [ReLU](https://paperswithcode.com/method/relu) (with a slope of 0.2 for the negative part) instead of ReLU as the non-linearity, for smoother gradient flow for $\frac{\delta{f}\left(z\right)}{\delta{z}}$.
Given the following machine learning model name: Feature Information Entropy Regularized Cross Entropy, provide a description of the model
**FIERCE** (Feature Information Entropy Regularized Cross Entropy) is an entropic regularization applied to the **feature** space.
Given the following machine learning model name: GLM, provide a description of the model
**GLM** is a bilingual (English and Chinese) pre-trained transformer-based language model that follows the traditional architecture of decoder-only autoregressive language modeling. It leverages autoregressive blank infilling as its training objective.
Given the following machine learning model name: Class Attention, provide a description of the model
A **Class Attention** layer, or **CA Layer**, is an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) for [vision transformers](https://paperswithcode.com/methods/category/vision-transformer) used in [CaiT](https://paperswithcode.com/method/cait) that aims to extract information from a set of processed patches. It is identical to a [self-attention layer](https://paperswithcode.com/method/scaled), except that it relies on the attention between (i) the class embedding $x_{\text {class }}$ (initialized at CLS in the first CA) and (ii) itself plus the set of frozen patch embeddings $x_{\text {patches }} .$ Considering a network with $h$ heads and $p$ patches, and denoting by $d$ the embedding size, the multi-head class-attention is parameterized with several projection matrices, $W_{q}, W_{k}, W_{v}, W_{o} \in \mathbf{R}^{d \times d}$, and the corresponding biases $b_{q}, b_{k}, b_{v}, b_{o} \in \mathbf{R}^{d} .$ With this notation, the computation of the CA residual block proceeds as follows. We first augment the patch embeddings (in matrix form) as $z=\left[x_{\text {class }}, x_{\text {patches }}\right]$. We then perform the projections: $$Q=W\_{q} x\_{\text {class }}+b\_{q}$$ $$K=W\_{k} z+b\_{k}$$ $$V=W\_{v} z+b\_{v}$$ The class-attention weights are given by $$ A=\operatorname{Softmax}\left(Q . K^{T} / \sqrt{d / h}\right) $$ where $Q . K^{T} \in \mathbf{R}^{h \times 1 \times p}$. This attention is involved in the weighted sum $A \times V$ to produce the residual output vector $$ \operatorname{out}\_{\mathrm{CA}}=W\_{o} A V+b\_{o} $$ which is in turn added to $x\_{\text {class }}$ for subsequent processing.
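A single-head version of the computation above can be sketched in a few lines (a sketch with $h = 1$; CaiT uses multiple heads, and the weight shapes here follow the notation in the text):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def class_attention(x_class, x_patches, Wq, Wk, Wv, Wo, bq, bk, bv, bo):
    """Single-head class attention residual branch.

    Only the class token forms a query; keys and values come from the
    class token plus the frozen patch embeddings, so the cost is linear
    in the number of patches rather than quadratic.
    """
    d = x_class.shape[0]
    z = np.vstack([x_class, x_patches])      # (1 + p, d): class + patches
    q = Wq @ x_class + bq                    # (d,) single query
    k = z @ Wk.T + bk                        # (1 + p, d)
    v = z @ Wv.T + bv                        # (1 + p, d)
    a = softmax(q @ k.T / np.sqrt(d))        # attention over 1 + p positions
    return Wo @ (a @ v) + bo                 # residual branch output
```

In CaiT this output is added back to $x_{\text{class}}$ only; the patch embeddings are left untouched by the class-attention stage.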
Given the following machine learning model name: Sparse Sinkhorn Attention, provide a description of the model
**Sparse Sinkhorn Attention** is an attention mechanism that reduces the memory complexity of the [dot-product attention mechanism](https://paperswithcode.com/method/scaled) and is capable of learning sparse attention outputs. It is based on the idea of differentiable sorting of internal representations within the self-attention module. SSA incorporates a meta sorting network that learns to rearrange and sort input sequences. Sinkhorn normalization is used to normalize the rows and columns of the sorting matrix. The actual SSA attention mechanism then acts on the block sorted sequences.
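The Sinkhorn normalization step can be sketched in log space (a minimal sketch of this one component, not the full attention layer; the iteration count and input matrix are illustrative):

```python
import numpy as np

def _logsumexp(s, axis):
    m = s.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(s - m).sum(axis=axis, keepdims=True))

def sinkhorn(log_scores, n_iters=50):
    """Sinkhorn normalization of the meta sorting network's block scores.

    Alternately normalizing rows and columns in log space drives the
    matrix toward a doubly stochastic (relaxed permutation) matrix, which
    is then used to rearrange blocks of the sequence before attention.
    """
    s = np.asarray(log_scores, dtype=float)
    for _ in range(n_iters):
        s = s - _logsumexp(s, axis=1)   # row normalization
        s = s - _logsumexp(s, axis=0)   # column normalization
    return np.exp(s)
```

Because every operation is differentiable, gradients flow through the (soft) sorting decision during training.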
Given the following machine learning model name: Feature-Centric Voting, provide a description of the model
Given the following machine learning model name: RoI Tanh-polar Transform, provide a description of the model
Given the following machine learning model name: NPID++, provide a description of the model
**NPID++** (Non-Parametric Instance Discrimination) is a self-supervised method that takes a non-parametric classification approach. It improves upon [NPID](https://paperswithcode.com/method/npid) by using more negative samples and training for more epochs.
Given the following machine learning model name: NICE-SLAM: Neural Implicit Scalable Encoding for SLAM, provide a description of the model
**NICE-SLAM** is a dense RGB-D SLAM system that combines neural implicit decoders with hierarchical grid-based representations, allowing it to be applied to large-scale scenes. Neural implicit representations have recently shown encouraging results in simultaneous localization and mapping (SLAM), but existing methods produce over-smoothed scene reconstructions and have difficulty scaling up to large scenes, mainly because their simple fully-connected network architectures do not incorporate local information in the observations. NICE-SLAM addresses this by introducing a hierarchical scene representation that incorporates multi-level local information; optimizing this representation with pre-trained geometric priors enables detailed reconstruction of large indoor scenes. Compared to recent neural implicit SLAM systems, the approach is more scalable, efficient, and robust, and experiments on five challenging datasets demonstrate competitive results in both mapping and tracking quality.
Given the following machine learning model name: Synchronized Batch Normalization, provide a description of the model
**Synchronized Batch Normalization (SyncBN)** is a type of [batch normalization](https://paperswithcode.com/method/batch-normalization) used for multi-GPU training. Standard batch normalization only normalizes the data within each device (GPU). SyncBN normalizes the input within the whole mini-batch.
Given the following machine learning model name: Cascade Corner Pooling, provide a description of the model
**Cascade Corner Pooling** is a pooling layer for object detection that builds upon the [corner pooling](https://paperswithcode.com/method/corner-pooling) operation. Corners are often outside the objects, which lack local appearance features at those locations. [CornerNet](https://paperswithcode.com/method/cornernet) uses corner pooling to address this issue, finding the maximum values along the boundary directions so as to determine corners. However, this makes corners sensitive to the edges. To address this problem, the corners need to see the visual patterns of objects. Cascade corner pooling first looks along a boundary to find a boundary maximum value, then looks inside along the location of the boundary maximum value to find an internal maximum value, and finally adds the two maximum values together. By doing this, the corners obtain both the boundary information and the visual patterns of objects.
Given the following machine learning model name: Baidu Dependency Parser, provide a description of the model
**DDParser**, or **Baidu Dependency Parser**, is a Chinese dependency parser trained on a large-scale manually labeled dataset called Baidu Chinese Treebank (DuCTB). For inputs, for the $i$-th word, its input vector $e\_{i}$ is the concatenation of the word embedding and the character-level representation: $$ e\_{i}=e\_{i}^{word} \oplus \operatorname{CharLSTM}\left(w\_{i}\right) $$ where $\operatorname{CharLSTM}\left(w\_{i}\right)$ is the output vector obtained by feeding the character sequence into a [BiLSTM](https://paperswithcode.com/method/bilstm) layer. Experimental results on the DuCTB dataset show that replacing POS tag embeddings with $\operatorname{CharLSTM}\left(w\_{i}\right)$ leads to an improvement. For the encoder, three BiLSTM layers are applied over the input vectors for context encoding. Denote by $r\_{i}$ the output vector of the top-layer BiLSTM for $w\_{i}$. The dependency parser of [Dozat and Manning](https://arxiv.org/abs/1611.01734) is used: dimension-reducing MLPs are applied to each recurrent output vector $r\_{i}$ before applying the biaffine transformation, which has the advantage of stripping away information not relevant to the current decision. Biaffine attention is then used in both the dependency arc classifier and the relation classifier. The computations of all symbols in the Figure are shown below: $$ h\_{i}^{d\text{-}arc}=\operatorname{MLP}^{d\text{-}arc}\left(r\_{i}\right) $$ $$ h\_{i}^{h\text{-}arc}=\operatorname{MLP}^{h\text{-}arc}\left(r\_{i}\right) $$ $$ h\_{i}^{d\text{-}rel}=\operatorname{MLP}^{d\text{-}rel}\left(r\_{i}\right) $$ $$ h\_{i}^{h\text{-}rel}=\operatorname{MLP}^{h\text{-}rel}\left(r\_{i}\right) $$ $$ S^{arc}=\left(H^{d\text{-}arc} \oplus I\right) U^{arc} H^{h\text{-}arc} $$ $$ S^{rel}=\left(H^{d\text{-}rel} \oplus I\right) U^{rel}\left(\left(H^{h\text{-}rel}\right)^{T} \oplus I\right)^{T} $$ For the decoder, the first-order Eisner algorithm is used to ensure that the output is a projective tree.
Based on the dependency tree built by the biaffine parser, a word sequence is obtained through an in-order traversal of the tree; the output is a projective tree only if this word sequence is in order.
Given the following machine learning model name: Masked Convolution, provide a description of the model
A **Masked Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) which masks certain pixels so that the model can only predict based on pixels already seen. This type of convolution was introduced with [PixelRNN](https://paperswithcode.com/method/pixelrnn) generative models, where an image is generated pixel by pixel, to ensure that the model was conditional only on pixels already visited.
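The kernel mask can be sketched as follows (a minimal sketch following the PixelCNN convention of a type-'A' mask for the first layer, which also hides the centre pixel, and a type-'B' mask for later layers, which keeps it):

```python
import numpy as np

def causal_mask(kh, kw, mask_type="A"):
    """Mask for a masked-convolution kernel of shape (kh, kw).

    Entries below the centre row, and entries to the right of the centre
    on the centre row, are zeroed, so a pixel's prediction can only
    depend on pixels already generated in raster-scan order. Type 'A'
    also zeroes the centre weight; type 'B' keeps it.
    """
    mask = np.ones((kh, kw))
    ch, cw = kh // 2, kw // 2
    start = cw if mask_type == "A" else cw + 1
    mask[ch, start:] = 0.0     # centre row: mask centre (A) / right of it (B)
    mask[ch + 1:, :] = 0.0     # all rows below the centre
    return mask
```

In practice the mask is multiplied elementwise into the convolution weights before every forward pass, so the masked positions contribute nothing regardless of their learned values.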
Given the following machine learning model name: Retrace, provide a description of the model
**Retrace** is an off-policy Q-value estimation algorithm which has guaranteed convergence for a target and behaviour policy $\left(\pi, \beta\right)$. With off-policy rollout for TD learning, we must use importance sampling for the update: $$ \Delta{Q}^{\text{imp}}\left(S\_{t}, A\_{t}\right) = \gamma^{t}\prod\_{1\leq{\tau}\leq{t}}\frac{\pi\left(A\_{\tau}\mid{S\_{\tau}}\right)}{\beta\left(A\_{\tau}\mid{S\_{\tau}}\right)}\delta\_{t} $$ This product term can lead to high variance, so Retrace modifies $\Delta{Q}$ to have importance weights truncated by no more than a constant $c$: $$ \Delta{Q}^{\text{ret}}\left(S\_{t}, A\_{t}\right) = \gamma^{t}\prod\_{1\leq{\tau}\leq{t}}\min\left(c, \frac{\pi\left(A\_{\tau}\mid{S\_{\tau}}\right)}{\beta\left(A\_{\tau}\mid{S\_{\tau}}\right)}\right)\delta\_{t} $$
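The truncated weights can be sketched as follows (a minimal sketch of the correction coefficients only; index $t$ here counts from 0 with the ratio at $\tau = t$ included in the product, and the TD errors $\delta_t$ are assumed to be computed elsewhere):

```python
import numpy as np

def retrace_coeffs(pi_probs, beta_probs, c=1.0, gamma=0.99):
    """Discounted cumulative products of truncated importance ratios.

    Returns, for each step t along the trajectory, the coefficient
    gamma^t * prod_{tau <= t} min(c, pi(a|s) / beta(a|s)) that multiplies
    the TD error delta_t in the Retrace update. Truncation at c bounds
    the variance that untruncated importance-sampling products accumulate.
    """
    ratios = np.minimum(c, np.asarray(pi_probs) / np.asarray(beta_probs))
    return gamma ** np.arange(len(ratios)) * np.cumprod(ratios)
```

When $\pi$ assigns much higher probability than $\beta$ the ratio is clipped to $c$, so a single surprising action can no longer blow up the whole product.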