Given the following machine learning model name: Graph Contrastive Coding, provide a description of the model
**Graph Contrastive Coding** is a self-supervised graph neural network pre-training framework to capture the universal network topological properties across multiple networks. GCC's pre-training task is designed as subgraph instance discrimination in and across networks and leverages contrastive learning to empower graph neural networks to learn the intrinsic and transferable structural representations.
Given the following machine learning model name: SKEP, provide a description of the model
**SKEP** is a self-supervised pre-training method for sentiment analysis. With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity and aspect level into the pre-trained sentiment representation. In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between the words in a pair. SKEP contains two parts: (1) Sentiment masking recognizes the sentiment information of an input sequence based on automatically-mined sentiment knowledge, and produces a corrupted version by removing this information. (2) Sentiment pre-training objectives require the transformer to recover the removed information from the corrupted version. The three prediction objectives on top are jointly optimized: Sentiment Word (SW) prediction (on $\mathbf{x}\_{9}$), Word Polarity (SP) prediction (on $\mathbf{x}\_{6}$ and $\mathbf{x}\_{9}$), and Aspect-Sentiment pair (AP) prediction (on $\mathbf{x}\_{1}$). Here, the smiley denotes positive polarity. Notably, on $\mathbf{x}\_{6}$, only SP is calculated without SW, as its original word has already been predicted in the pair prediction on $\mathbf{x}\_{1}$.
Given the following machine learning model name: SimCLRv2, provide a description of the model
**SimCLRv2** is a semi-supervised learning method for learning from few labeled examples while making best use of a large amount of unlabeled data. It is a modification of a recently proposed contrastive learning framework, [SimCLR](https://www.paperswithcode.com/method/simclr). It improves upon it in three major ways: 1. To fully leverage the power of general pre-training, larger [ResNet](https://paperswithcode.com/method/resnet) models are explored. Unlike SimCLR and other previous work, whose largest model is ResNet-50 (4×), SimCLRv2 trains models that are deeper but less wide. The largest model trained is a 152 layer ResNet with 3× wider channels and [selective kernels](https://paperswithcode.com/method/selective-kernel-convolution) (SK), a channel-wise attention mechanism that improves the parameter efficiency of the network. By scaling up the model from ResNet-50 to ResNet-152 (3×+SK), a 29% relative improvement is obtained in top-1 accuracy when fine-tuned on 1% of labeled examples. 2. The capacity of the non-linear network $g(·)$ (a.k.a. projection head) is increased, by making it deeper. Furthermore, instead of throwing away $g(·)$ entirely after pre-training as in SimCLR, fine-tuning occurs from a middle layer. This small change yields a significant improvement for both linear evaluation and fine-tuning with only a few labeled examples. Compared to SimCLR with 2-layer projection head, by using a 3-layer projection head and fine-tuning from the 1st layer of projection head, it results in as much as 14% relative improvement in top-1 accuracy when fine-tuned on 1% of labeled examples. 3. The memory mechanism of [MoCo v2](https://paperswithcode.com/method/moco-v2) is incorporated, which designates a memory network (with a moving average of weights for stabilization) whose output will be buffered as negative examples. 
Since training is based on large mini-batches, which already supply many contrasting negative examples, this change yields an improvement of only ∼1% for linear evaluation as well as when fine-tuning on 1% of labeled examples.
Given the following machine learning model name: Root Mean Square Layer Normalization, provide a description of the model
RMSNorm regularizes the summed inputs to a neuron in one layer according to root mean square (RMS), giving the model re-scaling invariance property and implicit learning rate adaptation ability. RMSNorm is computationally simpler and thus more efficient than LayerNorm.
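The RMS statistic is simple enough to sketch directly. Below is a minimal NumPy sketch of the forward computation, assuming the usual formulation with a learned per-feature gain; the function name and the epsilon value are illustrative, not from the original description.

```python
import numpy as np

def rms_norm(x, gain, eps=1e-8):
    # Normalize by the root mean square of the summed inputs (no mean
    # subtraction, unlike LayerNorm), then apply a learned per-feature gain.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * gain

x = np.array([[3.0, -4.0]])          # RMS = sqrt((9 + 16) / 2)
y = rms_norm(x, gain=np.ones(2))
```

Because only a scaling is applied, re-scaling the input leaves the output unchanged, which is the invariance property mentioned above.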
Given the following machine learning model name: Fourier Contour Embedding, provide a description of the model
**Fourier Contour Embedding** is a text instance representation that allows networks to learn diverse text geometry variances. Most existing methods model text instances in the image spatial domain via masks or contour point sequences in the Cartesian or polar coordinate system. However, the mask representation can lead to expensive post-processing, while the point-sequence representation may have limited capability to model texts with highly-curved shapes. This motivates modeling text instances in the Fourier domain.
Given the following machine learning model name: FBNet, provide a description of the model
**FBNet** is a convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). It utilises a basic image model block, inspired by [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), that employs depthwise convolutions and an inverted residual structure (see components).
Given the following machine learning model name: DeLighT Block, provide a description of the model
A **DeLighT Block** is a block used in the [DeLighT](https://paperswithcode.com/method/delight) [transformer](https://paperswithcode.com/method/transformer) architecture. It uses a [DExTra](https://paperswithcode.com/method/dextra) transformation to reduce the dimensionality of the vectors entered into the attention layer, where a [single-headed attention](https://paperswithcode.com/method/single-headed-attention) module is used. Since the DeLighT block learns wider representations of the input across different layers using DExTra, it enables the authors to replace [multi-head attention](https://paperswithcode.com/method/multi-head-attention) with single-head attention. This is then followed by a light-weight FFN which, rather than expanding the dimension (as in normal Transformers which widen to a dimension 4x the size), imposes a bottleneck and squeezes the dimensions. Again, the reason for this is that the DExTra transformation has already incorporated wider representations so we can squeeze instead at this layer.
Given the following machine learning model name: CTRL, provide a description of the model
**CTRL** is a conditional [transformer](https://paperswithcode.com/method/transformer) language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence.
Given the following machine learning model name: Random elastic image morphing, provide a description of the model
M. Bulacu, A. Brink, T. v. d. Zant and L. Schomaker, "Recognition of Handwritten Numerical Fields in a Large Single-Writer Historical Collection," 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 2009, pp. 808-812, doi: 10.1109/ICDAR.2009.8. Code: https://github.com/GrHound/imagemorph.c In contrast with the EM algorithm (Baum-Welch) for HMMs, training the basic character recognizer for a segmentation-based handwriting recognition system is a tricky issue without a standard solution. Our approach was to collect a labeled base set of digit images segmented by hand and then to augment this data by generating synthetic examples using random geometric distortions. We were inspired by the record performance in digit recognition reported in Simard et al. (2003), but developed our own algorithm for this purpose. For every pixel (i,j) of the template image, a random displacement vector ($\Delta x, \Delta y$) is generated. The displacement field of the complete image is smoothed using a Gaussian convolution kernel with standard deviation $\sigma$. The field is finally rescaled to an average amplitude A. The new morphed image is generated by sampling the template at the displaced positions $i' = i + \Delta x$, $j' = j + \Delta y$ using bilinear interpolation. This morphing process is controlled by two parameters: the smoothing radius r and the average pixel displacement D, both measured in units of pixels. An intuitive interpretation is to imagine that the characters are written on a rubber sheet to which we apply non-uniform random local distortions, contracting one part while perhaps expanding another part of the character (see Fig. 5). This random elastic morphing is more general than affine transforms, providing a rich ensemble of shape variations. We applied it to our base set of labeled digits (~130 samples per class) to obtain a much expanded training dataset (from 1 up to 80 times).
The expansion factor f controls the amount of synthetic data: for every base example, f - 1 additional morphed patterns are generated and used in training. This is a cheap method relying on random numbers and basic computer graphics, so a virtually infinite volume of training samples can be fabricated. This stratagem is very successful and does not increase the load at recognition time for parametric classifiers. Essentially, we turn the tables around and, instead of trying to recognize a character garbled in an unpredictable way by the writer in the instantaneous act of handwriting, we generate the deformations ourselves, while training a neural network to become immune to such distortions. The accompanying image, a crop of an RGB page scan containing the cursive handwritten word 'Zwolle', was morphed a number of times with parameters dist=1.5, radius=8.5. This distortion is sufficient to introduce a believable variance in the appearance: `imagemorph 1.5 8.5 < Zwolle.ppm > Zwolle-morphed.ppm`. The Netpbm image format is common in many CV tools; you can use ImageMagick's convert or other tools to convert to/from .ppm. Also see: P. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proc. of 7th ICDAR, pp. 958-962, Edinburgh, Scotland, 2003.
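The procedure above (random per-pixel displacements, Gaussian smoothing of the field, rescaling to an average amplitude, bilinear sampling) can be sketched in NumPy. This is an illustrative re-implementation, not the C code from the repository; the separable smoothing kernel width (3σ) and the edge clipping are implementation choices of this sketch.

```python
import numpy as np

def elastic_morph(img, amplitude=1.5, sigma=8.5, rng=None):
    """Random elastic morphing of a 2-D grayscale image."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    # Random displacement vector for every pixel.
    dx = rng.uniform(-1, 1, (h, w))
    dy = rng.uniform(-1, 1, (h, w))
    # Separable Gaussian smoothing of the displacement field (NumPy-only).
    k = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-k ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    def smooth(f):
        f = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, f)
        return np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, f)
    dx, dy = smooth(dx), smooth(dy)
    # Rescale the field to the requested average displacement amplitude.
    dx *= amplitude / (np.mean(np.abs(dx)) + 1e-12)
    dy *= amplitude / (np.mean(np.abs(dy)) + 1e-12)
    # Bilinear interpolation at the displaced coordinates.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sx = np.clip(xs + dx, 0, w - 1.001)
    sy = np.clip(ys + dy, 0, h - 1.001)
    x0 = sx.astype(int); y0 = sy.astype(int)
    fx = sx - x0; fy = sy - y0
    return (img[y0, x0] * (1 - fx) * (1 - fy)
            + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy
            + img[y0 + 1, x0 + 1] * fx * fy)
```

Calling `elastic_morph` repeatedly on the same template with different seeds yields the f - 1 synthetic variants per base example described above.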
Given the following machine learning model name: Blind Image Decomposition Network, provide a description of the model
**BIDeN**, or **Blind Image Decomposition Network**, is a model for blind image decomposition, which requires separating a superimposed image into its constituent underlying images in a blind setting, that is, both the source components involved in mixing and the mixing mechanism are unknown. For example, rain may consist of multiple components, such as rain streaks, raindrops, snow, and haze. The Figure shows an example where $N = 4$, $L = 2$, $x = \{a, b, c, d\}$, and $I = \{1, 3\}$. $a, c$ are selected and passed to the mixing function $f$, which outputs the mixed input image $z = f\left(a, c\right)$. The generator consists of an encoder $E$ with three branches and multiple heads $H$. $\bigotimes$ denotes the concatenation operation. The depth and receptive field of each branch differ in order to capture multiple scales of features. Each specified head points to the corresponding source component, and the number of heads varies with the maximum number of source components $N$. All reconstructed images $\left(a', c'\right)$ and their corresponding real images $\left(a, c\right)$ are sent to an unconditional discriminator. The discriminator also predicts the source components of the input image $z$. The outputs from the other heads $\left(b', d'\right)$ do not contribute to the optimization.
Given the following machine learning model name: Center-pivot convolution, provide a description of the model
Given the following machine learning model name: FASFA: A Novel Next-Generation Backpropagation Optimizer, provide a description of the model
This paper introduces the fast adaptive stochastic function accelerator (FASFA) for gradient-based optimization of stochastic objective functions. It is based on Nesterov-enhanced first and second momentum estimates. The method is simple to implement because its hyperparameterization is intuitive and familiar. The training dynamics can be progressive or conservative depending on the decay rate sum. It works well with a low learning rate and small mini-batch size. Experiments and statistics showed convincing evidence that FASFA could be an ideal candidate for optimizing stochastic objective functions, particularly those generated by multilayer perceptrons with convolution and dropout layers. In addition, the convergence properties and regret bound provide results aligning with the online convex optimization framework. FASFA addresses the growing need for diverse optimizers by providing next-generation training dynamics for artificial intelligence algorithms. Future experiments could modify FASFA based on the infinity norm.
Given the following machine learning model name: Laplacian Pyramid Network, provide a description of the model
**LapStyle**, or **Laplacian Pyramid Network**, is a feed-forward style transfer method. It uses a [Drafting Network](https://paperswithcode.com/method/drafting-network) to transfer global style patterns at low resolution, and adopts higher-resolution [Revision Networks](https://paperswithcode.com/method/revision-network) to revise local styles in a pyramid manner according to outputs of multi-level Laplacian filtering of the content image. Higher-resolution details can be generated by stacking Revision Networks with multiple Laplacian pyramid levels. The final stylized image is obtained by aggregating the outputs of all pyramid levels. Specifically, an image pyramid $\left(\bar{x}\_{c}, r\_{c}\right)$ is first generated from the content image $x\_{c}$ with the help of a Laplacian filter. A rough low-resolution stylized image is then generated by the Drafting Network, after which the Revision Network generates a stylized detail image at high resolution. The final stylized image is produced by aggregating the output pyramid. $L$, $C$ and $A$ in an image represent the Laplacian, concatenation and aggregation operations respectively.
Given the following machine learning model name: rnnDrop, provide a description of the model
**rnnDrop** is a [dropout](https://paperswithcode.com/method/dropout) based regularization technique for [recurrent neural networks](https://paperswithcode.com/methods/category/recurrent-neural-networks). It amounts to using the same dropout mask at every timestep. It drops both the non-recurrent and recurrent connections. A simple figure to explain the idea is shown to the right. The figure shows an RNN being trained with rnnDrop for three frames $\left(t-1, t, t+1\right)$ on two different training sequences in the data (denoted as ‘sequence1’ and ‘sequence2’). The black circles denote the randomly omitted hidden nodes during training, and the dotted arrows stand for the model weights connected to those omitted nodes. *From: RnnDrop: A Novel Dropout for RNNs in ASR by Moon et al*
Given the following machine learning model name: VideoBERT, provide a description of the model
VideoBERT adapts the powerful [BERT](https://paperswithcode.com/method/bert) model to learn a joint visual-linguistic representation for video. It is used in numerous tasks, including action classification and video captioning.
Given the following machine learning model name: Gated Attention Networks, provide a description of the model
Gated Attention Networks (GaAN) is a new architecture for learning on graphs. Unlike the traditional multi-head attention mechanism, which equally consumes all attention heads, GaAN uses a convolutional sub-network to control each attention head’s importance. Image credit: [GaAN: Gated Attention Networks for Learning on Large and Spatiotemporal Graphs](https://paperswithcode.com/paper/gaan-gated-attention-networks-for-learning-on)
Given the following machine learning model name: High-level backbone, provide a description of the model
Given the following machine learning model name: DSelect-k, provide a description of the model
**DSelect-k** is a continuously differentiable and sparse gate for Mixture-of-experts (MoE), based on a novel binary encoding formulation. Given a user-specified parameter $k$, the gate selects at most $k$ out of the $n$ experts. The gate can be trained using first-order methods, such as stochastic gradient descent, and offers explicit control over the number of experts to select. This explicit control over sparsity leads to a cardinality-constrained optimization problem, which is computationally challenging. To circumvent this challenge, the authors use an unconstrained reformulation that is equivalent to the original problem. The reformulated problem uses a binary encoding scheme to implicitly enforce the cardinality constraint. By carefully smoothing the binary encoding variables, the reformulated problem can be effectively optimized using first-order methods such as [SGD](https://paperswithcode.com/method/sgd). The motivation for this method is that existing sparse gates, such as Top-k, are not smooth. The lack of smoothness can lead to convergence and statistical performance issues when training with gradient-based methods.
Given the following machine learning model name: RFB Net, provide a description of the model
**RFB Net** is a one-stage object detector that utilises a receptive field block module. It utilises a VGG16 backbone, and is otherwise quite similar to the [SSD](https://paperswithcode.com/method/ssd) architecture.
Given the following machine learning model name: VOS, provide a description of the model
**VOS** is a type of video object segmentation model consisting of two network components. The target appearance model consists of a light-weight module, which is learned during the inference stage using fast optimization techniques to predict a coarse but robust target segmentation. The segmentation model is exclusively trained offline, designed to process the coarse scores into high quality segmentation masks.
Given the following machine learning model name: Extremely Efficient Spatial Pyramid of Depth-wise Dilated Separable Convolutions, provide a description of the model
An **EESP Unit**, or Extremely Efficient Spatial Pyramid of Depth-wise Dilated Separable Convolutions, is an image model block designed for edge devices. It was proposed as part of the [ESPNetv2](https://paperswithcode.com/method/espnetv2) CNN architecture. This building block is based on a reduce-split-transform-merge strategy. The EESP unit first projects the high-dimensional input feature maps into low-dimensional space using groupwise pointwise convolutions and then learns the representations in parallel using depthwise dilated separable convolutions with different dilation rates. Different dilation rates in each branch allow the EESP unit to learn the representations from a large effective receptive field. To remove the gridding artifacts caused by dilated convolutions, the EESP fuses the feature maps using [hierarchical feature fusion](https://paperswithcode.com/method/hierarchical-feature-fusion) (HFF).
Given the following machine learning model name: DifferNet, provide a description of the model
Given the following machine learning model name: Focal Transformers, provide a description of the model
The **focal self-attention** is built to make Transformer layers scalable to high-resolution inputs. Instead of attending to all tokens at fine grain, the approach attends to fine-grain tokens only locally, and to summarized tokens globally. As such, it can cover as many regions as standard self-attention but at much lower cost. An image is first partitioned into patches, resulting in visual tokens. A patch embedding layer, consisting of a convolutional layer with filter and stride of the same size, then projects the patches into hidden features. This spatial feature map is then passed to four stages of focal Transformer blocks, each consisting of $N_i$ focal Transformer layers. Patch embedding layers are used in between stages to reduce the spatial size of the feature map by a factor of 2, while the feature dimension is increased by 2.
Given the following machine learning model name: Mixture Normalization, provide a description of the model
**Mixture Normalization** is a normalization technique that relies on an approximation of the probability density function of the internal representations. Any continuous distribution can be approximated with arbitrary precision using a Gaussian Mixture Model (GMM). Hence, instead of computing one set of statistical measures from the entire population (of instances in the mini-batch) as [Batch Normalization](https://paperswithcode.com/method/batch-normalization) does, Mixture Normalization works on sub-populations which can be identified by disentangling modes of the distribution, estimated via GMM. While BN can only scale and/or shift the whole underlying probability density function, Mixture Normalization operates like a soft piecewise normalizing transform, capable of completely re-structuring the data distribution by independently scaling and/or shifting individual modes of the distribution.
Given the following machine learning model name: Synthetic Minority Over-sampling Technique., provide a description of the model
Perhaps the most widely used approach to synthesizing new examples is the Synthetic Minority Oversampling Technique, or SMOTE for short. This technique was described by Nitesh Chawla, et al. in their 2002 paper named for the technique, "SMOTE: Synthetic Minority Over-sampling Technique." SMOTE works by selecting examples that are close in the feature space, drawing a line between a pair of such examples, and generating a new sample at a point along that line.
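The sampling step just described (pick an example, pick one of its k nearest neighbours, interpolate between them) can be sketched in a few lines of NumPy. This is a minimal sketch, not the full SMOTE algorithm from the paper; the function name and the brute-force neighbour search are choices of this illustration.

```python
import numpy as np

def smote_sample(X, k=3, n_new=5, rng=None):
    """Draw synthetic minority-class samples along lines between neighbours."""
    rng = np.random.default_rng(rng)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]   # k nearest neighbours, excluding X[i]
        j = rng.choice(nbrs)
        lam = rng.random()              # random position along the line segment
        new.append(X[i] + lam * (X[j] - X[i]))
    return np.array(new)
```

Because every synthetic point is a convex combination of two real minority examples, the new samples always lie inside the region spanned by the minority class.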
Given the following machine learning model name: Trans-Encoder, provide a description of the model
**Trans-Encoder** performs unsupervised knowledge distillation from a pretrained language model to *itself*, by alternating between its bi-encoder and cross-encoder forms.
Given the following machine learning model name: Class-activation map, provide a description of the model
Class activation maps can be used to interpret the prediction decisions made by a convolutional neural network (CNN). Image source: [Learning Deep Features for Discriminative Localization](https://paperswithcode.com/paper/learning-deep-features-for-discriminative)
Given the following machine learning model name: DenseNet, provide a description of the model
A **DenseNet** is a type of convolutional neural network that utilises [dense connections](https://paperswithcode.com/method/dense-connections) between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.
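The dense connectivity pattern (each layer receives the concatenation of all preceding feature maps and contributes its own) can be sketched abstractly. Here the `layers` are arbitrary callables standing in for DenseNet's composite conv functions, and channels-last arrays stand in for feature maps; both are assumptions of this sketch, not the actual architecture.

```python
import numpy as np

def dense_block(x, layers):
    """Dense connectivity: layer l sees the channel-wise concatenation of the
    input and the outputs of all layers 0..l-1."""
    feats = [x]
    for layer in layers:
        feats.append(layer(np.concatenate(feats, axis=-1)))
    return np.concatenate(feats, axis=-1)
```

Note how the channel count grows by each layer's output width (the "growth rate" in DenseNet terms) rather than being replaced, which is exactly the feature-reuse property described above.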
Given the following machine learning model name: AMSGrad, provide a description of the model
**AMSGrad** is a stochastic optimization method that seeks to fix a convergence issue with [Adam](https://paperswithcode.com/method/adam) based optimizers. AMSGrad uses the maximum of past squared gradients $v\_{t}$ rather than the exponential average to update the parameters: $$m\_{t} = \beta\_{1}m\_{t-1} + \left(1-\beta\_{1}\right)g\_{t} $$ $$v\_{t} = \beta\_{2}v\_{t-1} + \left(1-\beta\_{2}\right)g\_{t}^{2}$$ $$ \hat{v}\_{t} = \max\left(\hat{v}\_{t-1}, v\_{t}\right) $$ $$\theta\_{t+1} = \theta\_{t} - \frac{\eta}{\sqrt{\hat{v}_{t}} + \epsilon}m\_{t}$$
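The four update equations translate directly into a NumPy sketch. The quadratic toy objective, learning rate, and step count below are illustrative choices, not from the original description.

```python
import numpy as np

def amsgrad_step(theta, g, m, v, vhat, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g                 # first-moment estimate
    v = b2 * v + (1 - b2) * g ** 2            # second-moment estimate
    vhat = np.maximum(vhat, v)                # max of past squared gradients
    theta = theta - lr * m / (np.sqrt(vhat) + eps)
    return theta, m, v, vhat

# Minimize f(x) = x^2 (gradient 2x) starting from x = 1.
theta = np.array([1.0])
m = v = vhat = np.zeros(1)
for _ in range(500):
    theta, m, v, vhat = amsgrad_step(theta, 2 * theta, m, v, vhat)
```

The only difference from Adam is the `np.maximum` line: the denominator can never shrink, which is what restores the convergence guarantee.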
Given the following machine learning model name: Herring, provide a description of the model
**Herring** is a parameter-server-based distributed training method. It combines AWS's Elastic Fabric Adapter (EFA) with a novel parameter sharding technique that makes better use of the available network bandwidth. Herring uses EFA and a balanced fusion buffer to optimally use the total bandwidth available across all nodes in the cluster. Herring reduces gradients hierarchically, first inside each node and then across nodes. This enables more efficient use of PCIe bandwidth within the node and helps keep the gradient-averaging burden on the GPU low.
Given the following machine learning model name: Self-Calibrated Convolutions, provide a description of the model
Liu et al. presented self-calibrated convolution as a means to enlarge the receptive field at each spatial location. Self-calibrated convolution is used together with a standard convolution. It first divides the input feature $X$ into $X_{1}$ and $X_{2}$ in the channel domain. The self-calibrated branch first uses average pooling to reduce the input size and enlarge the receptive field: \begin{align} T_{1} = AvgPool_{r}(X_{1}) \end{align} where $r$ is the filter size and stride. Then a convolution is used to model the channel relationship and a bilinear interpolation operator $Up$ is used to upsample the feature map: \begin{align} X'_{1} = \text{Up}(Conv_2(T_1)) \end{align} Next, element-wise multiplication completes the self-calibration: \begin{align} Y'_{1} = Conv_3(X_1) \odot \sigma(X_1 + X'_1) \end{align} Finally, the output feature map is formed: \begin{align} Y_{1} &= Conv_4(Y'_{1}) \end{align} \begin{align} Y_2 &= Conv_1(X_2) \end{align} \begin{align} Y &= [Y_1; Y_2] \end{align} Such self-calibrated convolution can enlarge the receptive field of a network and improve its adaptability. It achieves excellent results in image classification and certain downstream tasks such as instance segmentation, object detection and keypoint detection.
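The calibration branch can be sketched in NumPy under simplifying assumptions: $Conv_2$ and $Conv_3$ are replaced by 1x1 convolutions (plain channel-mixing matrices), and nearest-neighbour repetition stands in for the bilinear upsampling $Up$. None of these simplifications are from the paper; they only illustrate the pool-project-upsample-gate flow.

```python
import numpy as np

def self_calibrate(X1, W2, W3, r=2):
    """Sketch of the self-calibration branch on a channels-last (H, W, C) map."""
    H, W, C = X1.shape
    # AvgPool_r: non-overlapping r x r average pooling.
    T1 = X1.reshape(H // r, r, W // r, r, C).mean(axis=(1, 3))
    # Up(Conv_2(T1)): 1x1 "conv" then nearest-neighbour upsampling.
    X1p = np.repeat(np.repeat(T1 @ W2, r, axis=0), r, axis=1)
    # sigma(X1 + X1'): the calibration gate.
    sig = 1 / (1 + np.exp(-(X1 + X1p)))
    # Conv_3(X1) elementwise-multiplied by the gate gives Y'_1.
    return (X1 @ W3) * sig
```

The gate value at each location depends on the pooled (larger-receptive-field) context, which is how the branch "calibrates" the standard convolution output.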
Given the following machine learning model name: Spatial-Reduction Attention, provide a description of the model
**Spatial-Reduction Attention**, or **SRA**, is a [multi-head attention](https://paperswithcode.com/method/multi-head-attention) module used in the [Pyramid Vision Transformer](https://paperswithcode.com/method/pvt) architecture which reduces the spatial scale of the key $K$ and value $V$ before the attention operation. This reduces the computational/memory overhead. Details of the SRA in Stage $i$ can be formulated as follows: $$ \text{SRA}(Q, K, V)=\text{Concat}\left(\operatorname{head}\_{0}, \ldots, \operatorname{head}\_{N\_{i}}\right) W^{O} $$ $$\text{head}\_{j}=\text{Attention}\left(Q W\_{j}^{Q}, \operatorname{SR}(K) W\_{j}^{K}, \operatorname{SR}(V) W\_{j}^{V}\right) $$ where $\text{Concat}(\cdot)$ is the concatenation operation. $W\_{j}^{Q} \in \mathbb{R}^{C\_{i} \times d\_{\text{head}}}$, $W\_{j}^{K} \in \mathbb{R}^{C\_{i} \times d\_{\text{head}}}$, $W\_{j}^{V} \in \mathbb{R}^{C\_{i} \times d\_{\text{head}}}$, and $W^{O} \in \mathbb{R}^{C\_{i} \times C\_{i}}$ are linear projection parameters. $N\_{i}$ is the number of heads of the attention layer in Stage $i$, so the dimension of each head (i.e. $d\_{\text{head}}$) is equal to $\frac{C\_{i}}{N\_{i}}$. $\text{SR}(\cdot)$ is the operation for reducing the spatial dimension of the input sequence ($K$ or $V$), which is written as: $$ \text{SR}(\mathbf{x})=\text{Norm}\left(\operatorname{Reshape}\left(\mathbf{x}, R\_{i}\right) W^{S}\right) $$ Here, $\mathbf{x} \in \mathbb{R}^{\left(H\_{i} W\_{i}\right) \times C\_{i}}$ represents an input sequence, and $R\_{i}$ denotes the reduction ratio of the attention layers in Stage $i$. $\operatorname{Reshape}\left(\mathbf{x}, R\_{i}\right)$ is an operation that reshapes the input sequence $\mathbf{x}$ to a sequence of size $\frac{H\_{i} W\_{i}}{R\_{i}^{2}} \times\left(R\_{i}^{2} C\_{i}\right)$. $W^{S} \in \mathbb{R}^{\left(R\_{i}^{2} C\_{i}\right) \times C\_{i}}$ is a linear projection that reduces the dimension of the input sequence to $C\_{i}$. 
$\text{Norm}(\cdot)$ refers to layer normalization.
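The reshape-and-project step $\text{SR}(\cdot)$ can be sketched in NumPy (the trailing LayerNorm is omitted for brevity; the function name and argument layout are illustrative):

```python
import numpy as np

def spatial_reduction(x, H, W, R, W_s):
    """Reshape an (H*W, C) token sequence into (H*W/R^2, R^2*C) blocks of
    R x R neighbouring tokens, then project each block back to C channels."""
    HW, C = x.shape
    blocks = (x.reshape(H // R, R, W // R, R, C)   # regroup into R x R patches
                .transpose(0, 2, 1, 3, 4)
                .reshape((H * W) // (R * R), R * R * C))
    return blocks @ W_s                            # W_s: (R^2 * C, C)
```

After this reduction, attention between the full-length queries and the shortened keys/values costs a factor $R\_{i}^{2}$ less than standard self-attention.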
Given the following machine learning model name: Tokens-To-Token Vision Transformer, provide a description of the model
**T2T-ViT** (Tokens-To-Token Vision Transformer) is a type of [Vision Transformer](https://paperswithcode.com/method/vision-transformer) which incorporates 1) a layerwise Tokens-to-Token (T2T) transformation that progressively structurizes the image into tokens by recursively aggregating neighboring tokens into one token, such that local structure represented by surrounding tokens can be modeled and the token length can be reduced; and 2) an efficient backbone with a deep-narrow structure for vision [transformer](https://paperswithcode.com/method/transformer), motivated by CNN architecture design after empirical study.
Given the following machine learning model name: Tacotron2, provide a description of the model
**Tacotron 2** is a neural network architecture for speech synthesis directly from text. It consists of two components: - a recurrent sequence-to-sequence feature prediction network with attention which predicts a sequence of mel spectrogram frames from an input character sequence - a modified version of [WaveNet](https://paperswithcode.com/method/wavenet) which generates time-domain waveform samples conditioned on the predicted mel spectrogram frames In contrast to the original [Tacotron](https://paperswithcode.com/method/tacotron), Tacotron 2 uses simpler building blocks, using vanilla [LSTM](https://paperswithcode.com/method/lstm) and convolutional layers in the encoder and decoder instead of [CBHG](https://paperswithcode.com/method/cbhg) stacks and [GRU](https://paperswithcode.com/method/gru) recurrent layers. Tacotron 2 does not use a “reduction factor”, i.e., each decoder step corresponds to a single spectrogram frame. Location-sensitive attention is used instead of [additive attention](https://paperswithcode.com/method/additive-attention).
Given the following machine learning model name: Hierarchical Feature Fusion, provide a description of the model
**Hierarchical Feature Fusion (HFF)** is a feature fusion method employed in [ESP](https://paperswithcode.com/method/esp) and [EESP](https://paperswithcode.com/method/eesp) image model blocks for degridding. In the ESP module, concatenating the outputs of dilated convolutions gives the ESP module a large effective receptive field, but it introduces unwanted checkerboard or gridding artifacts. To address the gridding artifact in ESP, the feature maps obtained using kernels of different dilation rates are hierarchically added before concatenating them (HFF). This solution is simple and effective and does not increase the complexity of the ESP module.
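The hierarchical addition before concatenation is a one-loop operation. Below is a minimal NumPy sketch, with channels-last arrays standing in for the feature maps of the dilated-convolution branches (an assumption of this illustration, not the actual ESP implementation):

```python
import numpy as np

def hff(branches):
    """HFF: cumulatively add branch outputs (ordered by increasing dilation
    rate) before concatenating them along the channel axis."""
    fused = [branches[0]]
    for b in branches[1:]:
        fused.append(fused[-1] + b)   # each branch absorbs all smaller rates
    return np.concatenate(fused, axis=-1)
```

Because every larger-dilation branch is summed with the already-fused smaller-dilation outputs, the gaps that cause gridding artifacts are filled in before concatenation, at the cost of only a few additions.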
Given the following machine learning model name: Bottleneck Attention Module, provide a description of the model
Park et al. proposed the bottleneck attention module (BAM), aiming to efficiently improve the representational capability of networks. It uses dilated convolution to enlarge the receptive field of the spatial attention sub-module, and build a bottleneck structure as suggested by ResNet to save computational cost. For a given input feature map $X$, BAM infers the channel attention $s_c \in \mathbb{R}^C$ and spatial attention $s_s\in \mathbb{R}^{H\times W}$ in two parallel streams, then sums the two attention maps after resizing both branch outputs to $\mathbb{R}^{C\times H \times W}$. The channel attention branch, like an SE block, applies global average pooling to the feature map to aggregate global information, and then uses an MLP with channel dimensionality reduction. In order to utilize contextual information effectively, the spatial attention branch combines a bottleneck structure and dilated convolutions. Overall, BAM can be written as \begin{align} s_c &= \text{BN}(W_2(W_1\text{GAP}(X)+b_1)+b_2) \end{align} \begin{align} s_s &= BN(Conv_2^{1 \times 1}(DC_2^{3\times 3}(DC_1^{3 \times 3}(Conv_1^{1 \times 1}(X))))) \end{align} \begin{align} s &= \sigma(\text{Expand}(s_s)+\text{Expand}(s_c)) \end{align} \begin{align} Y &= s X+X \end{align} where $W_i$, $b_i$ denote weights and biases of fully connected layers respectively, $Conv_{1}^{1\times 1}$ and $Conv_{2}^{1\times 1}$ are convolution layers used for channel reduction. $DC_i^{3\times 3}$ denotes a dilated convolution with $3\times 3$ kernel, applied to utilize contextual information effectively. $\text{Expand}$ expands the attention maps $s_s$ and $s_c$ to $\mathbb{R}^{C\times H\times W}$. BAM can emphasize or suppress features in both spatial and channel dimensions, as well as improving the representational power. Dimensional reduction applied to both channel and spatial attention branches enables it to be integrated with any convolutional neural network with little extra computational cost. 
However, although dilated convolutions enlarge the receptive field effectively, they still fail to capture long-range contextual information and to encode cross-domain relationships.
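As a rough illustration of how the two branches recombine, here is a minimal numpy sketch. The dilated-convolution spatial branch is stubbed out as a precomputed logit map `s_s`, and batch normalization is omitted, so all names and shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bam_attention(X, W1, b1, W2, b2, s_s):
    """Recombine channel and spatial attention, BAM-style.

    X   : (C, H, W) input feature map
    W1  : (C//r, C) and W2 : (C, C//r), the bottleneck channel MLP
    s_s : (H, W) spatial attention logits, standing in for the
          dilated-convolution branch (BatchNorm omitted)
    """
    # Channel branch: global average pooling + bottleneck MLP
    gap = X.mean(axis=(1, 2))                          # (C,)
    s_c = W2 @ np.maximum(W1 @ gap + b1, 0.0) + b2     # (C,)
    # Expand both maps to (C, H, W), sum, then squash
    s = sigmoid(s_c[:, None, None] + s_s[None, :, :])
    # Residual recombination: Y = s * X + X
    return s * X + X

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
X = np.abs(rng.normal(size=(C, H, W))) + 1e-6          # positive features
W1, b1 = rng.normal(size=(C // r, C)), np.zeros(C // r)
W2, b2 = rng.normal(size=(C, C // r)), np.zeros(C)
Y = bam_attention(X, W1, b1, W2, b2, rng.normal(size=(H, W)))
```

Since $\sigma$ keeps the joint attention map in $(0, 1)$, the residual form $Y = sX + X$ can only rescale each feature between one and two times its input value here.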
Given the following machine learning model name: Contrastive Predictive Coding, provide a description of the model
**Contrastive Predictive Coding (CPC)** learns self-supervised representations by predicting the future in latent space by using powerful autoregressive models. The model uses a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. First, a non-linear encoder $g\_{enc}$ maps the input sequence of observations $x\_{t}$ to a sequence of latent representations $z\_{t} = g\_{enc}\left(x\_{t}\right)$, potentially with a lower temporal resolution. Next, an autoregressive model $g\_{ar}$ summarizes all $z\_{\leq t}$ in the latent space and produces a context latent representation $c\_{t} = g\_{ar}\left(z\_{\leq t}\right)$. A density ratio is modelled which preserves the mutual information between $x\_{t+k}$ and $c\_{t}$ as follows: $$ f\_{k}\left(x\_{t+k}, c\_{t}\right) \propto \frac{p\left(x\_{t+k}|c\_{t}\right)}{p\left(x\_{t+k}\right)} $$ where $\propto$ stands for ’proportional to’ (i.e. up to a multiplicative constant). Note that the density ratio $f$ can be unnormalized (does not have to integrate to 1). The authors use a simple log-bilinear model: $$ f\_{k}\left(x\_{t+k}, c\_{t}\right) = \exp\left(z^{T}\_{t+k}W\_{k}c\_{t}\right) $$ Any type of encoder and autoregressive model can be used; the authors opt for strided convolutional layers with residual blocks and GRUs. The encoder and autoregressive model are trained to minimize an [InfoNCE](https://paperswithcode.com/method/infonce) loss (see components).
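The log-bilinear score and the InfoNCE objective it feeds into can be sketched in a few lines of numpy; the shapes and the single-positive layout below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def info_nce(z_future, c_t, W_k):
    """CPC-style score f_k = exp(z^T W_k c) fed into InfoNCE.

    z_future : (N, D) candidate future encodings; row 0 is the
               positive z_{t+k}, the rest are negatives
    c_t      : (D,)   context vector from the autoregressive model
    W_k      : (D, D) step-k bilinear weight
    """
    scores = z_future @ (W_k @ c_t)       # log f_k for each candidate
    scores = scores - scores.max()        # numerical stability
    # -log softmax probability assigned to the positive sample
    return -(scores[0] - np.log(np.exp(scores).sum()))

rng = np.random.default_rng(0)
D, N = 16, 8
loss = info_nce(rng.normal(size=(N, D)), rng.normal(size=D),
                rng.normal(size=(D, D)))
```

Minimizing this loss pushes the positive pair's score above the negatives', which is what ties the learned density ratio to the mutual information between $x\_{t+k}$ and $c\_{t}$.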
Given the following machine learning model name: WaveRNN, provide a description of the model
**WaveRNN** is a single-layer recurrent neural network for audio generation that is designed to efficiently predict 16-bit raw audio samples. The overall computation in the WaveRNN is as follows (biases omitted for brevity): $$ \mathbf{x}\_{t} = \left[\mathbf{c}\_{t−1},\mathbf{f}\_{t−1}, \mathbf{c}\_{t}\right] $$ $$ \mathbf{u}\_{t} = \sigma\left(\mathbf{R}\_{u}\mathbf{h}\_{t-1} + \mathbf{I}^{*}\_{u}\mathbf{x}\_{t}\right) $$ $$ \mathbf{r}\_{t} = \sigma\left(\mathbf{R}\_{r}\mathbf{h}\_{t-1} + \mathbf{I}^{*}\_{r}\mathbf{x}\_{t}\right) $$ $$ \mathbf{e}\_{t} = \tau\left(\mathbf{r}\_{t} \odot \left(\mathbf{R}\_{e}\mathbf{h}\_{t-1}\right) + \mathbf{I}^{*}\_{e}\mathbf{x}\_{t} \right) $$ $$ \mathbf{h}\_{t} = \mathbf{u}\_{t} \cdot \mathbf{h}\_{t-1} + \left(1-\mathbf{u}\_{t}\right) \cdot \mathbf{e}\_{t} $$ $$ \mathbf{y}\_{c}, \mathbf{y}\_{f} = \text{split}\left(\mathbf{h}\_{t}\right) $$ $$ P\left(\mathbf{c}\_{t}\right) = \text{softmax}\left(\mathbf{O}\_{2}\text{relu}\left(\mathbf{O}\_{1}\mathbf{y}\_{c}\right)\right) $$ $$ P\left(\mathbf{f}\_{t}\right) = \text{softmax}\left(\mathbf{O}\_{4}\text{relu}\left(\mathbf{O}\_{3}\mathbf{y}\_{f}\right)\right) $$ where the $*$ indicates a masked matrix whereby the last coarse input $\mathbf{c}\_{t}$ is only connected to the fine part of the states $\mathbf{u}\_{t}$, $\mathbf{r}\_{t}$, $\mathbf{e}\_{t}$ and $\mathbf{h}\_{t}$ and thus only affects the fine output $\mathbf{y}\_{f}$. The coarse and fine parts $\mathbf{c}\_{t}$ and $\mathbf{f}\_{t}$ are encoded as scalars in $\left[0, 255\right]$ and scaled to the interval $\left[−1, 1\right]$. The matrix $\mathbf{R}$ formed from the matrices $\mathbf{R}\_{u}$, $\mathbf{R}\_{r}$, $\mathbf{R}\_{e}$ is computed as a single matrix-vector product to produce the contributions to all three gates $\mathbf{u}\_{t}$, $\mathbf{r}\_{t}$ and $\mathbf{e}\_{t}$ (a variant of the [GRU cell](https://paperswithcode.com/method/gru)). $\sigma$ and $\tau$ are the standard sigmoid and tanh non-linearities.
Each part feeds into a [softmax](https://paperswithcode.com/method/softmax) layer over the corresponding 8 bits and the prediction of the 8 fine bits is conditioned on the 8 coarse bits. The resulting Dual Softmax layer allows for efficient prediction of 16-bit samples using two small output spaces ($2^{8}$ values each) instead of a single large output space (with $2^{16}$ values).
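The coarse/fine factorization itself is simple byte splitting; the following small sketch (names hypothetical) shows how a 16-bit sample maps to the two 8-bit softmax targets and back, and how each byte is scaled to $\left[−1, 1\right]$ for the network input.

```python
def split_sample(sample):
    """Split an unsigned 16-bit sample into its coarse (high) and
    fine (low) bytes, each a scalar in [0, 255]."""
    coarse = sample >> 8
    fine = sample & 0xFF
    return coarse, fine

def join_sample(coarse, fine):
    """Recombine the two predicted bytes into a 16-bit sample."""
    return (coarse << 8) | fine

def to_unit_range(x):
    """Scale a byte in [0, 255] to the interval [-1, 1]."""
    return 2.0 * x / 255.0 - 1.0

c, f = split_sample(0xABCD)
```

Because the fine softmax is conditioned on the already-sampled coarse byte, the two 256-way distributions jointly cover all $2^{16}$ sample values.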
Given the following machine learning model name: Grouped-query attention, provide a description of the model
**Grouped-query attention** is an interpolation of multi-query and multi-head attention that achieves quality close to multi-head attention at a speed comparable to multi-query attention.
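A toy numpy sketch of the idea (all shapes hypothetical): each group of query heads shares a single key/value head, so setting the number of groups equal to the number of heads recovers multi-head attention, while a single group recovers multi-query attention.

```python
import numpy as np

def grouped_query_attention(Q, K, V):
    """Toy grouped-query attention over one sequence.

    Q    : (n_heads, T, d) query heads
    K, V : (n_groups, T, d) shared key/value heads,
           with n_heads divisible by n_groups
    """
    n_heads, T, d = Q.shape
    n_groups = K.shape[0]
    heads_per_group = n_heads // n_groups
    out = np.empty_like(Q)
    for h in range(n_heads):
        g = h // heads_per_group              # K/V head shared by the group
        scores = Q[h] @ K[g].T / np.sqrt(d)   # (T, T) attention logits
        scores -= scores.max(axis=-1, keepdims=True)
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)
        out[h] = attn @ V[g]
    return out

rng = np.random.default_rng(0)
out = grouped_query_attention(rng.normal(size=(8, 5, 4)),
                              rng.normal(size=(2, 5, 4)),
                              rng.normal(size=(2, 5, 4)))
```

The speedup in practice comes from the smaller K/V cache: only `n_groups` key/value heads need to be stored and loaded during decoding instead of `n_heads`.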
Given the following machine learning model name: StyleGAN, provide a description of the model
**StyleGAN** is a type of generative adversarial network. It uses an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature; in particular, the use of [adaptive instance normalization](https://paperswithcode.com/method/adaptive-instance-normalization). Otherwise it follows Progressive [GAN](https://paperswithcode.com/method/gan) in using a progressively growing training regime. Another quirk is that it generates from a fixed value tensor rather than from stochastically generated latent variables as in regular GANs. The stochastically generated latent variables are instead used as style vectors in the adaptive [instance normalization](https://paperswithcode.com/method/instance-normalization) at each resolution, after being transformed by an 8-layer [feedforward network](https://paperswithcode.com/method/feedforward-network). Lastly, it employs a form of regularization called mixing regularization, which mixes two style latent variables during training.
Given the following machine learning model name: Multi-band MelGAN, provide a description of the model
**Multi-band MelGAN**, or **MB-MelGAN**, is a waveform generation model focusing on high-quality text-to-speech. It improves the original [MelGAN](https://paperswithcode.com/method/melgan) in several ways. First, it increases the receptive field of the generator, which is proven to be beneficial to speech generation. Second, it substitutes the feature matching loss with the multi-resolution STFT loss to better measure the difference between fake and real speech. Lastly, [MelGAN](https://paperswithcode.com/method/melgan) is extended with multi-band processing: the generator takes mel-spectrograms as input and produces sub-band signals which are subsequently summed back to full-band signals as discriminator input.
Given the following machine learning model name: pixel2style2pixel, provide a description of the model
**Pixel2Style2Pixel**, or **pSp**, is an image-to-image translation framework that is based on a novel encoder that directly generates a series of style vectors which are fed into a pretrained [StyleGAN](https://paperswithcode.com/method/stylegan) generator, forming the extended $\mathcal{W+}$ latent space. Feature maps are first extracted using a standard feature pyramid over a [ResNet](https://paperswithcode.com/method/resnet) backbone. Then, for each of $18$ target styles, a small mapping network is trained to extract the learned styles from the corresponding feature map, where styles $(0-2)$ are generated from the small feature map, $(3-6)$ from the medium feature map, and $(7-18)$ from the largest feature map. The mapping network, map2style, is a small fully convolutional network, which gradually reduces spatial size using a set of 2-strided convolutions followed by [LeakyReLU](https://paperswithcode.com/method/leaky-relu) activations. Each generated 512 vector, is fed into [StyleGAN](https://paperswithcode.com/method/stylegan), starting from its matching affine transformation, $A$.
Given the following machine learning model name: Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets, provide a description of the model
To obtain excellent deep neural architectures, a series of techniques are carefully designed in EfficientNets. The giant formula for simultaneously enlarging the resolution, depth and width provides us a Rubik's cube for neural networks, so that we can find networks with high efficiency and excellent performance by twisting the three dimensions. This paper aims to explore the twisting rules for obtaining deep neural networks with minimum model sizes and computational costs. Different from network enlarging, we observe that resolution and depth are more important than width for tiny networks. Therefore, the original method, i.e., the compound scaling in [EfficientNet](https://paperswithcode.com/method/efficientnet), is no longer suitable. To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from EfficientNet-B0 under a FLOPs constraint. Experimental results on the ImageNet benchmark illustrate that our TinyNet performs much better than the smaller versions of EfficientNets obtained by inverting the giant formula. For instance, our TinyNet-E achieves a 59.9% Top-1 accuracy with only 24M FLOPs, which is about 1.9% higher than that of the previous best [MobileNetV3](https://paperswithcode.com/method/mobilenetv3) with a similar computational cost.
Given the following machine learning model name: ResNeXt, provide a description of the model
A **ResNeXt** repeats a building block that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width. Formally, a set of aggregated transformations can be represented as: $\mathcal{F}(x)=\sum_{i=1}^{C}\mathcal{T}_i(x)$, where $\mathcal{T}_i(x)$ can be an arbitrary function. Analogous to a simple neuron, $\mathcal{T}_i$ should project $x$ into an (optionally low-dimensional) embedding and then transform it.
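The aggregated transformation can be sketched in numpy; the hypothetical linear down-project/ReLU/up-project branches below stand in for the actual convolutional bottleneck paths of the building block.

```python
import numpy as np

def resnext_unit(x, branches):
    """y = x + sum_i T_i(x) over C parallel branches of identical topology.

    Each branch is a hypothetical down-projection / ReLU / up-projection
    pair, standing in for the real convolutional path.
    """
    out = np.zeros_like(x)
    for W_down, W_up in branches:
        out += W_up @ np.maximum(W_down @ x, 0.0)
    return x + out                 # residual connection around the aggregate

rng = np.random.default_rng(0)
d, d_low, C = 16, 4, 32            # cardinality C = 32, as in the paper
x = rng.normal(size=d)
branches = [(rng.normal(size=(d_low, d)), rng.normal(size=(d, d_low)))
            for _ in range(C)]
y = resnext_unit(x, branches)
```

Because every branch shares the same topology, cardinality $C$ can be raised without extra design effort, unlike adding ad-hoc layers of differing widths.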
Given the following machine learning model name: OODformer, provide a description of the model
OODformer is a [transformer](https://paperswithcode.com/method/transformer)-based OOD detection architecture that leverages the contextualization capabilities of the transformer. Incorporating the transformer as the principal feature extractor allows it to exploit the object concepts and their discriminative attributes along with their co-occurrence via [visual attention](https://paperswithcode.com/method/visual-attention). OODformer employs [ViT](https://paperswithcode.com/method/vision-transformer) and its data-efficient variant [DeiT](https://paperswithcode.com/method/deit). Each encoder layer consists of a multi-head self-attention (MSA) block and a multi-layer perceptron (MLP) block. The combination of MSA and MLP layers in the encoder jointly encodes the attributes' importance, associated correlation, and co-occurrence. The [class] token (a representative of an image $x$) consolidates multiple attributes and their related features via the global context. The [class] token from the final layer is used for OOD detection in two ways: first, it is passed to $F_{\text{classifier}}\left(x_{\text{feat}}\right)$ for a softmax confidence score, and second, it is used for latent-space distance calculation.
Given the following machine learning model name: Principal Neighbourhood Aggregation, provide a description of the model
**Principal Neighbourhood Aggregation** (PNA) is a general and flexible architecture for graphs combining multiple aggregators with degree-scalers (which generalize the sum aggregator).
Given the following machine learning model name: GradientDICE, provide a description of the model
**GradientDICE** is a density ratio learning method for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. It optimizes a different objective from [GenDICE](https://arxiv.org/abs/2002.09072) by using the Perron-Frobenius theorem and eliminating GenDICE’s use of divergence, such that nonlinearity in parameterization is not necessary for GradientDICE, which is provably convergent under linear function approximation.
Given the following machine learning model name: HRNet, provide a description of the model
**HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution [convolution](https://paperswithcode.com/method/convolution) stream, gradually add high-to-low resolution convolution streams one by one, and connect the multi-resolution streams in parallel. The resulting network consists of several ($4$ in the paper) stages and the $n$th stage contains $n$ streams corresponding to $n$ resolutions. The authors conduct repeated multi-resolution fusions by exchanging the information across the parallel streams over and over.
Given the following machine learning model name: mT5, provide a description of the model
**mT5** is a multilingual variant of [T5](https://paperswithcode.com/method/t5) that was pre-trained on a new Common Crawl-based dataset covering $101$ languages.
Given the following machine learning model name: Data augmentation using Polya-Gamma latent variables., provide a description of the model
This method applies Polya-Gamma latent variables as a way to obtain closed form expressions for full-conditionals of posterior distributions in sampling algorithms like MCMC.
Given the following machine learning model name: WideResNet, provide a description of the model
**Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of wide residual blocks.
Given the following machine learning model name: GShard, provide a description of the model
**GShard** is an intra-layer parallel distributed training method. It consists of a set of simple APIs for annotations, and a compiler extension in XLA for automatic parallelization.
Given the following machine learning model name: GrowNet, provide a description of the model
**GrowNet** is a novel approach to combine the power of gradient boosting to incrementally build complex deep neural networks out of shallow components. It introduces a versatile framework that can readily be adapted for a diverse range of machine learning tasks in a wide variety of domains.
Given the following machine learning model name: Gradient Quantization with Adaptive Levels/Multiplier, provide a description of the model
Many communication-efficient variants of [SGD](https://paperswithcode.com/method/sgd) use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters.
Given the following machine learning model name: Multi-head of Mixed Attention, provide a description of the model
A multi-head attention module whose heads combine both self-attention and cross-attention.
Given the following machine learning model name: Lightweight Convolution, provide a description of the model
**LightConv** is a type of [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) for sequential modelling which shares certain output channels and whose weights are normalized across the temporal dimension using a [softmax](https://paperswithcode.com/method/softmax). Compared to self-attention, LightConv has a fixed context window and it determines the importance of context elements with a set of weights that do not change over time steps. LightConv computes the following for the $i$-th element in the sequence and output channel $c$: $$ \text{LightConv}\left(X, W\_{\text{ceil}\left(\frac{cH}{d}\right),:}, i, c\right) = \text{DepthwiseConv}\left(X,\text{softmax}\left(W\_{\text{ceil}\left(\frac{cH}{d}\right),:}\right), i, c\right) $$
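A minimal numpy sketch of the weight sharing and softmax normalization follows; the zero-padding scheme and the 0-based channel-to-head assignment are illustrative assumptions rather than the reference implementation.

```python
import numpy as np

def light_conv(X, W):
    """LightConv for a single sequence.

    X : (T, d) input sequence
    W : (H, k) raw kernel weights, with H << d heads of width k;
        channel c reuses kernel row (c * H) // d (weight sharing)
    """
    T, d = X.shape
    H, k = W.shape
    # Normalize each shared kernel over the temporal dimension
    Wn = np.exp(W - W.max(axis=1, keepdims=True))
    Wn /= Wn.sum(axis=1, keepdims=True)
    # 'Same'-style zero padding so the output keeps length T
    pad = k // 2
    Xp = np.pad(X, ((pad, k - 1 - pad), (0, 0)))
    out = np.zeros_like(X)
    for c in range(d):
        row = (c * H) // d                     # shared-weight assignment
        for t in range(T):
            out[t, c] = Wn[row] @ Xp[t:t + k, c]
    return out

rng = np.random.default_rng(0)
X = np.ones((10, 8))
W = rng.normal(size=(2, 3))
out = light_conv(X, W)
```

Since each normalized kernel sums to one, interior positions of a constant input pass through unchanged, while boundary positions are attenuated by the zero padding; the fixed window of size `k` is what distinguishes this from the unbounded context of self-attention.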
Given the following machine learning model name: PointRend, provide a description of the model
**PointRend** is a module for image segmentation tasks, such as instance and semantic segmentation, that attempts to treat segmentation as an image rendering problem to efficiently "render" high-quality label maps. It uses a subdivision strategy to adaptively select a non-uniform set of points at which to compute labels. PointRend can be incorporated into popular meta-architectures for both instance segmentation (e.g. [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn)) and semantic segmentation (e.g. [FCN](https://paperswithcode.com/method/fcn)). Its subdivision strategy efficiently computes high-resolution segmentation maps using an order of magnitude fewer floating-point operations than direct, dense computation. PointRend is a general module that admits many possible implementations. Viewed abstractly, a PointRend module accepts one or more typical CNN feature maps $f\left(x\_{i}, y\_{i}\right)$ that are defined over regular grids, and outputs high-resolution predictions $p\left(x^{'}\_{i}, y^{'}\_{i}\right)$ over a finer grid. Instead of making excessive predictions over all points on the output grid, PointRend makes predictions only on carefully selected points. To make these predictions, it extracts a point-wise feature representation for the selected points by interpolating $f$, and uses a small point head subnetwork to predict output labels from the point-wise features.
Given the following machine learning model name: Sample Redistribution, provide a description of the model
**Sample Redistribution** is a [data augmentation](https://paperswithcode.com/methods/category/image-data-augmentation) technique for face detection which augments training samples based on the statistics of benchmark datasets via large-scale cropping. During training data augmentation, square patches are cropped from the original images with a random size from the set $[0.3,1.0]$ of the short edge of the original images. To generate more positive samples for stride 8, the random size range is enlarged from $[0.3,1.0]$ to $[0.3,2.0]$. When the crop box is beyond the original image, average RGB values fill the missing pixels. The motivation is that for efficient [face detection](https://paperswithcode.com/task/face-detection) under a fixed VGA resolution (i.e. 640×480), most of the faces (78.93%) in [WIDER FACE](https://paperswithcode.com/dataset/wider-face-1) are smaller than 32×32 pixels, and thus they are predicted by shallow stages. To obtain more training samples for these shallow stages, Sample Redistribution (SR) is used.
Given the following machine learning model name: Single Headed Attention RNN, provide a description of the model
**SHA-RNN**, or **Single Headed Attention RNN**, is a recurrent neural network, and language model when combined with an embedding input and [softmax](https://paperswithcode.com/method/softmax) classifier, based on a core [LSTM](https://paperswithcode.com/method/lstm) component and a [single-headed attention](https://paperswithcode.com/method/single-headed-attention) module. Other design choices include a Boom feedforward layer and the use of [layer normalization](https://paperswithcode.com/method/layer-normalization). The guiding principles of the author were to ensure simplicity in the architecture and to keep computational costs bounded (the model was originally trained with a single GPU).
Given the following machine learning model name: Flan-T5, provide a description of the model
**Flan-T5** is the instruction fine-tuned version of **T5** or **Text-to-Text Transfer Transformer** Language Model.
Given the following machine learning model name: Logistic Regression, provide a description of the model
**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function. Source: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression) Image: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)
Given the following machine learning model name: DeepLab, provide a description of the model
**DeepLab** is a semantic segmentation architecture. First, the input image goes through the network with the use of dilated convolutions. Then the output from the network is bilinearly interpolated and passed through a fully connected [CRF](https://paperswithcode.com/method/crf) to fine-tune the result and obtain the final predictions.
Given the following machine learning model name: Growing Cosine Unit, provide a description of the model
The **Growing Cosine Unit** is an oscillatory activation function defined as $f(x) = x \cdot \cos(x)$ that reports better performance than Sigmoid, Mish, Swish, and ReLU on several benchmarks.
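The function is a one-liner; the sketch below just makes its oscillatory character concrete.

```python
import numpy as np

def gcu(x):
    """Growing Cosine Unit: f(x) = x * cos(x)."""
    return x * np.cos(x)

y = gcu(np.array([0.0, np.pi / 2, np.pi]))
```

Unlike monotone activations such as ReLU, the output changes sign as $x$ grows (e.g. $f(\pi) = -\pi$), which is the oscillation the name refers to.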
Given the following machine learning model name: Gradient Checkpointing, provide a description of the model
**Gradient Checkpointing** is a method used for reducing the memory footprint when training deep neural networks, at the cost of having a small increase in computation time.
Given the following machine learning model name: SRGAN Residual Block, provide a description of the model
**SRGAN Residual Block** is a residual block used in the [SRGAN](https://paperswithcode.com/method/srgan) generator for image super-resolution. It is similar to standard [residual blocks](https://paperswithcode.com/method/residual-block), although it uses a [PReLU](https://paperswithcode.com/method/prelu) activation function to help training (preventing sparse gradients during [GAN](https://paperswithcode.com/method/gan) training).
Given the following machine learning model name: NoisyNet-DQN, provide a description of the model
**NoisyNet-DQN** is a modification of a [DQN](https://paperswithcode.com/method/dqn) that utilises noisy linear layers for exploration instead of $\epsilon$-greedy exploration as in the original DQN formulation.
Given the following machine learning model name: Wasserstein GAN, provide a description of the model
**Wasserstein GAN**, or **WGAN**, is a type of generative adversarial network that minimizes an approximation of the Earth-Mover's distance (EM) rather than the Jensen-Shannon divergence as in the original [GAN](https://paperswithcode.com/method/gan) formulation. It leads to more stable training than original GANs with less evidence of mode collapse, as well as meaningful curves that can be used for debugging and searching hyperparameters.
Given the following machine learning model name: Mutual Information Machine/Mask Image Modeling, provide a description of the model
Given the following machine learning model name: Hardtanh Activation, provide a description of the model
**Hardtanh** is an activation function used for neural networks: $$ f\left(x\right) = -1 \text{ if } x < - 1 $$ $$ f\left(x\right) = x \text{ if } -1 \leq x \leq 1 $$ $$ f\left(x\right) = 1 \text{ if } x > 1 $$ It is a cheaper and more computationally efficient version of the [tanh activation](https://paperswithcode.com/method/tanh-activation). Image Source: [Zhuan Lan](https://zhuanlan.zhihu.com/p/30385380)
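The piecewise definition above reduces to a single clip operation, as this small sketch shows.

```python
import numpy as np

def hardtanh(x):
    """Piecewise-linear stand-in for tanh: clip values to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

y = hardtanh(np.array([-5.0, -0.3, 0.5, 3.0]))
```

The function is exact for inputs in $[-1, 1]$ and saturates outside, trading tanh's smoothness for a cheaper computation.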
Given the following machine learning model name: MoBY, provide a description of the model
**MoBY** is a self-supervised learning approach for [Vision Transformers](https://paperswithcode.com/methods/category/vision-transformer). The approach is basically a combination of [MoCo v2](https://paperswithcode.com/method/moco-v2) and [BYOL](https://paperswithcode.com/method/byol). It inherits the momentum design, the key queue, and the contrastive loss used in MoCo v2, and inherits the asymmetric encoders, asymmetric data augmentations and the momentum scheduler in BYOL. It is named MoBY by picking the first two letters of each method. The MoBY approach is illustrated in the Figure. There are two encoders: an online encoder and a target encoder. Both encoders consist of a backbone and a projector head ([2-layer MLP](https://paperswithcode.com/method/feedforward-network)), and the online encoder introduces an additional prediction head (2-layer MLP), which makes the two encoders asymmetric. The online encoder is updated by gradients, and the target encoder is a moving average of the online encoder by momentum updating in each training iteration. A gradually increasing momentum updating strategy is applied to the target encoder: the value of the momentum term is gradually increased to 1 during the course of training. The default starting value is $0.99$. A contrastive loss is applied to learn the representations. Specifically, for an online view $q$, its contrastive loss is computed as $$ \mathcal{L}\_{q}=-\log \frac{\exp \left(q \cdot k\_{+} / \tau\right)}{\sum\_{i=0}^{K} \exp \left(q \cdot k\_{i} / \tau\right)} $$ where $k\_{+}$ is the target feature for the other view of the same image; $k\_{i}$ is a target feature in the key queue; $\tau$ is a temperature term; $K$ is the size of the key queue (4096 by default).
In training, like most [Transformer-based methods](https://paperswithcode.com/methods/category/transformers), the [AdamW](https://paperswithcode.com/method/adamw) optimizer is used, in contrast to previous [self-supervised learning approaches](https://paperswithcode.com/methods/category/self-supervised-learning) built on a [ResNet](https://paperswithcode.com/method/resnet) backbone, where usually [SGD](https://paperswithcode.com/method/sgd-with-momentum) or [LARS](https://paperswithcode.com/method/lars) is used. The authors also use a regularization method of asymmetric [drop path](https://paperswithcode.com/method/droppath) which proves important for the final performance. In the experiments, the authors adopt a fixed learning rate of $0.001$ and a fixed weight decay of $0.05$, which performs stably well. The tuned hyper-parameters are the key queue size $K$, the starting momentum value of the target branch, the temperature $\tau$, and the drop path rates.
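The queue-based contrastive loss $\mathcal{L}\_{q}$ can be sketched in numpy as follows; the feature dimension and queue size are arbitrary, and features are assumed to be L2-normalized as in typical contrastive setups.

```python
import numpy as np

def moby_contrastive_loss(q, k_pos, queue, tau=0.2):
    """Queue-based contrastive loss for one online view.

    q     : (d,)   online-view feature (assumed L2-normalized)
    k_pos : (d,)   target feature of the other view of the same image
    queue : (K, d) key queue of target features from past iterations
    """
    # Positive logit first, then the K queue (negative) logits
    logits = np.concatenate(([q @ k_pos], queue @ q)) / tau
    logits -= logits.max()                 # numerical stability
    # -log softmax probability of the positive key
    return -(logits[0] - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
d, K = 32, 64
normalize = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
q = normalize(rng.normal(size=d))
k_pos = normalize(rng.normal(size=d))
queue = normalize(rng.normal(size=(K, d)))
loss = moby_contrastive_loss(q, k_pos, queue)
```

In the full method this loss is computed symmetrically for both views, and the queue is refreshed with target features after each iteration.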
Given the following machine learning model name: GPT-2, provide a description of the model
**GPT-2** is a [Transformer](https://paperswithcode.com/methods/category/transformers) architecture that was notable for its size (1.5 billion parameters) on its release. The model is pretrained on a WebText dataset - text from 45 million website links. It largely follows the previous [GPT](https://paperswithcode.com/method/gpt) architecture with some modifications: - [Layer normalization](https://paperswithcode.com/method/layer-normalization) is moved to the input of each sub-block, similar to a pre-activation residual network and an additional layer normalization was added after the final self-attention block. - A modified initialization which accounts for the accumulation on the residual path with model depth is used. Weights of residual layers are scaled at initialization by a factor of $1/\sqrt{N}$ where $N$ is the number of residual layers. - The vocabulary is expanded to 50,257. The context size is expanded from 512 to 1024 tokens and a larger batch size of 512 is used.
Given the following machine learning model name: Cosine Annealing, provide a description of the model
**Cosine Annealing** is a type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again. The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of good weights as the starting point of the restart is referred to as a "warm restart" in contrast to a "cold restart" where a new set of small random numbers may be used as a starting point. $$\eta\_{t} = \eta\_{min}^{i} + \frac{1}{2}\left(\eta\_{max}^{i}-\eta\_{min}^{i}\right)\left(1+\cos\left(\frac{T\_{cur}}{T\_{i}}\pi\right)\right) $$ where $\eta\_{min}^{i}$ and $\eta\_{max}^{i}$ are ranges for the learning rate, and $T\_{cur}$ accounts for how many epochs have been performed since the last restart. Text Source: [Jason Brownlee](https://machinelearningmastery.com/snapshot-ensemble-deep-learning-neural-network/) Image Source: [Gao Huang](https://www.researchgate.net/figure/Training-loss-of-100-layer-DenseNet-on-CIFAR10-using-standard-learning-rate-blue-and-M_fig2_315765130)
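The schedule formula translates directly into code; the default $\eta\_{min} = 0$ and $\eta\_{max} = 0.1$ below are illustrative choices, not prescribed values.

```python
import math

def cosine_annealing(t_cur, T_i, eta_min=0.0, eta_max=0.1):
    """Learning rate at epoch t_cur within a cycle of length T_i."""
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1.0 + math.cos(math.pi * t_cur / T_i))

# One full cycle: starts at eta_max, decays smoothly to eta_min
lrs = [cosine_annealing(t, 10) for t in range(11)]
```

At the end of a cycle ($T\_{cur} = T\_{i}$) the rate reaches $\eta\_{min}$; a warm restart then resets $T\_{cur}$ to zero so the rate jumps back to $\eta\_{max}$.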
Given the following machine learning model name: ParaNet, provide a description of the model
**ParaNet** is a non-autoregressive attention-based architecture for text-to-speech, which is fully convolutional and converts text to mel spectrogram. ParaNet distills the attention from the autoregressive text-to-spectrogram model, and iteratively refines the alignment between text and spectrogram in a layer-by-layer manner. The architecture is otherwise similar to [Deep Voice 3](https://paperswithcode.com/method/deep-voice-3) except these changes to the decoder; whereas the decoder of DV3 has multiple attention-based layers, where each layer consists of a [causal convolution](https://paperswithcode.com/method/causal-convolution) block followed by an attention block, ParaNet has a single attention block in the encoder.
Given the following machine learning model name: Expected Sarsa, provide a description of the model
**Expected Sarsa** is like [Q-learning](https://paperswithcode.com/method/q-learning) but instead of taking the maximum over next state-action pairs, we use the expected value, taking into account how likely each action is under the current policy. $$Q\left(S\_{t}, A\_{t}\right) \leftarrow Q\left(S\_{t}, A\_{t}\right) + \alpha\left[R_{t+1} + \gamma\sum\_{a}\pi\left(a\mid{S\_{t+1}}\right)Q\left(S\_{t+1}, a\right) - Q\left(S\_{t}, A\_{t}\right)\right] $$ Except for this change to the update rule, the algorithm otherwise follows the scheme of Q-learning. It is more computationally expensive than [Sarsa](https://paperswithcode.com/method/sarsa) but it eliminates the variance due to the random selection of $A\_{t+1}$. Source: Sutton and Barto, Reinforcement Learning, 2nd Edition
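The update rule can be sketched for a tabular Q-function in numpy; the tiny 3-state, 2-action setup below is purely illustrative.

```python
import numpy as np

def expected_sarsa_update(Q, s, a, r, s_next, policy, alpha=0.1, gamma=0.99):
    """One Expected Sarsa backup on a tabular action-value function.

    Q      : (n_states, n_actions) action-value table, updated in place
    policy : (n_states, n_actions) pi(a|s); each row sums to 1
    """
    # Expectation over next actions under the current policy
    expected_q = policy[s_next] @ Q[s_next]
    Q[s, a] += alpha * (r + gamma * expected_q - Q[s, a])
    return Q

Q = np.zeros((3, 2))
policy = np.full((3, 2), 0.5)        # uniform random policy
Q = expected_sarsa_update(Q, s=0, a=1, r=1.0, s_next=2, policy=policy)
```

Replacing the sampled next action with this expectation is exactly what removes the variance due to the random selection of $A\_{t+1}$, at the cost of summing over all actions in each update.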
Given the following machine learning model name: UNIMO, provide a description of the model
**UNIMO** is a multi-modal pre-training architecture that can effectively adapt to both single-modal and multi-modal understanding and generation tasks. UNIMO learns visual representations and textual representations simultaneously, and unifies them into the same semantic space via [cross-modal contrastive learning](https://paperswithcode.com/method/cmcl) (CMCL), which aligns the two modalities based on a large-scale corpus of image collections, text corpora and image-text pairs.
Given the following machine learning model name: Relational Graph Convolution Network, provide a description of the model
An **RGCN**, or **Relational Graph Convolution Network**, is an application of the [GCN framework](https://paperswithcode.com/method/gcn) to modeling relational data, specifically to link prediction and entity classification tasks. See [here](https://docs.dgl.ai/en/0.4.x/tutorials/models/1_gnn/4_rgcn.html) for an in-depth explanation of RGCNs by DGL.
Given the following machine learning model name: K-Maximal Word Allocation, provide a description of the model
Given the following machine learning model name: Stochastic Gradient Variational Bayes, provide a description of the model
Given the following machine learning model name: Overfitting Conditional Diffusion Model, provide a description of the model
Given the following machine learning model name: VGG-16, provide a description of the model
**VGG-16** is a 16-layer convolutional neural network from the VGG family introduced by Simonyan and Zisserman. It replaces large convolutional filters with stacks of small 3 × 3 filters: the network consists of 13 convolutional layers, grouped into five blocks each followed by max-pooling, and 3 fully-connected layers, with ReLU activations throughout. Its simple, uniform design made it a popular backbone for transfer learning and downstream vision tasks.
Given the following machine learning model name: ReGLU, provide a description of the model
**ReGLU** is an activation function which is a variant of [GLU](https://paperswithcode.com/method/glu). The definition is as follows: $$ \text{ReGLU}\left(x, W, V, b, c\right) = \max\left(0, xW + b\right) \otimes \left(xV + c\right) $$
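The formula can be sketched on plain Python lists (no tensor library), where $\otimes$ is elementwise multiplication; the tiny identity weight matrices in the test usage are purely illustrative.

```python
# ReGLU(x, W, V, b, c) = max(0, xW + b) * (xV + c), elementwise.
def reglu(x, W, V, b, c):
    xW = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j] for j in range(len(b))]
    xV = [sum(x[i] * V[i][j] for i in range(len(x))) + c[j] for j in range(len(c))]
    return [max(0.0, u) * v for u, v in zip(xW, xV)]
```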
Given the following machine learning model name: Metric Pairwise Constrained KMeans, provide a description of the model
**MPCK-Means** (Metric Pairwise Constrained K-Means) is a semi-supervised clustering algorithm that integrates pairwise must-link and cannot-link constraints with distance metric learning in a single unified framework, penalizing constraint violations while learning the clustering metric. Original paper: Integrating Constraints and Metric Learning in Semi-Supervised Clustering, Bilenko et al., 2004.
Given the following machine learning model name: IFBlock, provide a description of the model
**IFBlock** is a video model block used in the [IFNet](https://paperswithcode.com/method/ifnet) architecture for video frame interpolation. IFBlocks do not contain expensive operators like cost volume or forward warping and use 3 × 3 convolution and deconvolution as building blocks. Each IFBlock has a feed-forward structure consisting of several convolutional layers and an upsampling operator. Except for the layer that outputs the optical flow residuals and the fusion map, [PReLU](https://paperswithcode.com/method/prelu) activations are used.
Given the following machine learning model name: PrivacyNet, provide a description of the model
**PrivacyNet** is a [GAN](https://paperswithcode.com/method/gan)-based semi-adversarial network (SAN) that modifies an input face image such that it can be used by a face matcher for matching purposes but cannot be reliably used by an attribute classifier. PrivacyNet allows a person to choose specific attributes that have to be obfuscated in the input face images (e.g., age and race), while allowing for other types of attributes to be extracted (e.g., gender).
Given the following machine learning model name: SongNet, provide a description of the model
**SongNet** is an auto-regressive [Transformer](https://paperswithcode.com/method/transformer)-based language model for rigid format text generation. Sets of symbols are tailor-designed to improve the modeling performance, especially on format, rhyme, and sentence integrity. The attention mechanism is improved to impel the model to capture some future information on the format. A pre-training and fine-tuning framework is designed to further improve the generation quality.
Given the following machine learning model name: Colorization, provide a description of the model
**Colorization** is a self-supervision approach that relies on colorization as the pretext task in order to learn image representations.
Given the following machine learning model name: Epsilon Greedy Exploration, provide a description of the model
**$\epsilon$-Greedy Exploration** is an exploration strategy in reinforcement learning that takes an exploratory action with probability $\epsilon$ and a greedy action with probability $1-\epsilon$. It tackles the exploration-exploitation tradeoff in reinforcement learning algorithms: balancing the desire to explore the state space with the desire to seek an optimal policy. Despite its simplicity, it is still commonly used as a behaviour policy $\pi$ in several state-of-the-art reinforcement learning models. Image Credit: [Robin van Embden](https://cran.r-project.org/web/packages/contextual/vignettes/sutton_barto.html)
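A minimal sketch of the strategy for a tabular Q-function; the dictionary Q-table layout is an illustrative assumption.

```python
import random

# With probability eps take a uniformly random (exploratory) action,
# otherwise take the greedy action under the current Q estimates.
def epsilon_greedy(Q, state, actions, eps, rng=random):
    if rng.random() < eps:
        return rng.choice(actions)                    # explore
    return max(actions, key=lambda a: Q[(state, a)])  # exploit
```

Annealing $\epsilon$ from a high value toward a small one over training is a common way to shift gradually from exploration to exploitation.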
Given the following machine learning model name: RetinaMask, provide a description of the model
**RetinaMask** is a one-stage object detection method that improves upon [RetinaNet](https://paperswithcode.com/method/retinanet) by adding the task of instance mask prediction during training, by using an [adaptive loss](https://paperswithcode.com/method/adaptive-loss) that improves robustness to the choice of training parameters, and by including more difficult examples in training.
Given the following machine learning model name: Spatial & Temporal Attention, provide a description of the model
Spatial & temporal attention combines the advantages of spatial attention and temporal attention, as it adaptively selects both important regions and key frames. Some works compute temporal attention and spatial attention separately, while others produce joint spatio-temporal attention maps. Further works focus on capturing pairwise relations.
Given the following machine learning model name: SegNet, provide a description of the model
**SegNet** is a semantic segmentation model. This core trainable segmentation architecture consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low-resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature maps. Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling.
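The decoder's index-based upsampling can be illustrated in one dimension; the pooling size and plain-list representation here are simplifying assumptions, not SegNet's actual 2 × 2 spatial pooling over feature maps.

```python
# Max-pool a 1-D signal while remembering where each maximum came from.
def max_pool_with_indices(x, k=2):
    out, idx = [], []
    for i in range(0, len(x), k):
        window = x[i:i + k]
        j = max(range(len(window)), key=lambda t: window[t])
        out.append(window[j])
        idx.append(i + j)
    return out, idx

# SegNet-style unpooling: place each pooled value back at its stored index,
# yielding a sparse upsampled map (densified by subsequent convolutions).
def unpool(pooled, idx, length):
    y = [0.0] * length
    for v, i in zip(pooled, idx):
        y[i] = v
    return y
```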
Given the following machine learning model name: PGC-DGCNN, provide a description of the model
PGC-DGCNN provides a new definition of the graph convolutional filter. It generalizes the most commonly adopted filter, adding a hyper-parameter controlling the distance of the considered neighborhood. The model extends graph convolutions, following an intuition derived from the well-known convolutional filters over multi-dimensional tensors. The method involves a simple, efficient and effective way to introduce a hyper-parameter on graph convolutions that influences the filter size, i.e. its receptive field over the considered graph. Description and image from: [On Filter Size in Graph Convolutional Networks](https://arxiv.org/pdf/1811.10435.pdf)
Given the following machine learning model name: Elastic Weight Consolidation, provide a description of the model
**Elastic Weight Consolidation** (EWC) is a method for overcoming catastrophic forgetting in neural networks during continual learning. It slows down learning on weights that are important for previously learned tasks by adding a quadratic penalty that anchors those weights to their old values, weighted by an estimate of each weight's importance (the diagonal of the Fisher information matrix).
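EWC regularizes training on a new task with the penalty $\frac{\lambda}{2}\sum\_{i} F\_{i}\left(\theta\_{i}-\theta^{*}\_{i}\right)^{2}$, where $\theta^{*}\_{i}$ are the weights after the previous task and $F\_{i}$ is a diagonal Fisher information estimate. A minimal sketch, assuming the Fisher diagonal has already been estimated and using plain Python lists:

```python
# EWC regularizer: 0.5 * lam * sum_i F_i * (theta_i - theta_star_i)^2,
# added to the loss of the new task during continual learning.
def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    return 0.5 * lam * sum(f * (t - ts) ** 2
                           for t, ts, f in zip(theta, theta_star, fisher))
```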
Given the following machine learning model name: Multiscale Attention ViT with Late fusion, provide a description of the model
Multiscale Attention ViT with Late fusion (MAVL) is a multi-modal network, trained with aligned image-text pairs, capable of performing targeted detection using human-understandable natural language text queries. It utilizes multi-scale image features and deformable convolutions with late multi-modal fusion. The authors demonstrate the excellent ability of MAVL as a class-agnostic object detector when queried with general human-understandable natural language commands, such as "all objects" or "all entities".
Given the following machine learning model name: Soft-NMS, provide a description of the model
Non-maximum suppression is an integral part of the object detection pipeline. First, it sorts all detection boxes on the basis of their scores. The detection box $M$ with the maximum score is selected and all other detection boxes with a significant overlap (using a pre-defined threshold) with $M$ are suppressed. This process is recursively applied on the remaining boxes. As per the design of the algorithm, if an object lies within the predefined overlap threshold, it leads to a miss. **Soft-NMS** solves this problem by decaying the detection scores of all other objects as a continuous function of their overlap with M. Hence, no object is eliminated in this process.
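The decay step can be sketched as follows, using the linear variant $s\_{i} \leftarrow s\_{i}\left(1-\text{iou}\left(M, b\_{i}\right)\right)$; the box format and threshold below are illustrative assumptions (the paper also proposes a Gaussian decay).

```python
# Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Soft-NMS: instead of discarding boxes that overlap the current best box M,
# decay their scores as a continuous function of the overlap with M.
def soft_nms(boxes, scores, iou_thresh=0.5):
    scores = list(scores)
    remaining = list(range(len(boxes)))
    kept = []
    while remaining:
        m = max(remaining, key=lambda i: scores[i])
        kept.append((m, scores[m]))
        remaining.remove(m)
        for i in remaining:
            o = iou(boxes[m], boxes[i])
            if o > iou_thresh:
                scores[i] *= (1 - o)   # linear decay instead of removal
    return kept
```

Every box survives with a (possibly tiny) score, so a final score threshold, rather than the suppression itself, decides what is reported.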
Given the following machine learning model name: AltDiffusion, provide a description of the model
**AltDiffusion** builds on a conceptually simple and effective method for training a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, the text encoder is switched with the pretrained multilingual text encoder XLM-R, and language and image representations are aligned by a two-stage training schema consisting of teacher learning and contrastive learning. The method sets new state-of-the-art performances on a range of tasks, including ImageNet-CN, Flickr30k-CN, and COCO-CN, and obtains performance very close to CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Models and code are available at https://github.com/FlagAI-Open/FlagAI.
Given the following machine learning model name: MobileViT, provide a description of the model
**MobileViT** is a light-weight, general-purpose vision transformer designed for mobile devices. It combines the strengths of CNNs and vision transformers: the MobileViT block uses convolutions to encode local information and transformer-based self-attention to capture global dependencies.
Given the following machine learning model name: Sequential Information Threading, provide a description of the model
**Sequential Information Threading** is an unsupervised machine learning approach for identifying information threads. It leverages answers to 5W1H questions extracted from documents, the temporal relationships between documents, and hierarchical agglomerative clustering (HAC).
Given the following machine learning model name: Smooth ReLU, provide a description of the model
Given the following machine learning model name: Selective Search, provide a description of the model
**Selective Search** is a region proposal algorithm for object detection tasks. It starts by over-segmenting the image based on the intensity of the pixels, using the graph-based segmentation method of Felzenszwalb and Huttenlocher. Selective Search then takes these oversegments as initial input and performs the following steps: (1) add all bounding boxes corresponding to segmented parts to the list of region proposals; (2) group adjacent segments based on similarity; (3) go to step 1. At each iteration, larger segments are formed and added to the list of region proposals. Hence, region proposals are created from smaller segments to larger segments in a bottom-up approach. This is what is meant by computing "hierarchical" segmentations using Felzenszwalb and Huttenlocher's oversegments.
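The grouping loop can be sketched with a toy similarity; the single size-based similarity below is an illustrative stand-in for the colour, texture, size, and fill similarities used in the actual algorithm.

```python
# Tightest box enclosing two boxes (x1, y1, x2, y2).
def union_box(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

# Toy similarity: prefer merging pairs whose union is small (close, small regions).
def similarity(a, b):
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return 1.0 / (1.0 + area(union_box(a, b)))

# Bottom-up grouping: repeatedly merge the most similar pair of regions and
# record every intermediate box as a region proposal.
def hierarchical_grouping(segments):
    proposals = list(segments)
    regions = list(segments)
    while len(regions) > 1:
        pairs = [(i, j) for i in range(len(regions)) for j in range(i + 1, len(regions))]
        i, j = max(pairs, key=lambda p: similarity(regions[p[0]], regions[p[1]]))
        merged = union_box(regions[i], regions[j])
        regions = [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
        proposals.append(merged)
    return proposals
```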