Given the following machine learning model name: Span-Based Dynamic Convolution, provide a description of the model
**Span-Based Dynamic Convolution** is a type of convolution used in the [ConvBERT](https://paperswithcode.com/method/convbert) architecture to capture local dependencies between tokens. Kernels are generated by taking in a local span around the current token, which better utilizes local dependency and discriminates between different meanings of the same token (e.g., if “a” is in front of “can” in the input sentence, “can” is apparently a noun, not a verb). Specifically, with [classic convolution](https://paperswithcode.com/method/convolution), we would have fixed parameters shared for all input tokens. [Dynamic convolution](https://paperswithcode.com/method/dynamicconv) is therefore preferable because it has higher flexibility in capturing the local dependencies of different tokens: it uses a kernel generator to produce different kernels for different input tokens. However, such dynamic convolution cannot differentiate the same token in different contexts, and generates the same kernels (e.g., the three “can” in Figure (b)). Span-based dynamic convolution is therefore developed to produce more adaptive convolution kernels by receiving an input span instead of only a single token, which enables discrimination between the kernels generated for the same token in different contexts. For example, as shown in Figure (c), span-based dynamic convolution produces different kernels for the different “can” tokens.
Given the following machine learning model name: Entropy Regularization, provide a description of the model
**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), mutual reinforcement between the actor and critic leads to a highly-peaked $\pi\left(a\mid{s}\right)$ over a few actions or action sequences, since it is easy for the actor and critic to over-optimise on a small portion of the environment. To reduce this problem, entropy regularization adds an entropy term to the loss to promote action diversity: $$H(X) = -\sum\pi\left(x\right)\log\left(\pi\left(x\right)\right) $$ Image Credit: Wikipedia
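A minimal NumPy sketch of the entropy bonus over a discrete policy (the function name and toy logits are illustrative, not from any specific implementation):

```python
import numpy as np

def entropy_bonus(logits):
    # Softmax over action logits to get pi(a|s)
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    # H(pi) = -sum_a pi(a) log pi(a); adding beta * H to the objective
    # discourages a collapsed, highly-peaked policy
    return -np.sum(p * np.log(p + 1e-12))

uniform_h = entropy_bonus(np.zeros(4))                      # maximal: log(4)
peaked_h = entropy_bonus(np.array([10.0, 0.0, 0.0, 0.0]))   # near zero
```

A uniform policy attains the maximum entropy $\log n$, while a peaked policy approaches zero, so maximizing the bonus pushes the policy toward diversity.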
Given the following machine learning model name: Proxy Anchor Loss for Deep Metric Learning, provide a description of the model
Given the following machine learning model name: Learnable Extended Activation Function, provide a description of the model
Given the following machine learning model name: RegNetX, provide a description of the model
**RegNetX** is a design space of simple, regular convolutional networks, parameterised by a depth $d$, an initial width $w\_{0} > 0$, and a slope $w\_{a} > 0$, which generate a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear parameterisation of block widths (the design space only contains models with this linear structure): $$ u\_{j} = w\_{0} + w\_{a}\cdot{j} $$ For **RegNetX** we have additional restrictions: we set $b = 1$ (the bottleneck ratio), $12 \leq d \leq 28$, and $w\_{m} \geq 2$ (the width multiplier).
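The linear width parameterisation can be sketched directly (the example values of $d$, $w\_0$, $w\_a$ are illustrative; the full design space additionally quantizes these continuous widths, which is omitted here):

```python
def regnet_widths(d, w0, wa):
    # u_j = w0 + wa * j for each block j < d (continuous widths, pre-quantization)
    return [w0 + wa * j for j in range(d)]

widths = regnet_widths(d=12, w0=24, wa=36.0)
```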
Given the following machine learning model name: Improved Gravitational Search algorithm, provide a description of the model
Metaheuristic algorithm
Given the following machine learning model name: Compressed Memory, provide a description of the model
**Compressed Memory** is a secondary FIFO memory component proposed as part of the [Compressive Transformer](https://paperswithcode.com/method/compressive-transformer) model. The Compressive [Transformer](https://paperswithcode.com/method/transformer) keeps a fine-grained memory of past activations, which are then compressed into coarser compressed memories. For choices of compression functions $f\_{c}$ the authors consider (1) max/mean pooling, where the kernel and stride is set to the compression rate $c$; (2) 1D [convolution](https://paperswithcode.com/method/convolution) also with kernel & stride set to $c$; (3) dilated convolutions; (4) *most-used* where the memories are sorted by their average attention (usage) and the most-used are preserved.
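A sketch of compression function (1), mean pooling with kernel and stride equal to the compression rate $c$, over a buffer of past activations (array shapes and names are illustrative):

```python
import numpy as np

def mean_pool_compress(memories, c=3):
    # memories: (n, d) array of past activations; f_c averages each
    # non-overlapping group of c memories into one compressed memory
    n, d = memories.shape
    usable = (n // c) * c
    return memories[:usable].reshape(-1, c, d).mean(axis=1)

mem = np.arange(12.0).reshape(6, 2)   # 6 fine-grained memories of dim 2
compressed = mean_pool_compress(mem)  # -> 2 coarse memories
```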
Given the following machine learning model name: Large-scale Information Network Embedding, provide a description of the model
LINE is a novel network embedding method which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. Source: [Tang et al.](https://arxiv.org/pdf/1503.03578v1.pdf) Image source: [Tang et al.](https://arxiv.org/pdf/1503.03578v1.pdf)
Given the following machine learning model name: Graph Convolutional Network, provide a description of the model
A **Graph Convolutional Network**, or **GCN**, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of [convolutional neural networks](https://paperswithcode.com/methods/category/convolutional-neural-networks) which operate directly on graphs. The choice of convolutional architecture is motivated via a localized first-order approximation of spectral graph convolutions. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes.
Given the following machine learning model name: MobileNetV3, provide a description of the model
**MobileNetV3** is a convolutional neural network that is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the [NetAdapt](https://paperswithcode.com/method/netadapt) algorithm, and then subsequently improved through novel architecture advances. Advances include (1) complementary search techniques, (2) new efficient versions of nonlinearities practical for the mobile setting, (3) new efficient network design. The network design includes the use of a [hard swish](https://paperswithcode.com/method/hard-swish) activation and squeeze-and-excitation modules in the MBConv blocks.
Given the following machine learning model name: QuantTree histograms, provide a description of the model
Given a training set drawn from an unknown $d$-variate probability distribution $\phi_0$, QuantTree constructs a histogram by recursively splitting $\mathbb{R}^d$. The splits are defined by a stochastic process so that each bin contains a certain proportion of the training set. These histograms can be used to define test statistics (e.g., the Pearson statistic) to tell whether a batch of data is drawn from $\phi_0$ or not. The most crucial property of QuantTree is that the distribution of any statistic based on QuantTree histograms is independent of $\phi_0$, thus enabling nonparametric statistical testing.
Given the following machine learning model name: Spatial Transformer, provide a description of the model
A **Spatial Transformer** is an image model block that explicitly allows the spatial manipulation of data within a [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks). It gives CNNs the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. Unlike pooling layers, where the receptive fields are fixed and local, the spatial transformer module is a dynamic mechanism that can actively spatially transform an image (or a feature map) by producing an appropriate transformation for each input sample. The transformation is then performed on the entire feature map (non-locally) and can include scaling, cropping, rotations, as well as non-rigid deformations. The architecture is shown in the Figure to the right. The input feature map $U$ is passed to a localisation network which regresses the transformation parameters $\theta$. The regular spatial grid $G$ over $V$ is transformed to the sampling grid $T\_{\theta}\left(G\right)$, which is applied to $U$, producing the warped output feature map $V$. The combination of the localisation network and sampling mechanism defines a spatial transformer.
Given the following machine learning model name: Mask R-CNN, provide a description of the model
**Mask R-CNN** extends [Faster R-CNN](http://paperswithcode.com/method/faster-r-cnn) to solve instance segmentation tasks. It achieves this by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. In principle, Mask R-CNN is an intuitive extension of Faster [R-CNN](https://paperswithcode.com/method/r-cnn), but constructing the mask branch properly is critical for good results. Most importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is evident in how [RoIPool](http://paperswithcode.com/method/roi-pooling), the *de facto* core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, Mask R-CNN utilises a simple, quantization-free layer, called [RoIAlign](http://paperswithcode.com/method/roi-align), that faithfully preserves exact spatial locations. Secondly, Mask R-CNN *decouples* mask and class prediction: it predicts a binary mask for each class independently, without competition among classes, and relies on the network's RoI classification branch to predict the category. In contrast, an [FCN](http://paperswithcode.com/method/fcn) usually perform per-pixel multi-class categorization, which couples segmentation and classification.
Given the following machine learning model name: Weight Decay, provide a description of the model
**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss function and a penalty on the $L\_{2}$ norm of the weights: $$L\_{new}\left(w\right) = L\_{original}\left(w\right) + \lambda{w^{T}w}$$ where $\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). Weight decay can be incorporated directly into the weight update rule, rather than just implicitly via the objective function. Often "weight decay" refers to the implementation where we specify it directly in the weight update rule, whereas "L2 regularization" usually refers to the implementation specified in the objective function. Image Source: Deep Learning, Goodfellow et al
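A sketch of both formulations under plain SGD (function names are illustrative). With vanilla SGD they coincide when the decay rate equals $2\,\eta\lambda$, since the gradient of $\lambda w^{T}w$ is $2\lambda w$; note this equivalence does not hold for adaptive optimizers like Adam:

```python
import numpy as np

def sgd_step_l2(w, grad, lr, lam):
    # L2 penalty lam * w^T w contributes 2*lam*w to the gradient
    return w - lr * (grad + 2 * lam * w)

def sgd_step_decay(w, grad, lr, decay):
    # weight decay written directly into the update rule: shrink w, then step
    return w * (1 - decay) - lr * grad

w = np.array([1.0, -0.5])
g = np.array([0.2, 0.1])
```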
Given the following machine learning model name: Deeper Atrous Spatial Pyramid Pooling, provide a description of the model
DASPP is a deeper version of the [ASPP](https://paperswithcode.com/method/aspp) module (the latter from [DeepLabv3](https://paperswithcode.com/method/deeplabv3)) that adds standard 3 × 3 [convolutions](https://paperswithcode.com/method/convolution) after the 3 × 3 dilated convolutions to refine the features, and also fuses the input and the output of the DASPP module via a short [residual connection](https://paperswithcode.com/method/residual-connection). Also, the number of convolution filters of ASPP is reduced from 255 to 96 to gain computational performance.
Given the following machine learning model name: Hopfield Layer, provide a description of the model
A **Hopfield Layer** is a module that enables a network to associate two sets of vectors. This general functionality allows for [transformer](https://paperswithcode.com/method/transformer)-like self-attention, for decoder-encoder attention, for time series prediction (maybe with positional encoding), for sequence analysis, for multiple instance learning, for learning with point sets, for combining data sources by associations, for constructing a memory, for averaging and pooling operations, and for many more. In particular, the Hopfield layer can readily be used as a plug-in replacement for existing layers like pooling layers ([max-pooling](https://paperswithcode.com/method/max-pooling) or [average pooling](https://paperswithcode.com/method/average-pooling)), permutation equivariant layers, [GRU](https://paperswithcode.com/method/gru) & [LSTM](https://paperswithcode.com/method/lstm) layers, and attention layers. The Hopfield layer is based on modern Hopfield networks with continuous states that have very high storage capacity and converge after one update.
Given the following machine learning model name: Anti-Alias Downsampling, provide a description of the model
**Anti-Alias Downsampling (AA)** aims to improve the shift-equivariance of deep networks. Max-pooling is inherently composed of two operations: the first is to densely evaluate the max operator, and the second is naive subsampling. AA is proposed as a low-pass filter between them to achieve practical anti-aliasing in any existing strided layer, such as strided [convolution](https://paperswithcode.com/method/convolution). The smoothing factor can be adjusted by changing the blur kernel filter size, where a larger filter size results in increased blur.
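A 1-D sketch of the max → blur → subsample decomposition, using a [1, 2, 1]/4 binomial blur kernel (the smallest non-trivial filter size); the function name and edge-padding choice are illustrative assumptions:

```python
import numpy as np

def blurpool1d(x, stride=2):
    # 1) densely evaluate max (window 2, stride 1)
    dense_max = np.maximum(x[:-1], x[1:])
    # 2) low-pass blur with a [1, 2, 1]/4 kernel (edge padding assumed)
    p = np.pad(dense_max, 1, mode="edge")
    blurred = (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0
    # 3) naive subsampling
    return blurred[::stride]

out = blurpool1d(np.arange(8.0))
```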
Given the following machine learning model name: GBlock, provide a description of the model
**GBlock** is a type of [residual block](https://paperswithcode.com/method/residual-block) used in the [GAN-TTS](https://paperswithcode.com/method/gan-tts) text-to-speech architecture - it is a stack of two residual blocks. As the generator is producing raw audio (e.g. a 2s training clip corresponds to a sequence of 48000 samples), dilated convolutions are used to ensure that the receptive field of $G$ is large enough to capture long-term dependencies. The four kernel size-3 convolutions in each GBlock have increasing dilation factors: 1, 2, 4, 8. Convolutions are preceded by Conditional Batch Normalisation, conditioned on the linear embeddings of the noise term $z \sim N\left(0, \mathbf{I}\_{128}\right)$ in the single-speaker case, or the concatenation of $z$ and a one-hot representation of the speaker ID in the multi-speaker case. The embeddings are different for each BatchNorm instance. A GBlock contains two skip connections, the first of which in [GAN](https://paperswithcode.com/method/gan)-TTS performs upsampling if the output frequency is higher than the input, and it also contains a size-1 [convolution](https://paperswithcode.com/method/convolution) if the number of output channels is different from the input.
Given the following machine learning model name: Distributed Any-Batch Mirror Descent, provide a description of the model
**Distributed Any-Batch Mirror Descent** (DABMD) is based on distributed Mirror Descent but uses a fixed per-round computing time to limit the waiting by fast nodes to receive information updates from slow nodes. DABMD is characterized by varying minibatch sizes across nodes. It is applicable to a broader range of problems compared with existing distributed online optimization methods such as those based on dual averaging, and it accommodates time-varying network topology.
Given the following machine learning model name: IoU-guided NMS, provide a description of the model
**IoU-guided NMS** is a type of non-maximum suppression that helps eliminate the suppression failures caused by misleading classification confidences. This is achieved by using the predicted IoU, instead of the classification confidence, as the ranking keyword for bounding boxes.
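A minimal sketch of the ranking change (a greedy NMS loop where boxes are sorted by a predicted-IoU score; box format and function names are illustrative, and the predicted IoUs here are placeholder values rather than outputs of a localization-confidence head):

```python
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def iou_guided_nms(boxes, pred_ious, thresh=0.5):
    # rank by predicted IoU (localization confidence), not class score
    order = np.argsort(pred_ious)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
pred_ious = np.array([0.6, 0.9, 0.8])  # placeholder localization confidences
kept = iou_guided_nms(boxes, pred_ious)
```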
Given the following machine learning model name: Scale-wise Feature Aggregation Module, provide a description of the model
**SFAM**, or **Scale-wise Feature Aggregation Module**, is a feature extraction block from the [M2Det](https://paperswithcode.com/method/m2det) architecture. It aims to aggregate the multi-level multi-scale features generated by [Thinned U-Shaped Modules](https://paperswithcode.com/method/tum) into a multi-level feature pyramid. The first stage of SFAM is to concatenate features of the equivalent scale together along the channel dimension. The aggregated feature pyramid can be presented as $\mathbf{X} =[\mathbf{X}\_1,\mathbf{X}\_2,\dots,\mathbf{X}\_i]$, where $\mathbf{X}\_i = \text{Concat}(\mathbf{x}\_i^1,\mathbf{x}\_i^2,\dots,\mathbf{x}\_i^L) \in \mathbb{R}^{W\_{i}\times H\_{i}\times C}$ refers to the features of the $i$-th largest scale. Here, each scale in the aggregated pyramid contains features from multi-level depths. However, simple concatenation operations are not adaptive enough. In the second stage, we introduce a channel-wise attention module to encourage features to focus on the channels from which they benefit most. Following Squeeze-and-Excitation, we use [global average pooling](https://paperswithcode.com/method/global-average-pooling) to generate channel-wise statistics $\mathbf{z} \in \mathbb{R}^C$ at the squeeze step. To fully capture channel-wise dependencies, the following excitation step learns the attention mechanism via two fully connected layers: $$ \mathbf{s} = \mathbf{F}\_{ex}(\mathbf{z},\mathbf{W}) = \sigma(\mathbf{W}\_{2} \delta(\mathbf{W}\_{1}\mathbf{z})), $$ where $\sigma$ refers to the sigmoid function, $\delta$ refers to the [ReLU](https://paperswithcode.com/method/relu) function, $\mathbf{W}\_{1} \in \mathbb{R}^{\frac{C}{r}\times C}$, $\mathbf{W}\_{2} \in \mathbb{R}^{C\times \frac{C}{r}}$, and $r$ is the reduction ratio ($r=16$ in the experiments).
The final output is obtained by reweighting the input $\mathbf{X}$ with the activation $\mathbf{s}$: $$ \tilde{\mathbf{X}}\_i^c = \mathbf{F}\_{scale}(\mathbf{X}\_i^c, s\_c) = s\_c \cdot \mathbf{X}\_i^c, $$ where $\tilde{\mathbf{X}}\_i = [\tilde{\mathbf{X}}\_i^1,\tilde{\mathbf{X}}\_i^2,\dots,\tilde{\mathbf{X}}\_i^C]$; each of the features is enhanced or weakened by the rescaling operation.
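The squeeze–excite–rescale steps above can be sketched in NumPy (random weights stand in for the two learned FC layers; shapes are illustrative, with channels first):

```python
import numpy as np

rng = np.random.default_rng(1)
C, r = 32, 16                         # channels and reduction ratio
X = rng.standard_normal((C, 8, 8))    # one scale of the aggregated pyramid
W1 = rng.standard_normal((C // r, C)) * 0.1   # stand-in for learned weights
W2 = rng.standard_normal((C, C // r)) * 0.1

relu = lambda v: np.maximum(v, 0.0)      # delta
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))  # sigma

z = X.mean(axis=(1, 2))                  # squeeze: global average pooling
s = sigmoid(W2 @ relu(W1 @ z))           # excitation: two FC layers
X_tilde = X * s[:, None, None]           # rescale each channel by s_c
```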
Given the following machine learning model name: Genetic Algorithms, provide a description of the model
Genetic Algorithms are search algorithms that mimic Darwinian biological evolution in order to select and propagate better solutions.
Given the following machine learning model name: Segmentation of patchy areas in biomedical images based on local edge density estimation, provide a description of the model
An effective approach to the quantification of patchiness in biomedical images according to their local edge densities.
Given the following machine learning model name: ClassSR, provide a description of the model
**ClassSR** is a framework to accelerate super-resolution (SR) networks on large images (2K-8K). ClassSR combines classification and SR in a unified framework. In particular, it first uses a Class-Module to classify the sub-images into different classes according to restoration difficulties, then applies an SR-Module to perform SR for different classes. The Class-Module is a conventional classification network, while the SR-Module is a network container that consists of the to-be-accelerated SR network and its simplified versions.
Given the following machine learning model name: Q-Learning, provide a description of the model
**Q-Learning** is an off-policy temporal difference control algorithm: $$Q\left(S\_{t}, A\_{t}\right) \leftarrow Q\left(S\_{t}, A\_{t}\right) + \alpha\left[R_{t+1} + \gamma\max\_{a}Q\left(S\_{t+1}, a\right) - Q\left(S\_{t}, A\_{t}\right)\right] $$ The learned action-value function $Q$ directly approximates $q\_{*}$, the optimal action-value function, independent of the policy being followed. Source: Sutton and Barto, Reinforcement Learning, 2nd Edition
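The tabular update rule can be sketched directly (state/action indices and the hyperparameter values are illustrative):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Off-policy TD target: max over next-state actions,
    # independent of the behaviour policy
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))                       # 2 states x 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1) # one transition with reward 1
```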
Given the following machine learning model name: Dynamic Convolution, provide a description of the model
The extremely low computational cost of lightweight CNNs constrains the depth and width of the networks, further decreasing their representational power. To address this problem, Chen et al. proposed dynamic convolution (in parallel with CondConv), a novel operator design that increases representational power with negligible additional computational cost and does not change the width or depth of the network. Dynamic convolution uses $K$ parallel convolution kernels of the same size and input/output dimensions instead of one kernel per layer. Like SE blocks, it adopts a squeeze-and-excitation mechanism to generate the attention weights for the different convolution kernels. These kernels are then aggregated dynamically by weighted summation and applied to the input feature map $X$: \begin{align} s & = \text{softmax} (W_{2} \delta (W_{1}\text{GAP}(X))) \end{align} \begin{align} \text{DyConv} &= \sum_{k=1}^{K} s_k \text{Conv}_k \end{align} \begin{align} Y &= \text{DyConv}(X) \end{align} Here the convolutions are combined by summing the weights and biases of the convolutional kernels. Compared to applying convolution to the feature map, the computational cost of squeeze-and-excitation and weighted summation is extremely low. Dynamic convolution thus provides an efficient operation to improve representational power and can easily be used as a replacement for any convolution.
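A NumPy sketch of the kernel-aggregation step (a single random linear layer stands in for the two-FC attention branch, and the final convolution itself is omitted; all sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, C_in, C_out, k = 4, 8, 8, 3                 # illustrative sizes
X = rng.standard_normal((C_in, 16, 16))        # input feature map
kernels = rng.standard_normal((K, C_out, C_in, k, k))  # K parallel kernels
W = rng.standard_normal((K, C_in))             # stand-in for the attention FCs

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

z = X.mean(axis=(1, 2))          # squeeze: global average pooling
s = softmax(W @ z)               # attention weights over the K kernels
# aggregate by weighted summation; a single convolution with `mixed` follows
mixed = np.tensordot(s, kernels, axes=1)       # (C_out, C_in, k, k)
```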
Given the following machine learning model name: GPipe, provide a description of the model
**GPipe** is a distributed model parallel method for neural networks. With GPipe, each model can be specified as a sequence of layers, and consecutive groups of layers can be partitioned into cells. Each cell is then placed on a separate accelerator. Based on this partitioned setup, batch splitting is applied. A mini-batch of training examples is split into smaller micro-batches, then the execution of each set of micro-batches is pipelined over cells. Synchronous mini-batch gradient descent is applied for training, where gradients are accumulated across all micro-batches in a mini-batch and applied at the end of a mini-batch.
Given the following machine learning model name: Ape-X, provide a description of the model
**Ape-X** is a distributed architecture for deep reinforcement learning. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared [experience replay](https://paperswithcode.com/method/experience-replay) memory; the learner replays samples of experience and updates the neural network. The architecture relies on [prioritized experience replay](https://paperswithcode.com/method/prioritized-experience-replay) to focus only on the most significant data generated by the actors. In contrast to Gorila, Ape-X uses a shared, centralized replay memory, and instead of sampling uniformly, it prioritizes, so as to sample the most useful data more often. All communications with the centralized replay are batched, increasing the efficiency and throughput at the cost of some latency. By learning off-policy, Ape-X can combine data from many distributed actors, giving the different actors different exploration policies and broadening the diversity of the experience they jointly encounter.
Given the following machine learning model name: Switch FFN, provide a description of the model
A **Switch FFN** is a sparse layer that operates independently on tokens within an input sequence. It is shown in the blue block in the figure. We diagram two tokens ($x\_{1}$ = “More” and $x\_{2}$ = “Parameters” below) being routed (solid lines) across four FFN experts, where the router independently routes each token. The switch FFN layer returns the output of the selected FFN multiplied by the router gate value (dotted-line).
Given the following machine learning model name: Negative Face Recognition, provide a description of the model
**Negative Face Recognition**, or **NFR**, is a face recognition approach that enhances soft-biometric privacy at the template level by representing face templates in a complementary (negative) domain. While ordinary templates characterize facial properties of an individual, negative templates describe facial properties that do not exist for this individual. This suppresses privacy-sensitive information in stored templates. Experiments were conducted on two publicly available datasets, captured under controlled and uncontrolled scenarios, covering three privacy-sensitive attributes.
Given the following machine learning model name: High-resolution Deep Convolutional Generative Adversarial Networks, provide a description of the model
**HDCGAN**, or **High-resolution Deep Convolutional Generative Adversarial Networks**, is a [DCGAN](https://paperswithcode.com/method/dcgan) based architecture that achieves high-resolution image generation through the proper use of [SELU](https://paperswithcode.com/method/selu) activations. Glasses, a mechanism to arbitrarily improve the final [GAN](https://paperswithcode.com/method/gan) generated results by enlarging the input size by a telescope ζ is also set forth. A video showing the training procedure on CelebA-hq can be found [here](https://youtu.be/1XZB87W0SaY).
Given the following machine learning model name: Scaled Exponential Linear Unit, provide a description of the model
**Scaled Exponential Linear Units**, or **SELUs**, are activation functions that induce self-normalizing properties. The SELU activation function is given by $$f\left(x\right) = \lambda{x} \text{ if } x \geq{0}$$ $$f\left(x\right) = \lambda{\alpha\left(\exp\left(x\right) -1 \right)} \text{ if } x < 0 $$ with $\alpha \approx 1.6733$ and $\lambda \approx 1.0507$.
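The piecewise definition translates directly to code (using the rounded constants from the text; the full-precision values differ in later digits):

```python
import numpy as np

ALPHA, LAM = 1.6733, 1.0507  # rounded values of alpha and lambda

def selu(x):
    x = np.asarray(x, dtype=float)
    # lambda * x for x >= 0; lambda * alpha * (exp(x) - 1) for x < 0
    return np.where(x >= 0, LAM * x, LAM * ALPHA * (np.exp(x) - 1.0))

out = selu([1.0, 0.0, -100.0])
```

For large negative inputs the activation saturates at $-\lambda\alpha \approx -1.758$, which is part of what drives the self-normalizing behaviour.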
Given the following machine learning model name: MobileNetV2, provide a description of the model
**MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an inverted residual structure where the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. As a whole, the architecture of MobileNetV2 contains the initial fully [convolution](https://paperswithcode.com/method/convolution) layer with 32 filters, followed by 19 residual bottleneck layers.
Given the following machine learning model name: U-Net, provide a description of the model
**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. [Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)
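The contracting-path arithmetic above can be checked with a small sketch: each step applies two unpadded 3x3 convolutions (each removes 2 border pixels) and a 2x2 max pool (halves the size). Starting from 572, the tile size used in the original paper's figure:

```python
def contracting_sizes(s, steps=4):
    # two unpadded 3x3 convs: -4 pixels; 2x2 max pool, stride 2: halve
    sizes = [s]
    for _ in range(steps):
        s = (s - 4) // 2
        sizes.append(s)
    return sizes

sizes = contracting_sizes(572)
```

The shrinkage from the unpadded convolutions is exactly why the skip connections require cropping before concatenation.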
Given the following machine learning model name: Sigmoid Activation, provide a description of the model
**Sigmoid Activations** are a type of activation function for neural networks: $$f\left(x\right) = \frac{1}{\left(1+\exp\left(-x\right)\right)}$$ Some drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.
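The saturation drawback is easy to see numerically: the derivative $f'(x) = f(x)(1 - f(x))$ peaks at 0.25 and vanishes for large $|x|$ (function names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # tiny for large |x| -> gradient saturation
```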
Given the following machine learning model name: ResNeSt, provide a description of the model
A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks Split-Attention blocks. The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}${$V^{1},V^{2},\cdots{V}^{K}$}. As in standard residual blocks, the final output $Y$ of the Split-Attention block is produced using a shortcut connection: $Y=V+X$, if the input and output feature-map share the same shape. For blocks with a stride, an appropriate transformation $\mathcal{T}$ is applied to the shortcut connection to align the output shapes: $Y=V+\mathcal{T}(X)$. For example, $\mathcal{T}$ can be strided [convolution](https://paperswithcode.com/method/convolution) or combined convolution-with-pooling.
Given the following machine learning model name: Hierarchical-Split Block, provide a description of the model
**Hierarchical-Split Block** is a representational block for multi-scale feature representations. It contains many hierarchical split and concatenate connections within one single [residual block](https://paperswithcode.com/methods/category/skip-connection-blocks). Specifically, ordinary feature maps in deep neural networks are split into $s$ groups, each with $w$ channels. As shown in the Figure, only the first group of filters is directly connected to the next layer. The second group of feature maps is first sent through a convolution with $3 \times 3$ filters to extract features; the output feature maps are then split into two sub-groups along the channel dimension. One sub-group is directly connected to the next layer, while the other sub-group is concatenated with the next group of input feature maps along the channel dimension. The concatenated feature maps are operated on by a set of $3 \times 3$ convolutional filters. This process repeats several times, until the rest of the input feature maps are processed. Finally, feature maps from all input groups are concatenated and sent to another layer of $1 \times 1$ filters to rebuild the features.
Given the following machine learning model name: MaxUp, provide a description of the model
**MaxUp** is an adversarial data augmentation technique for improving the generalization performance of machine learning models. The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst-case, loss over the augmented data. By doing so, we implicitly introduce a smoothness or robustness regularization against the random perturbations, and hence improve the generalization performance. For example, in the case of Gaussian perturbation, MaxUp is asymptotically equivalent to using the gradient norm of the loss as a penalty to encourage smoothness.
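A minimal sketch of the worst-case objective with Gaussian perturbations (the squared-error loss, linear model, and all hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # toy squared-error loss for a linear model
    return float((w @ x - y) ** 2)

def maxup_loss(w, x, y, m=4, sigma=0.1):
    # worst case over m randomly perturbed copies of x;
    # training would minimize this maximum w.r.t. w
    return max(loss(w, x + sigma * rng.standard_normal(x.shape), y)
               for _ in range(m))

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.3])
y = 0.2
```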
Given the following machine learning model name: AltCLIP, provide a description of the model
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder for a pretrained multilingual text encoder, XLM-R, and aligned both language and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a number of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performance to CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at https://github.com/FlagAI-Open/FlagAI.
Given the following machine learning model name: Linear Discriminant Analysis, provide a description of the model
**Linear discriminant analysis** (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification. Extracted from [Wikipedia](https://en.wikipedia.org/wiki/Linear_discriminant_analysis) **Source**: Paper: [Linear Discriminant Analysis: A Detailed Tutorial](https://dx.doi.org/10.3233/AIC-170729) Public version: [Linear Discriminant Analysis: A Detailed Tutorial](https://usir.salford.ac.uk/id/eprint/52074/)
Given the following machine learning model name: DeepWalk, provide a description of the model
**DeepWalk** learns embeddings (social representations) of a graph's vertices, by modeling a stream of short random walks. Social representations are latent features of the vertices that capture neighborhood similarity and community membership. These latent representations encode social relations in a continuous vector space with a relatively small number of dimensions. It generalizes neural language models to process a special language composed of a set of randomly-generated walks. The goal is to learn a latent representation, not only a probability distribution of node co-occurrences, so a mapping function $\Phi \colon v \in V \mapsto \mathbb{R}^{|V|\times d}$ is introduced. This mapping $\Phi$ represents the latent social representation associated with each vertex $v$ in the graph. In practice, $\Phi$ is represented by a $|V| \times d$ matrix of free parameters.
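The walk-generation half of DeepWalk can be sketched directly. The toy graph and walk parameters below are illustrative, and the skip-gram step that turns the walk stream into $\Phi$ (e.g. word2vec) is omitted:

```python
import random

# Hypothetical toy graph as adjacency lists.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(graph, start, length, rng):
    """One truncated random walk starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
# The stream of short walks is treated as "sentences" and fed to a
# skip-gram model to learn the |V| x d embedding matrix Phi.
walks = [random_walk(graph, v, length=5, rng=rng)
         for v in graph for _ in range(2)]
```

Each vertex appears in many walk contexts, which is what lets the language-model objective capture neighborhood similarity.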
Given the following machine learning model name: Channel-wise Cross Fusion Transformer, provide a description of the model
**Channel-wise Cross Fusion Transformer** is a module used in the [UCTransNet](https://paperswithcode.com/method/uctransnet) architecture for semantic segmentation. It fuses the multi-scale encoder features with the advantage of the long dependency modeling in the [Transformer](https://paperswithcode.com/method/transformer). The [CCT](https://paperswithcode.com/method/cct) module consists of three steps: multi-scale feature embedding, multi-head [channel-wise cross attention](https://paperswithcode.com/method/channel-wise-cross-attention) and Multi-Layer Perceptron (MLP).
Given the following machine learning model name: Hierarchical Transferability Calibration Network, provide a description of the model
**Hierarchical Transferability Calibration Network** (HTCN) is an adaptive object detector that hierarchically (local-region/image/instance) calibrates the transferability of feature representations for harmonizing transferability and discriminability. The proposed model consists of three components: (1) Importance Weighted Adversarial Training with input Interpolation (IWAT-I), which strengthens the global discriminability by re-weighting the interpolated image-level features; (2) Context-aware Instance-Level Alignment (CILA) module, which enhances the local discriminability by capturing the complementary effect between the instance-level feature and the global context information for the instance-level feature alignment; (3) local feature masks that calibrate the local transferability to provide semantic guidance for the following discriminative pattern alignment.
Given the following machine learning model name: 3 Dimensional Soft Attention, provide a description of the model
Given the following machine learning model name: MobileBERT, provide a description of the model
**MobileBERT** is a type of inverted-bottleneck [BERT](https://paperswithcode.com/method/bert) that compresses and accelerates the popular BERT model. MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT by imitating it layer-to-layer. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning.
Given the following machine learning model name: Ape-X DPG, provide a description of the model
**Ape-X DPG** combines [DDPG](https://paperswithcode.com/method/ddpg) with distributed [prioritized experience replay](https://paperswithcode.com/method/prioritized-experience-replay) through the [Ape-X](https://paperswithcode.com/method/ape-x) architecture.
Given the following machine learning model name: Attention Dropout, provide a description of the model
**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term: $$ {\text{Attention}}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V $$
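A minimal NumPy sketch of the idea; the shapes and drop probability are illustrative, and frameworks apply this inside their attention layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_dropout(Q, K, V, p=0.1, training=True):
    """Scaled dot-product attention, randomly dropping entries of the
    softmax weight matrix (inverted dropout: rescale by 1/(1-p) during
    training, do nothing at inference)."""
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    if training and p > 0:
        keep = rng.random(weights.shape) >= p
        weights = weights * keep / (1.0 - p)
    return weights @ V

Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention_with_dropout(Q, K, V)
```

With `training=False` the function reduces to plain scaled dot-product attention.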
Given the following machine learning model name: PyramidNet, provide a description of the model
A **PyramidNet** is a type of convolutional network where the key idea is to concentrate on the feature map dimension by increasing it gradually instead of by increasing it sharply at each residual unit with downsampling. In addition, the network architecture works as a mixture of both plain and residual networks by using zero-padded identity-mapping shortcut connections when increasing the feature map dimension.
Given the following machine learning model name: Cosine Power Annealing, provide a description of the model
**Cosine Power Annealing** is a learning rate schedule that interpolates between [exponential decay](https://paperswithcode.com/method/exponential-decay) and [cosine annealing](https://paperswithcode.com/method/cosine-annealing).
Given the following machine learning model name: Self-Supervised Cross View Cross Subject Pose Contrastive Learning, provide a description of the model
Given the following machine learning model name: Curvature Regularized Variational Auto-Encoder, provide a description of the model
Given the following machine learning model name: Off-Diagonal Orthogonal Regularization, provide a description of the model
**Off-Diagonal Orthogonal Regularization** is a modified form of [orthogonal regularization](https://paperswithcode.com/method/orthogonal-regularization) used in [BigGAN](https://paperswithcode.com/method/biggan). The original orthogonal regularization is known to be limiting, so the authors explore several variants designed to relax the constraint while still imparting the desired smoothness to the models. They opt for a modification where they remove diagonal terms from the regularization, aiming to minimize the pairwise cosine similarity between filters without constraining their norm: $$ R\_{\beta}\left(W\right) = \beta|| W^{T}W \odot \left(\mathbf{1}-I\right) ||^{2}\_{F} $$ where $\mathbf{1}$ denotes a matrix with all elements set to 1. The authors sweep $\beta$ values and select $10^{-4}$.
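The penalty is a one-liner on the filter matrix $W$ (columns as filters); a NumPy sketch:

```python
import numpy as np

def off_diag_ortho_reg(W, beta=1e-4):
    """R_beta(W) = beta * ||W^T W ⊙ (1 - I)||_F^2.
    Penalizes pairwise similarity between filters (columns of W)
    while leaving each filter's norm unconstrained."""
    gram = W.T @ W
    off_diag = gram * (1.0 - np.eye(gram.shape[0]))
    return beta * np.sum(off_diag ** 2)
```

Orthogonal filters incur zero penalty (`off_diag_ortho_reg(np.eye(3))` is `0.0`), while correlated filters are penalized regardless of their norms.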
Given the following machine learning model name: Style Transfer Module, provide a description of the model
A module used in [GAN](https://paperswithcode.com/method/gan)-based style transfer.
Given the following machine learning model name: LARS, provide a description of the model
**Layer-wise Adaptive Rate Scaling**, or **LARS**, is a large batch optimization technique. There are two notable differences between LARS and other adaptive algorithms such as [Adam](https://paperswithcode.com/method/adam) or [RMSProp](https://paperswithcode.com/method/rmsprop): first, LARS uses a separate learning rate for each layer and not for each weight. And second, the magnitude of the update is controlled with respect to the weight norm for better control of training speed. $$m\_{t} = \beta\_{1}m\_{t-1} + \left(1-\beta\_{1}\right)\left(g\_{t} + \lambda{x\_{t}}\right)$$ $$x\_{t+1}^{\left(i\right)} = x\_{t}^{\left(i\right)} - \eta\_{t}\frac{\phi\left(|| x\_{t}^{\left(i\right)} ||\right)}{|| m\_{t}^{\left(i\right)} || }m\_{t}^{\left(i\right)} $$
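The two update equations map to a short per-layer step; a sketch in which $\phi$ is taken as the identity and the hyperparameters are illustrative:

```python
import numpy as np

def lars_step(x, m, grad, lr=0.01, beta1=0.9, weight_decay=1e-4, eps=1e-9):
    """One LARS update for a single layer's weights x:
    m_t    = beta1 * m + (1 - beta1) * (grad + weight_decay * x)
    x_t+1  = x - lr * phi(||x||) / ||m_t|| * m_t   (phi = identity here)
    The trust ratio ||x|| / ||m_t|| gives each layer its own effective rate."""
    m = beta1 * m + (1 - beta1) * (grad + weight_decay * x)
    trust = np.linalg.norm(x) / (np.linalg.norm(m) + eps)
    x = x - lr * trust * m
    return x, m

x, m = lars_step(x=np.ones(4), m=np.zeros(4), grad=np.full(4, 0.5))
```

In a full optimizer this step runs once per layer per iteration, with `x` and `m` that layer's weights and momentum buffer.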
Given the following machine learning model name: DAFNe, provide a description of the model
**DAFNe** is a dense one-stage anchor-free deep model for oriented object detection. It is a deep neural network that performs predictions on a dense grid over the input image, being architecturally simpler in design, as well as easier to optimize than its two-stage counterparts. Furthermore, it reduces the prediction complexity by refraining from employing bounding box anchors. This enables a tighter fit to oriented objects, leading to a better separation of bounding boxes especially in case of dense object distributions. Moreover, it introduces an orientation-aware generalization of the center-ness function to arbitrary quadrilaterals that takes into account the object's orientation and that, accordingly, accurately down-weights low-quality predictions.
Given the following machine learning model name: MelGAN Residual Block, provide a description of the model
The **MelGAN Residual Block** is a convolutional [residual block](https://paperswithcode.com/method/residual-block) used in the [MelGAN](https://paperswithcode.com/method/melgan) generative audio architecture. It employs residual connections with dilated convolutions. Dilations are used so that temporally far output activations of each subsequent layer have significantly overlapping inputs. The receptive field of a stack of [dilated convolution](https://paperswithcode.com/method/dilated-convolution) layers increases exponentially with the number of layers. Incorporating these into the MelGAN generator allows us to efficiently increase the induced receptive field of each output time-step. This effectively implies larger overlap in the induced receptive fields of far apart time-steps, leading to better long-range correlation.
Given the following machine learning model name: Instance Colouring Stick-Breaking Process, provide a description of the model
Given the following machine learning model name: BART, provide a description of the model
**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).
Given the following machine learning model name: Informative Sample Mining Network, provide a description of the model
**Informative Sample Mining Network** is a multi-stage sample training scheme for GANs to reduce sample hardness while preserving sample informativeness. Adversarial Importance Weighting is proposed to select informative samples and assign them greater weight. The authors also propose Multi-hop Sample Training to avoid the potential problems in model training caused by sample mining. Based on the principle of divide-and-conquer, the authors produce target images by multiple hops, which means the image translation is decomposed into several separated steps.
Given the following machine learning model name: YellowFin, provide a description of the model
**YellowFin** is a learning rate and momentum tuner motivated by robustness properties and an analysis of quadratic objectives. It stems from a known but obscure fact: the momentum operator's spectral radius is constant in a large subset of the hyperparameter space. For quadratic objectives, the optimizer tunes both the learning rate and the momentum to keep the hyperparameters within a region in which the convergence rate is constant and equal to the square root of the momentum. This notion is extended empirically to non-convex objectives. On every iteration, YellowFin chooses the hyperparameters that minimize a local quadratic approximation of the objective.
Given the following machine learning model name: AutoML-Zero, provide a description of the model
**AutoML-Zero** is an AutoML technique that aims to search a fine-grained space simultaneously for the model, optimization procedure, initialization, and so on, permitting much less human design and even allowing the discovery of non-neural network algorithms. It represents ML algorithms as computer programs comprised of three component functions, Setup, Predict, and Learn, that perform initialization, prediction and learning. The instructions in these functions apply basic mathematical operations on a small memory. The operation and memory addresses used by each instruction are free parameters in the search space, as is the size of the component functions. While this reduces expert design, the consequent sparsity means that [random search](https://paperswithcode.com/method/random-search) cannot make enough progress. To overcome this difficulty, the authors use small proxy tasks and migration techniques to build an optimized infrastructure capable of searching through 10,000 models/second/cpu core. Evolutionary methods can find solutions in the AutoML-Zero search space despite its enormous size and sparsity. The authors show that by randomly modifying the programs and periodically selecting the best performing ones on given tasks/datasets, AutoML-Zero discovers reasonable algorithms. Starting from empty programs and using data labeled by “teacher” neural networks with random weights, they demonstrate that evolution can discover neural networks trained by gradient descent. Following this, they minimize bias toward known algorithms by switching to binary classification tasks extracted from CIFAR-10 and allowing a larger set of possible operations. This discovers interesting techniques like multiplicative interactions, normalized gradients and weight averaging. Finally, they show it is possible for evolution to adapt the algorithm to the type of task provided.
For example, [dropout](https://paperswithcode.com/method/dropout)-like operations emerge when the task needs regularization and learning rate decay appears when the task requires faster convergence.
Given the following machine learning model name: weighted finite state transducer, provide a description of the model
Given the following machine learning model name: MnasNet, provide a description of the model
**MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile [neural architecture search](https://paperswithcode.com/method/neural-architecture-search), which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. The main building block is an [inverted residual block](https://paperswithcode.com/method/inverted-residual-block) (from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2)).
Given the following machine learning model name: Knowledge Distillation, provide a description of the model
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel. Source: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)
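A common concrete form of this compression trains the single model on the ensemble's temperature-softened outputs alongside the hard labels. This is a hedged sketch; the temperature, weighting, and toy logits are illustrative choices:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """alpha-weighted sum of soft cross-entropy against the teacher's
    temperature-softened distribution and hard cross-entropy against
    the true label."""
    soft_ce = -np.sum(softmax(teacher_logits, T)
                      * np.log(softmax(student_logits, T) + 1e-12))
    hard_ce = -np.log(softmax(student_logits)[label] + 1e-12)
    # T^2 keeps soft-target gradient magnitudes comparable across temperatures.
    return alpha * T ** 2 * soft_ce + (1 - alpha) * hard_ce

loss = distillation_loss(np.array([2.0, 0.5, 0.1]),
                         np.array([3.0, 0.2, 0.0]), label=0)
```

The temperature `T > 1` softens the teacher's distribution so the student also learns the relative probabilities the teacher assigns to incorrect classes.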
Given the following machine learning model name: LayoutReader, provide a description of the model
**LayoutReader** is a sequence-to-sequence model for reading order detection that uses both textual and layout information, where the layout-aware language model [LayoutLM](https://paperswithcode.com/method/layoutlmv2) is leveraged as an encoder. The generation step in the encoder-decoder structure is modified to generate the reading order sequence. In the encoding stage, LayoutReader packs the pair of source and target segments into a contiguous input sequence of LayoutLM and carefully designs the [self-attention mask](https://paperswithcode.com/methods/category/factorized-attention) to control the visibility between tokens. As shown in the Figure, LayoutReader allows the tokens in the source segment to attend to each other while preventing the tokens in the target segment from attending to the rightward context. If 1 means allowing and 0 means preventing, the detail of the mask $M$ is as follows: $$ M\_{i, j}= \begin{cases}1, & \text { if } i<j \text { or } i, j \in \operatorname{src} \\ 0, & \text { otherwise }\end{cases} $$ where $i, j$ are the indices in the packed input sequence, so they may be from source or target segments; $i, j \in$ src means both tokens are from the source segment. In the decoding stage, since the source and target are reordered sequences, the prediction candidates can be constrained to the source segment. Therefore, we ask the model to predict the indices in the source sequence. The probability is calculated as follows: $$ \mathcal{P}\left(x\_{k}=i \mid x\_{<k}\right)=\frac{\exp \left(e\_{i}^{T} h\_{k}+b\_{k}\right)}{\sum\_{j} \exp \left(e\_{j}^{T} h\_{k}+b\_{k}\right)} $$ where $i$ is an index in the source segment; $e\_{i}$ and $e\_{j}$ are the $i$-th and $j$-th input embeddings of the source segment; $h\_{k}$ is the hidden state at the $k$-th time step; $b\_{k}$ is the bias at the $k$-th time step.
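The visibility rule can be written out explicitly. This sketch uses the convention that entry $(q, k)$ says whether query token $q$ may attend to key token $k$ (source sees only source; targets see the source plus their leftward context), with toy segment lengths:

```python
import numpy as np

def layoutreader_mask(src_len, tgt_len):
    """Attention mask over the packed [source; target] sequence.
    M[q, k] = 1 iff query q may attend to key k: any source key, or
    (for target queries) a key at or before position q."""
    n = src_len + tgt_len
    M = np.zeros((n, n), dtype=int)
    for q in range(n):
        for k in range(n):
            if k < src_len or k <= q:
                M[q, k] = 1
    return M

M = layoutreader_mask(src_len=2, tgt_len=2)
```

Because source tokens precede target tokens in the packed sequence, the single condition `k < src_len or k <= q` yields fully visible attention within the source and causal attention within the target.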
Given the following machine learning model name: Residual Shuffle-Exchange Network, provide a description of the model
**Residual Shuffle-Exchange Network** is an efficient alternative to models using an attention mechanism that allows the modelling of long-range dependencies in sequences in O(n log n) time. This model achieved state-of-the-art performance on the MusicNet dataset for music transcription while being able to run inference on a single GPU fast enough to be suitable for real-time audio processing.
Given the following machine learning model name: Meta Reward Learning, provide a description of the model
**Meta Reward Learning (MeRL)** is a meta-learning method for the problem of learning from sparse and underspecified rewards. For example, an agent receives a complex input, such as a natural language instruction, and needs to generate a complex response, such as an action sequence, while only receiving binary success-failure feedback. The key insight of MeRL in dealing with underspecified rewards is that spurious trajectories and programs that achieve accidental success are detrimental to the agent's generalization performance. For example, an agent might be able to solve a specific instance of a maze problem. However, if it learns to perform spurious actions during training, it is likely to fail when provided with unseen instructions. To mitigate this issue, MeRL optimizes a more refined auxiliary reward function, which can differentiate between accidental and purposeful success based on features of action trajectories. The auxiliary reward is optimized by maximizing the trained agent's performance on a hold-out validation set via meta learning.
Given the following machine learning model name: Simple Neural Attention Meta-Learner, provide a description of the model
The **Simple Neural Attention Meta-Learner**, or **SNAIL**, combines the benefits of temporal convolutions and attention to solve meta-learning tasks. It introduces positional dependence through temporal convolutions to make the model applicable to reinforcement learning tasks, where the observations, actions, and rewards are intrinsically sequential. It also introduces attention in order to provide pinpoint access over an infinitely large context. SNAIL is constructed by combining the two: temporal convolutions produce the context over which a causal attention operation is applied.
Given the following machine learning model name: PANet, provide a description of the model
**Path Aggregation Network**, or **PANet**, aims to boost information flow in a proposal-based instance segmentation framework. Specifically, the feature hierarchy is enhanced with accurate localization signals in lower layers by [bottom-up path augmentation](https://paperswithcode.com/method/bottom-up-path-augmentation), which shortens the information path between lower layers and topmost feature. Additionally, [adaptive feature pooling](https://paperswithcode.com/method/adaptive-feature-pooling) is employed, which links feature grid and all feature levels to make useful information in each feature level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction.
Given the following machine learning model name: Dynamic R-CNN, provide a description of the model
**Dynamic R-CNN** is an object detection method that adjusts the label assignment criteria (IoU threshold) and the shape of regression loss function (parameters of Smooth L1 Loss) automatically based on the statistics of proposals during training. The motivation is that in previous two-stage object detectors, there is an inconsistency problem between the fixed network settings and the dynamic training procedure. For example, the fixed label assignment strategy and regression loss function cannot fit the distribution change of proposals and thus are harmful to training high quality detectors. It consists of two components: Dynamic Label Assignment and Dynamic Smooth L1 Loss, which are designed for the classification and regression branches, respectively. For Dynamic Label Assignment, we want our model to be discriminative for high IoU proposals, so we gradually adjust the IoU threshold for positive/negative samples based on the proposals distribution in the training procedure. Specifically, we set the threshold as the IoU of the proposal at a certain percentage since it can reflect the quality of the overall distribution. For Dynamic Smooth L1 Loss, we want to change the shape of the regression loss function to adaptively fit the distribution change of error and ensure the contribution of high quality samples to training. This is achieved by adjusting the $\beta$ in Smooth L1 Loss based on the error distribution of the regression loss function, in which $\beta$ actually controls the magnitude of the gradient of small errors.
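The Dynamic Smooth L1 idea (choosing $\beta$ from the current error statistics) can be sketched as follows. The percentile choice and toy errors are illustrative stand-ins for the paper's exact update rule:

```python
import numpy as np

def smooth_l1(x, beta):
    """Smooth L1 loss: quadratic below beta, linear above it."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def dynamic_beta(reg_errors, percentile=50):
    """Set beta from the regression-error distribution so that, as
    proposals improve and errors shrink, high-quality samples keep
    contributing meaningful gradients."""
    return float(np.percentile(np.abs(reg_errors), percentile))

errors = np.array([0.02, 0.05, 0.1, 0.4, 0.8])
beta = dynamic_beta(errors)
loss = float(smooth_l1(errors, beta).mean())
```

The analogous Dynamic Label Assignment step would similarly recompute the positive/negative IoU threshold from a percentile of the proposals' IoU distribution each iteration.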
Given the following machine learning model name: Dual Contrastive Learning, provide a description of the model
Contrastive learning has achieved remarkable success in representation learning via self-supervision in unsupervised settings. However, effectively adapting contrastive learning to supervised learning tasks remains a challenge in practice. In this work, we introduce a dual contrastive learning (DualCL) framework that simultaneously learns the features of input samples and the parameters of classifiers in the same space. Specifically, DualCL regards the parameters of the classifiers as augmented samples associated with different labels and then exploits the contrastive learning between the input samples and the augmented samples. Empirical studies on five benchmark text classification datasets and their low-resource versions demonstrate the improvement in classification accuracy and confirm the capability of DualCL to learn discriminative representations.
Given the following machine learning model name: Multi-modal Teacher for Masked Modality Learning, provide a description of the model
Given the following machine learning model name: Progressive Neural Architecture Search, provide a description of the model
**Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pruning out unpromising structures as we go. At iteration $b$ of the algorithm, we have a set of $K$ candidate cells (each of size $b$ blocks), which we train and evaluate on a dataset of interest. Since this process is expensive, PNAS also learns a model or surrogate function which can predict the performance of a structure without needing to train it. We then expand the $K$ candidates of size $b$ into $K' \gg K$ children, each of size $b+1$. The surrogate function is used to rank all of the $K'$ children, pick the top $K$, and then train and evaluate them. We continue in this way until $b=B$, which is the maximum number of blocks we want to use in a cell.
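The progressive loop can be sketched with toy stand-ins. Here `train_and_eval` is a synthetic stand-in for expensive training, the "surrogate" is a noisy copy of it rather than a learned predictor, and the block vocabulary, `K`, and `B` are illustrative:

```python
import random

rng = random.Random(0)
BLOCKS = list(range(5))   # hypothetical block vocabulary
K, B = 4, 3               # beam width K and max blocks per cell B

def train_and_eval(cell):
    """Toy stand-in for expensive training: reward uses of block 2."""
    return sum(blk == 2 for blk in cell) / len(cell)

def surrogate(cell):
    """Toy stand-in for the learned performance predictor."""
    return train_and_eval(cell) + rng.uniform(-0.05, 0.05)

candidates = [(blk,) for blk in BLOCKS]          # b = 1: all 1-block cells
for b in range(2, B + 1):
    top = sorted(candidates, key=train_and_eval, reverse=True)[:K]
    children = [c + (blk,) for c in top for blk in BLOCKS]   # expand K -> K'
    # Rank the K' children cheaply with the surrogate, keep the top K.
    candidates = sorted(children, key=surrogate, reverse=True)[:K]

best = max(candidates, key=train_and_eval)
```

In real PNAS the surrogate is fit on the (structure, accuracy) pairs observed so far, so only the top-$K$ cells at each size ever pay the full training cost.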
Given the following machine learning model name: Group Decreasing Network, provide a description of the model
**Group Decreasing Network**, or **GroupDNet**, is a type of convolutional neural network for multi-modal image synthesis. GroupDNet contains one encoder and one decoder. Inspired by the idea of [VAE](https://paperswithcode.com/method/vae) and SPADE, the encoder $E$ produces a latent code $Z$ that is supposed to follow a Gaussian distribution $\mathcal{N}(0,1)$ during training. While testing, the encoder $E$ is discarded. A randomly sampled code from the Gaussian distribution substitutes for $Z$. To fulfill this, the re-parameterization trick is used to enable a differentiable loss function during training. Specifically, the encoder predicts a mean vector and a variance vector through two fully connected layers to represent the encoded distribution. The gap between the encoded distribution and Gaussian distribution can be minimized by imposing a KL-divergence loss.
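The re-parameterization trick and KL term mentioned above look like this in isolation. This is a generic VAE sketch, not GroupDNet's actual grouped-convolution encoder, and the toy vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps keeps the sample differentiable w.r.t.
    the encoder's predicted mean and (log-)variance vectors."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

mu, log_var = np.array([0.3, -0.2]), np.array([0.1, -0.1])
z = reparameterize(mu, log_var)
kl = kl_to_standard_normal(mu, log_var)
```

Minimizing the KL term pushes the encoded distribution toward $\mathcal{N}(0, 1)$, which is what makes it valid to replace $Z$ with a random standard-normal sample at test time.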
Given the following machine learning model name: Synergistic Image and Feature Alignment, provide a description of the model
**Synergistic Image and Feature Alignment** (SIFA) is an unsupervised domain adaptation framework that conducts synergistic alignment of domains from both image and feature perspectives. In SIFA, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features by leveraging adversarial learning in multiple aspects and with a deeply supervised mechanism. The feature encoder is shared between both adaptive perspectives to leverage their mutual benefits via end-to-end learning.
Given the following machine learning model name: Corner Pooling, provide a description of the model
**Corner Pooling** is a pooling technique for object detection that seeks to better localize corners by encoding explicit prior knowledge. Suppose we want to determine if a pixel at location $\left(i, j\right)$ is a top-left corner. Let $f\_{t}$ and $f\_{l}$ be the feature maps that are the inputs to the top-left corner pooling layer, and let $f\_{t\_{ij}}$ and $f\_{l\_{ij}}$ be the vectors at location $\left(i, j\right)$ in $f\_{t}$ and $f\_{l}$ respectively. With $H \times W$ feature maps, the corner pooling layer first max-pools all feature vectors between $\left(i, j\right)$ and $\left(i, H\right)$ in $f\_{t}$ to a feature vector $t\_{ij}$ , and max-pools all feature vectors between $\left(i, j\right)$ and $\left(W, j\right)$ in $f\_{l}$ to a feature vector $l\_{ij}$. Finally, it adds $t\_{ij}$ and $l\_{ij}$ together.
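Written directly from the description, with the row index scanning downward and the column index rightward; this is a plain NumPy sketch rather than the paper's efficient recursive implementation:

```python
import numpy as np

def top_left_corner_pool(f_t, f_l):
    """out[i, j] = (max of f_t over the column at-and-below (i, j))
                 + (max of f_l over the row at-and-right-of (i, j))."""
    H, W = f_t.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = f_t[i:, j].max() + f_l[i, j:].max()
    return out

f_t = np.array([[1.0, 2.0], [3.0, 0.0]])
f_l = np.array([[0.0, 4.0], [5.0, 1.0]])
pooled = top_left_corner_pool(f_t, f_l)
```

Scanning toward the bottom and the right is what lets a top-left corner location see evidence from the object's interior; bottom-right corner pooling mirrors the scan directions.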
Given the following machine learning model name: Aggregated Learning, provide a description of the model
**Aggregated Learning** (AgrLearn) is a vector-quantization approach to learning neural network classifiers. It builds on an equivalence between information bottleneck (IB) learning and IB quantization, and exploits the power of vector quantization, which is well known in information theory.
Given the following machine learning model name: Bottom-up Path Augmentation, provide a description of the model
**Bottom-up Path Augmentation** is a feature extraction technique that seeks to shorten the information path and enhance a feature pyramid with accurate localization signals existing in low levels. This is based on the fact that high response to edges or instance parts is a strong indicator to accurately localize instances. Each building block takes a higher resolution feature map $N\_{i}$ and a coarser map $P\_{i+1}$ through lateral connection and generates the new feature map $N\_{i+1}$. Each feature map $N\_{i}$ first goes through a $3 \times 3$ convolutional layer with stride $2$ to reduce the spatial size. Then each element of feature map $P\_{i+1}$ and the down-sampled map are added through lateral connection. The fused feature map is then processed by another $3 \times 3$ convolutional layer to generate $N\_{i+1}$ for following sub-networks. This is an iterative process and terminates after approaching $P\_{5}$. In these building blocks, we consistently use feature maps with 256 channels. The feature grid for each proposal is then pooled from the new feature maps, i.e., {$N\_{2}$, $N\_{3}$, $N\_{4}$, $N\_{5}$}.
Given the following machine learning model name: Spatial Group-wise Enhance, provide a description of the model
**Spatial Group-wise Enhance** is a module for convolutional neural networks that can adjust the importance of each sub-feature by generating an attention factor for each spatial location in each semantic group, so that every individual group can autonomously enhance its learnt expression and suppress possible noise. Inside each feature group, a spatial enhancement mechanism scales the feature vectors over all locations with an attention mask. This attention mask is designed to suppress the possible noise and highlight the correct semantic feature regions. Different from other popular attention methods, it utilises the similarity between the global statistical feature and the local ones of each location as the source of generation for the attention masks.
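For a single semantic group this can be sketched as below. The normalization constants are illustrative, and the learnable per-group scale and shift from the paper are omitted:

```python
import numpy as np

def sge_group(x, eps=1e-5):
    """x: (n, c) feature vectors at n spatial locations of one group.
    The attention source is the similarity between each local vector
    and the group's global average feature."""
    g = x.mean(axis=0)                             # global statistical feature
    sim = x @ g                                    # similarity per location
    sim = (sim - sim.mean()) / (sim.std() + eps)   # normalize over locations
    mask = 1.0 / (1.0 + np.exp(-sim))              # sigmoid attention mask
    return x * mask[:, None]                       # rescale each location

x = np.random.default_rng(0).standard_normal((16, 8))
out = sge_group(x)
```

Locations whose vectors agree with the group's global feature keep a mask near 1, while dissimilar (likely noisy) locations are suppressed toward 0.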
Given the following machine learning model name: Cycle-CenterNet, provide a description of the model
**Cycle-CenterNet** is a table structure parsing approach built on [CenterNet](https://paperswithcode.com/method/centernet) that uses a cycle-pairing module to simultaneously detect and group tabular cells into structured tables. It also utilizes a pairing loss which enables the grouping of discrete cells into the structured tables.
Given the following machine learning model name: Affine Operator, provide a description of the model
The **Affine Operator** is an affine transformation layer introduced in the [ResMLP](https://paperswithcode.com/method/resmlp) architecture. It replaces [layer normalization](https://paperswithcode.com/method/layer-normalization), as used in [Transformer based networks](https://paperswithcode.com/methods/category/transformers), which is possible since the ResMLP has no [self-attention layers](https://paperswithcode.com/method/scaled), making training more stable and allowing a simpler affine transformation. The affine operator is defined as: $$ \operatorname{Aff}_{\mathbf{\alpha}, \mathbf{\beta}}(\mathbf{x})=\operatorname{Diag}(\mathbf{\alpha}) \mathbf{x}+\mathbf{\beta} $$ where $\alpha$ and $\beta$ are learnable weight vectors. This operation only rescales and shifts the input element-wise. It has several advantages over other normalization operations: first, as opposed to Layer Normalization, it has no cost at inference time, since it can be absorbed into the adjacent linear layer. Second, as opposed to [BatchNorm](https://paperswithcode.com/method/batch-normalization) and Layer Normalization, the Aff operator does not depend on batch statistics.
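A sketch of the operator and of why it is free at inference: it folds exactly into the following linear layer (the vectors and weight matrix below are toy values):

```python
import numpy as np

def aff(x, alpha, beta):
    """Aff_{alpha,beta}(x) = Diag(alpha) x + beta: per-channel rescale
    and shift, with no dependence on batch statistics."""
    return alpha * x + beta

x = np.array([1.0, 2.0, 3.0])
alpha = np.array([2.0, 0.5, 1.0])
beta = np.array([0.0, 1.0, -1.0])
W = np.arange(9.0).reshape(3, 3)

# Folding: W @ aff(x) == (W * alpha) @ x + W @ beta, so alpha and beta
# can be absorbed into the adjacent linear layer's weights and bias.
fused_W, fused_b = W * alpha, W @ beta
```

After fusing, inference uses `fused_W` and `fused_b` directly and the Aff layer disappears.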
Given the following machine learning model name: Linear Warmup With Cosine Annealing, provide a description of the model
**Linear Warmup With Cosine Annealing** is a learning rate schedule where we increase the learning rate linearly for $n$ updates and then anneal according to a cosine schedule afterwards.
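A minimal sketch of the schedule (function name and parameters are illustrative): the learning rate rises linearly to `base_lr` over the warmup steps, then follows a half-cosine down to `min_lr`.

```python
import math

def lr_at_step(step, total_steps, warmup_steps, base_lr, min_lr=0.0):
    """Linear warmup for `warmup_steps` updates, then cosine annealing
    from base_lr down to min_lr over the remaining steps."""
    if step < warmup_steps:
        # linear ramp: reaches base_lr at the last warmup step
        return base_lr * (step + 1) / warmup_steps
    # cosine decay over the remaining updates
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

schedule = [lr_at_step(s, total_steps=100, warmup_steps=10, base_lr=0.1)
            for s in range(100)]
```

Deep learning frameworks typically ship both pieces separately (e.g. a warmup wrapper around a cosine annealing scheduler), so in practice this is composed rather than hand-written.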
Given the following machine learning model name: CodeSLAM, provide a description of the model
CodeSLAM represents the 3D geometry of a scene using the latent space of a variational autoencoder. The depth thus becomes a function of the RGB image and the unknown code, $D = G_\theta(I,c)$. During training, the weights of the network $G_\theta$ are learnt by training the generator and encoder on a standard autoencoding task. At test time, the code $c$ and the poses of the images are found by optimizing the reprojection error over multiple images.
Given the following machine learning model name: FuseFormer, provide a description of the model
**FuseFormer** is a [Transformer](https://paperswithcode.com/method/transformer)-based model designed for video inpainting via fine-grained feature fusion based on novel [Soft Split and Soft Composition](https://paperswithcode.com/method/soft-split-and-soft-composition) operations. The soft split divides feature map into many patches with given overlapping interval while the soft composition stitches them back into a whole feature map where pixels in overlapping regions are summed up. FuseFormer builds soft composition and soft split into its [feedforward network](https://paperswithcode.com/method/feedforward-network) for further enhancing subpatch level feature fusion.
Given the following machine learning model name: Ape-X DQN, provide a description of the model
**Ape-X DQN** is a variant of a [DQN](https://paperswithcode.com/method/dqn) with some components of [Rainbow-DQN](https://paperswithcode.com/method/rainbow-dqn) that utilizes distributed [prioritized experience replay](https://paperswithcode.com/method/prioritized-experience-replay) through the [Ape-X](https://paperswithcode.com/method/ape-x) architecture.
Given the following machine learning model name: PP-OCR, provide a description of the model
**PP-OCR** is an OCR system that consists of three parts: text detection, detected box rectification and text recognition. The purpose of text detection is to locate the text area in the image. In PP-OCR, Differentiable Binarization (DB) is used as the text detector, which is based on a simple segmentation network. The text recognizer integrates feature extraction and sequence modeling, and adopts the Connectionist Temporal Classification (CTC) loss to avoid the inconsistency between prediction and label.
Given the following machine learning model name: CenterPoint, provide a description of the model
**CenterPoint** is a two-stage 3D detector that finds centers of objects and their properties using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation and velocity. In a second stage, it refines these estimates using additional point features on the object. CenterPoint uses a standard Lidar-based backbone network, i.e., VoxelNet or PointPillars, to build a representation of the input point cloud. CenterPoint predicts the relative offset (velocity) of objects between consecutive frames, which are then linked up greedily -- so in CenterPoint, 3D object tracking simplifies to greedy closest-point matching.
Given the following machine learning model name: Shifted Rectified Linear Unit, provide a description of the model
The **Shifted Rectified Linear Unit**, or **ShiLU**, is a modification of the **[ReLU](https://paperswithcode.com/method/relu)** activation function with trainable parameters: $$\text{ShiLU}(x) = \alpha \cdot \text{ReLU}(x) + \beta$$ where $\alpha$ and $\beta$ are trainable.
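A minimal numpy sketch of the activation (in practice $\alpha$ and $\beta$ would be trainable framework parameters, not plain floats):

```python
import numpy as np

def shilu(x, alpha, beta):
    """ShiLU(x) = alpha * ReLU(x) + beta, with trainable alpha and beta."""
    return alpha * np.maximum(x, 0.0) + beta

x = np.array([-2.0, 0.0, 3.0])
y = shilu(x, alpha=2.0, beta=-1.0)
# negative inputs collapse to beta; positive inputs are scaled and shifted:
# [-1.0, -1.0, 5.0]
```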
Given the following machine learning model name: Selective Kernel Convolution, provide a description of the model
A **Selective Kernel Convolution** is a [convolution](https://paperswithcode.com/method/convolution) that enables neurons to adaptively adjust their receptive field (RF) sizes among multiple kernels with different kernel sizes. Specifically, the SK convolution has three operators: Split, Fuse and Select. Multiple branches with different kernel sizes are fused using [softmax](https://paperswithcode.com/method/softmax) attention that is guided by the information in these branches. Different attentions on these branches yield different sizes of the effective receptive fields of neurons in the fusion layer.
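A minimal sketch of the Select step (function names are illustrative; the Fuse step that produces the attention logits from the pooled features is omitted here): per-channel softmax attention over the branches, so the weights for each channel sum to one across kernel sizes.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sk_select(branches, logits):
    """Select step of SK convolution.
    branches: (n_branches, C) per-branch channel descriptors.
    logits:   (n_branches, C) attention logits from the Fuse step.
    Returns the attention-weighted sum over branches."""
    a = softmax(logits, axis=0)   # sums to 1 across branches, per channel
    return (a * branches).sum(axis=0)

# two branches (e.g. 3x3 and 5x5 kernels), 2 channels, equal logits
branches = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
fused = sk_select(branches, np.zeros((2, 2)))
# equal logits -> each branch weighted 0.5 -> [2.0, 3.0]
```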
Given the following machine learning model name: Tree Ensemble to Rules, provide a description of the model
A method to convert a Tree Ensemble model into a Rule list. This makes the AI model more transparent.
Given the following machine learning model name: ResNeXt-Elastic, provide a description of the model
**ResNeXt-Elastic** is a convolutional neural network that is a modification of a [ResNeXt](https://paperswithcode.com/method/resnext) with elastic blocks (extra upsampling and downsampling).
Given the following machine learning model name: Compact Convolutional Transformers, provide a description of the model
**Compact Convolutional Transformers** utilize sequence pooling and replace the patch embedding with a convolutional embedding, allowing for better inductive bias and making positional embeddings optional. CCT achieves better accuracy than ViT-Lite (smaller ViTs) and increases the flexibility of the input parameters.
Given the following machine learning model name: Max Pooling, provide a description of the model
**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. Image Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)
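A minimal numpy sketch of 2x2 max pooling on a single-channel feature map (a loop implementation for clarity; frameworks use vectorized kernels):

```python
import numpy as np

def max_pool2d(x, k=2, stride=2):
    """Max pooling: take the maximum over each k x k patch of a 2-D map."""
    h, w = x.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i*stride:i*stride+k, j*stride:j*stride+k].max()
    return out

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 0],
              [7, 2, 8, 3],
              [1, 0, 4, 9]], dtype=float)
pooled = max_pool2d(x)
# 2x2 pooling with stride 2 halves each spatial dimension:
# [[6, 5], [7, 9]]
```

Shifting the input by a pixel often leaves many pooled values unchanged, which is the small translation invariance mentioned above.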
Given the following machine learning model name: Hierarchical BiLSTM Max Pooling, provide a description of the model
HBMP is a hierarchy-like structure of [BiLSTM](https://paperswithcode.com/method/bilstm) layers with [max pooling](https://paperswithcode.com/method/max-pooling). All in all, this model improves upon the previous state of the art on SciTail and achieves strong results on SNLI and MultiNLI.
Given the following machine learning model name: Causal inference, provide a description of the model
Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.
Given the following machine learning model name: Conditional Variational Auto Encoder, provide a description of the model
A **Conditional Variational Autoencoder** (CVAE) extends the [variational autoencoder](https://paperswithcode.com/method/vae) by conditioning both the encoder and the decoder on auxiliary information, such as a class label. This makes the latent variable model generate samples for a specified condition rather than unconditionally.
Given the following machine learning model name: Synaptic Neural Network, provide a description of the model
A Synaptic Neural Network (SynaNN) consists of synapses and neurons. Inspired by the synapse research of neuroscience, we built a synapse model with a nonlinear and log-concave synapse function of excitatory and inhibitory probabilities of channels.
Given the following machine learning model name: Adaptively Spatial Feature Fusion, provide a description of the model
**ASFF**, or **Adaptively Spatial Feature Fusion**, is a method for pyramidal feature fusion. It learns the way to spatially filter conflicting information to suppress inconsistency across different feature scales, thus improving the scale-invariance of features. ASFF enables the network to directly learn how to spatially filter features at other levels so that only useful information is kept for combination. For the features at a certain level, features of other levels are first integrated and resized into the same resolution and then trained to find the optimal fusion. At each spatial location, features at different levels are fused adaptively, *i.e.*, some features may be filtered out as they carry contradictory information at this location and some may dominate with more discriminative clues. ASFF offers several advantages: (1) as the operation of searching the optimal fusion is differentiable, it can be conveniently learned in back-propagation; (2) it is agnostic to the backbone model and it is applied to single-shot detectors that have a feature pyramid structure; and (3) its implementation is simple and the increased computational cost is marginal. Let $\mathbf{x}_{ij}^{n\rightarrow l}$ denote the feature vector at the position $(i,j)$ on the feature maps resized from level $n$ to level $l$. Following a feature resizing stage, we fuse the features at the corresponding level $l$ as follows: $$ \mathbf{y}\_{ij}^l = \alpha^l_{ij} \cdot \mathbf{x}\_{ij}^{1\rightarrow l} + \beta^l_{ij} \cdot \mathbf{x}\_{ij}^{2\rightarrow l} +\gamma^l\_{ij} \cdot \mathbf{x}\_{ij}^{3\rightarrow l}, $$ where $\mathbf{y}\_{ij}^l$ implies the $(i,j)$-th vector of the output feature maps $\mathbf{y}^l$ among channels. $\alpha^l\_{ij}$, $\beta^l\_{ij}$ and $\gamma^l\_{ij}$ refer to the spatial importance weights for the feature maps at three different levels to level $l$, which are adaptively learned by the network. 
Note that $\alpha^l\_{ij}$, $\beta^l\_{ij}$ and $\gamma^l\_{ij}$ can be simple scalar variables, which are shared across all the channels. Inspired by ACNet, we force $\alpha^l\_{ij}+\beta^l\_{ij}+\gamma^l\_{ij}=1$ and $\alpha^l\_{ij},\beta^l\_{ij},\gamma^l\_{ij} \in [0,1]$, and $$ \alpha^l_{ij} = \frac{e^{\lambda^l\_{\alpha\_{ij}}}}{e^{\lambda^l\_{\alpha_{ij}}} + e^{\lambda^l\_{\beta_{ij} }} + e^{\lambda^l\_{\gamma_{ij}}}}. $$ Here $\alpha^l\_{ij}$, $\beta^l\_{ij}$ and $\gamma^l\_{ij}$ are defined by using the [softmax](https://paperswithcode.com/method/softmax) function with $\lambda^l\_{\alpha_{ij}}$, $\lambda^l\_{\beta_{ij}}$ and $\lambda^l\_{\gamma_{ij}}$ as control parameters respectively. We use $1\times1$ [convolution](https://paperswithcode.com/method/convolution) layers to compute the weight scalar maps $\mathbf{\lambda}^l_\alpha$, $\mathbf{\lambda}^l\_\beta$ and $\mathbf{\lambda}^l\_\gamma$ from $\mathbf{x}^{1\rightarrow l}$, $\mathbf{x}^{2\rightarrow l}$ and $\mathbf{x}^{3\rightarrow l}$ respectively, and they can thus be learned through standard back-propagation. With this method, the features at all the levels are adaptively aggregated at each scale. The outputs are used for object detection following the same pipeline of [YOLOv3](https://paperswithcode.com/method/yolov3).
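The fusion equations above can be sketched in numpy as follows (names are illustrative; the $1\times1$ convolutions producing the $\lambda$ maps are assumed to have already run):

```python
import numpy as np

def asff_fuse(x1, x2, x3, lam_a, lam_b, lam_c):
    """ASFF fusion at one level.
    x1, x2, x3: (H, W, C) feature maps resized from levels 1-3.
    lam_*:      (H, W) control parameter maps (from 1x1 convolutions).
    The softmax over levels guarantees alpha + beta + gamma = 1 at
    every spatial location."""
    lams = np.stack([lam_a, lam_b, lam_c])        # (3, H, W)
    w = np.exp(lams) / np.exp(lams).sum(axis=0)   # softmax over levels
    return (w[0, ..., None] * x1
            + w[1, ..., None] * x2
            + w[2, ..., None] * x3)

# equal control parameters -> equal weights -> plain average of the levels
H, W, C = 2, 2, 3
fused = asff_fuse(np.ones((H, W, C)), 2 * np.ones((H, W, C)),
                  3 * np.ones((H, W, C)), np.zeros((H, W)),
                  np.zeros((H, W)), np.zeros((H, W)))
# every element is (1 + 2 + 3) / 3 = 2
```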
Given the following machine learning model name: DenseNAS-A, provide a description of the model
**DenseNAS-A** is a mobile convolutional neural network discovered through the [DenseNAS](https://paperswithcode.com/method/densenas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks are MBConvs, or inverted bottleneck residuals, from the MobileNet architectures.
Given the following machine learning model name: CSPPeleeNet, provide a description of the model
**CSPPeleeNet** is a convolutional neural network and object detection backbone where we apply the Cross Stage Partial Network (CSPNet) approach to [PeleeNet](https://paperswithcode.com/method/peleenet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network.