| prompts | description |
|---|---|
Given the following machine learning model name: Fractal Block, provide a description of the model | A **Fractal Block** is an image model block that utilizes an expansion rule that yields a structural layout of truncated fractals. For the base case where $f\_{1}\left(z\right) = \text{conv}\left(z\right)$ is a convolutional layer, we then have recursive fractals of the form:
$$ f\_{C+1}\left(z\right) = \left[\left(f\_{C}\circ{f\_{C}}\right)\left(z\right)\right] \oplus \left[\text{conv}\left(z\right)\right]$$
where $C$ is the number of columns. For the join layer (shown in green in the paper's figure), we use the element-wise mean rather than concatenation or addition. |
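The expansion rule can be sketched in NumPy. Here `toy_conv` is a hypothetical stand-in for a real convolutional layer (a fixed 3-tap moving average), and the join is the element-wise mean as described; only the recursive structure is faithful to the block.

```python
import numpy as np

def toy_conv(z):
    # Stand-in for a convolutional layer: a fixed 3-tap moving average
    # with same-padding, used only to make the recursion concrete.
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(z, kernel, mode="same")

def fractal_block(z, C):
    """Expand f_{C+1}(z) = [f_C(f_C(z))] joined with [conv(z)],
    with the element-wise mean as the join layer."""
    if C == 1:
        return toy_conv(z)                                 # base case: f_1(z) = conv(z)
    deep = fractal_block(fractal_block(z, C - 1), C - 1)   # f_C o f_C
    shallow = toy_conv(z)                                  # parallel conv branch
    return (deep + shallow) / 2.0                          # join: element-wise mean

def num_convs(C):
    # The rule doubles the deep path and adds one conv per level,
    # so a C-column block contains 2^C - 1 conv layers.
    return 2 ** C - 1
```

Note how the depth of the longest path grows as $2^{C-1}$ while the shallowest path stays a single conv, which is the truncated-fractal layout the rule produces.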
Given the following machine learning model name: DVD-GAN GBlock, provide a description of the model | **DVD-GAN GBlock** is a [residual block](https://paperswithcode.com/method/residual-block) for the generator used in the [DVD-GAN](https://paperswithcode.com/method/dvd-gan) architecture for video generation. |
Given the following machine learning model name: Siamese U-Net, provide a description of the model | A Siamese U-Net model with a pre-trained ResNet34 architecture as the encoder, designed for data-efficient change detection. |
Given the following machine learning model name: ReLU6, provide a description of the model | **ReLU6** is a modification of the [rectified linear unit](https://paperswithcode.com/method/relu) where we limit the activation to a maximum size of $6$. This is due to increased robustness when used with low-precision computation.
Image Credit: [PyTorch](https://pytorch.org/docs/master/generated/torch.nn.ReLU6.html) |
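The clipping behaviour is a one-liner in NumPy (frameworks such as PyTorch ship this as `nn.ReLU6`):

```python
import numpy as np

def relu6(x):
    # Rectify, then cap the activation at 6: min(max(0, x), 6).
    return np.minimum(np.maximum(x, 0.0), 6.0)
```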
Given the following machine learning model name: Auditory Cortex ResNet, provide a description of the model | The **Auditory Cortex ResNet**, briefly AUCO ResNet, is a deep neural network architecture designed for audio classification and trained end-to-end. It is inspired by the architectural organization of the rat's auditory cortex and incorporates further architectural innovations. The network outperforms state-of-the-art accuracies on a reference audio benchmark dataset without any preprocessing, imbalanced-data handling or, most importantly, any kind of data augmentation. |
Given the following machine learning model name: Crossmodal Contrastive Learning, provide a description of the model | **CMCL**, or **Crossmodal Contrastive Learning**, is a method for unifying visual and textual representations into the same semantic space based on a large-scale corpus of image collections, text corpus and image-text pairs. The CMCL aligns the visual representations and textual representations, and unifies them into the same semantic space based on image-text pairs. As shown in the Figure, to facilitate different levels of semantic alignment between vision and language, a series of text rewriting techniques are utilized to improve the diversity of cross-modal information. Specifically, for an image-text pair, various positive examples and hard negative examples can be obtained by rewriting the original caption at different levels. Moreover, to incorporate more background information from the single-modal data, text and image retrieval are also applied to augment each image-text pair with various related texts and images. The positive pairs, negative pairs, related images and texts are learned jointly by CMCL. In this way, the model can effectively unify different levels of visual and textual representations into the same semantic space, and incorporate more single-modal knowledge to enhance each other. |
Given the following machine learning model name: SNIPER, provide a description of the model | **SNIPER** is a multi-scale training approach for instance-level recognition tasks like object detection and instance-level segmentation. Instead of processing all pixels in an image pyramid, SNIPER selectively processes context regions around the ground-truth objects (a.k.a chips). This can help to speed up multi-scale training as it operates on low-resolution chips. Due to its memory-efficient design, SNIPER can benefit from [Batch Normalization](https://paperswithcode.com/method/batch-normalization) during training and it makes larger batch-sizes possible for instance-level recognition tasks on a single GPU. |
Given the following machine learning model name: Morphence, provide a description of the model | **Morphence** is an approach for adversarial defense that shifts the defense landscape by making a model a moving target against adversarial examples. By regularly moving the decision function of a model, Morphence makes it significantly challenging for repeated or correlated attacks to succeed. Morphence deploys a pool of models generated from a base model in a manner that introduces sufficient randomness when it responds to prediction queries. To ensure repeated or correlated attacks fail, the deployed pool of models automatically expires after a query budget is reached and the model pool is replaced by a new model pool generated in advance. |
Given the following machine learning model name: EfficientUNet++, provide a description of the model | A decoder architecture inspired by the [UNet++](https://paperswithcode.com/method/unet) structure and the [EfficientNet](https://paperswithcode.com/method/efficientnet) building blocks. Keeping the UNet++ structure, the EfficientUNet++ achieves higher performance and significantly lower computational complexity through two simple modifications:
* Replaces the 3x3 convolutions of the UNet++ with residual bottleneck blocks that use depthwise convolutions
* Applies channel and spatial attention to the bottleneck feature maps using [concurrent spatial and channel squeeze & excitation (scSE)](https://paperswithcode.com/method/scse) blocks |
Given the following machine learning model name: U2-Net, provide a description of the model | **U2-Net** is a two-level nested U-structure architecture designed for salient object detection (SOD). The architecture allows the network to go deeper and attain high resolution without significantly increasing memory and computation cost. This is achieved by a nested U-structure: on the bottom level, a novel ReSidual U-block (RSU) module extracts intra-stage multi-scale features without degrading the feature map resolution; on the top level, there is a [U-Net](https://paperswithcode.com/method/u-net)-like structure, in which each stage is filled by an RSU block. |
Given the following machine learning model name: Neural Architecture Search, provide a description of the model | **Neural Architecture Search (NAS)** learns a modular architecture which can be transferred from a small dataset to a large dataset. The method does this by reducing the problem of learning best convolutional architectures to the problem of learning a small convolutional cell. The cell can then be stacked in series to handle larger images and more complex datasets.
Note that this refers to the original method referred to as NAS - there is also a broader category of methods called "neural architecture search". |
Given the following machine learning model name: Efficient Recurrent Unit, provide a description of the model | An **Efficient Recurrent Unit (ERU)** extends [LSTM](https://paperswithcode.com/method/mrnn)-based language models by replacing linear transforms for processing the input vector with the [EESP](https://paperswithcode.com/method/eesp) unit inside the [LSTM](https://paperswithcode.com/method/lstm) cell. |
Given the following machine learning model name: Transformer, provide a description of the model | A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks). |
Given the following machine learning model name: Convolutional Hough Matching, provide a description of the model | **Convolutional Hough Matching**, or **CHM**, is a geometric matching algorithm that distributes similarities of candidate matches over a geometric transformation space and evaluates them in a convolutional manner. It is cast into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters. |
Given the following machine learning model name: Language-driven Scene Synthesis using Multi-conditional Diffusion Model, provide a description of the model | Our main contribution is the Guiding Points Network, where we integrate all information from the conditions to generate guiding points. |
Given the following machine learning model name: DV3 Attention Block, provide a description of the model | **DV3 Attention Block** is an attention-based module used in the [Deep Voice 3](https://paperswithcode.com/method/deep-voice-3) architecture. It uses a [dot-product attention](https://paperswithcode.com/method/dot-product-attention) mechanism. A query vector (the hidden states of the decoder) and the per-timestep key vectors from the encoder are used to compute attention weights. This then outputs a context vector computed as the weighted average of the value vectors. |
Given the following machine learning model name: Singular Value Decomposition Parameterization, provide a description of the model | |
Given the following machine learning model name: DE-GAN: A Conditional Generative Adversarial Network for Document Enhancement, provide a description of the model | Documents often exhibit various forms of degradation, which makes them hard to read and substantially deteriorates the performance of an OCR system. In this paper, we propose an effective end-to-end framework named Document Enhancement
Generative Adversarial Networks (DE-GAN) that uses the conditional GANs (cGANs) to restore severely degraded document images.
To the best of our knowledge, this practice has not been studied within the context of generative adversarial deep networks. We
demonstrate that, in different tasks (document clean-up, binarization, deblurring and watermark removal), DE-GAN can produce an enhanced version of the degraded document with high quality. In addition, our approach provides consistent improvements compared to state-of-the-art methods over the widely used DIBCO 2013, DIBCO 2017 and H-DIBCO 2018 datasets, proving its ability to restore a degraded document image to its ideal condition. The obtained results on a wide variety of degradations reveal the flexibility of the proposed model to be exploited in other document enhancement problems. |
Given the following machine learning model name: Layer Normalization, provide a description of the model | Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.
We compute the layer normalization statistics over all the hidden units in the same layer as follows:
$$ \mu^{l} = \frac{1}{H}\sum^{H}\_{i=1}a\_{i}^{l} $$
$$ \sigma^{l} = \sqrt{\frac{1}{H}\sum^{H}\_{i=1}\left(a\_{i}^{l}-\mu^{l}\right)^{2}} $$
where $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\mu$ and $\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1. |
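The statistics above can be sketched in a few lines of NumPy (the small $\epsilon$ for numerical stability is an implementation detail, not part of the equations):

```python
import numpy as np

def layer_norm(a, eps=1e-5):
    # a: summed inputs to the H hidden units of one layer, shape (H,).
    # Statistics are computed over the hidden units, not over the batch.
    mu = a.mean()
    sigma = np.sqrt(((a - mu) ** 2).mean())
    return (a - mu) / (sigma + eps)
```

Because the statistics depend only on a single example's hidden units, the same function works with batch size 1 or in the pure online regime.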
Given the following machine learning model name: Variance-based Feature Importance of Artificial Neural Networks, provide a description of the model | |
Given the following machine learning model name: Goal-Driven Tree-Structured Neural Model, provide a description of the model | |
Given the following machine learning model name: Optimizer Activation Function, provide a description of the model | A new activation function named **NIPUNA**: $f(x) = \max\left(g(x), x\right)$, where $g(x) = \frac{x}{1 + e^{-\beta x}}$ |
Given the following machine learning model name: Random Horizontal Flip, provide a description of the model | **RandomHorizontalFlip** is a type of image data augmentation which horizontally flips a given image with a given probability.
Image Credit: [Apache MXNet](https://mxnet.apache.org/versions/1.5.0/tutorials/gluon/data_augmentation.html) |
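A minimal NumPy sketch, assuming an `(H, W, C)` image layout (frameworks such as MXNet and PyTorch provide this as a built-in transform):

```python
import numpy as np

def random_horizontal_flip(img, p=0.5, rng=None):
    # img: array of shape (H, W, C); flip along the width axis with probability p.
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        return img[:, ::-1, :]
    return img
```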
Given the following machine learning model name: Topographic VAE, provide a description of the model | **Topographic VAE** is a method for efficiently training deep generative models with topographically organized latent variables. The model learns sets of approximately equivariant features (i.e. "capsules") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. The combined color/rotation transformation in input space $\tau\_{g}$ becomes encoded as a $\mathrm{Roll}$ within the capsule dimension. The model is thus able to decode unseen sequence elements by encoding a partial sequence and Rolling activations within the capsules. This resembles a commutative diagram. |
Given the following machine learning model name: Contrastive Cross-View Mutual Information Maximization, provide a description of the model | **CV-MIM**, or **Contrastive Cross-View Mutual Information Maximization**, is a representation learning method to disentangle pose-dependent as well as view-dependent factors from 2D human poses. The method trains a network using cross-view mutual information maximization, which maximizes mutual information of the same pose performed from different viewpoints in a contrastive learning manner. It further utilizes two regularization terms to ensure disentanglement and smoothness of the learned representations. |
Given the following machine learning model name: ShapeConv, provide a description of the model | **ShapeConv**, or **Shape-aware Convolutional layer**, is a convolutional layer for processing the depth feature in indoor RGB-D semantic segmentation. The depth feature is firstly decomposed into a shape-component and a base-component, next two learnable weights are introduced to cooperate with them independently, and finally a [convolution](https://paperswithcode.com/method/convolution) is applied on the re-weighted combination of these two components. |
Given the following machine learning model name: Mirror Descent Policy Optimization, provide a description of the model | **Mirror Descent Policy Optimization (MDPO)** is a policy gradient algorithm based on the idea of iteratively solving a trust-region problem that minimizes a sum of two terms: a linearization of the standard RL objective function and a proximity term that restricts two consecutive updates to be close to each other. It is based on Mirror Descent, which is a general trust region method that
attempts to keep consecutive iterates close to each other. |
Given the following machine learning model name: Deep Belief Network, provide a description of the model | A **Deep Belief Network (DBN)** is a multi-layer generative graphical model. DBNs have bi-directional connections ([RBM](https://paperswithcode.com/method/restricted-boltzmann-machine)-type connections) on the top layer while the bottom layers only have top-down connections. They are trained using layerwise pre-training. Pre-training occurs by training the network component by component bottom up: treating the first two layers as an RBM and training, then treating the second layer and third layer as another RBM and training for those parameters.
Source: [Origins of Deep Learning](https://arxiv.org/pdf/1702.07800.pdf)
Image Source: [Wikipedia](https://en.wikipedia.org/wiki/Deep_belief_network) |
Given the following machine learning model name: Capsule Network, provide a description of the model | A capsule is a group of neurons whose activation vector performs complex internal computations on its inputs. The length of this activation vector represents the probability that a feature is present, while the orientation of the vector encodes the state (pose) of the detected feature. By contrast, traditional CNNs rely on max pooling for invariance: a minor change in the input leaves the output neurons' activations unchanged, which discards precise positional information. |
Given the following machine learning model name: CubeRE, provide a description of the model | Our model known as CubeRE first encodes each input sentence using a language model encoder to obtain the contextualized sequence representation. We then capture the interaction between each possible head and tail entity as a pair representation for predicting the entity-relation label scores. To reduce the computational cost, each sentence is pruned to retain only words that have higher entity scores. Finally, we capture the interaction between each possible relation triplet and qualifier to predict the qualifier label scores and decode the outputs. |
Given the following machine learning model name: Hierarchical Average Precision training for Pertinent ImagE Retrieval, provide a description of the model | |
Given the following machine learning model name: Replica exchange stochastic gradient Langevin Dynamics, provide a description of the model | reSGLD simulates a high-temperature particle for exploration and a low-temperature particle for exploitation and allows the two to swap. Moreover, a correction term is included to avoid bias. |
Given the following machine learning model name: Sarsa, provide a description of the model | **Sarsa** is an on-policy TD control algorithm:
$$Q\left(S\_{t}, A\_{t}\right) \leftarrow Q\left(S\_{t}, A\_{t}\right) + \alpha\left[R_{t+1} + \gamma{Q}\left(S\_{t+1}, A\_{t+1}\right) - Q\left(S\_{t}, A\_{t}\right)\right] $$
This update is done after every transition from a nonterminal state $S\_{t}$. If $S\_{t+1}$ is terminal, then $Q\left(S\_{t+1}, A\_{t+1}\right)$ is defined as zero.
To design an on-policy control algorithm using Sarsa, we estimate $q\_{\pi}$ for a behaviour policy $\pi$ and then change $\pi$ towards greediness with respect to $q\_{\pi}$.
Source: Sutton and Barto, Reinforcement Learning, 2nd Edition |
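A minimal Python sketch of the update above, storing $Q$ in a plain dict (the state and action names are hypothetical):

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.9, terminal=False):
    """One Sarsa backup: Q(s,a) += alpha * [r + gamma * Q(s',a') - Q(s,a)].
    If s' is terminal, Q(s',a') is defined as zero."""
    target = r if terminal else r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q
```

Because the target uses the action $A\_{t+1}$ actually chosen by the behaviour policy, the update is on-policy, unlike Q-learning's max over actions.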
Given the following machine learning model name: Table Pre-training via Execution, provide a description of the model | TAPEX is a conceptually simple and empirically powerful pre-training approach to empower existing models with table reasoning skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesising executable SQL queries. |
Given the following machine learning model name: Convolutional time-domain audio separation network, provide a description of the model | Combines learned time-frequency representation with a masker architecture based on 1D [dilated convolution](https://paperswithcode.com/method/dilated-convolution). |
Given the following machine learning model name: Enhanced Sequential Inference Model, provide a description of the model | **Enhanced Sequential Inference Model** or **ESIM** is a sequential NLI model proposed in [Enhanced LSTM for Natural Language Inference](https://www.aclweb.org/anthology/P17-1152) paper. |
Given the following machine learning model name: Sym-NCO, provide a description of the model | |
Given the following machine learning model name: Contractive Autoencoder, provide a description of the model | A **Contractive Autoencoder** is an autoencoder that adds a penalty term to the classical reconstruction cost function. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. This penalty term results in a localized space contraction which in turn yields robust features on the activation layer. The penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. |
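For a single-layer sigmoid encoder $h = \sigma(Wx + b)$, the Jacobian has the closed form $J\_{ij} = h\_{i}(1-h\_{i})W\_{ij}$, so the penalty can be sketched in NumPy as follows (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, x, b):
    """Squared Frobenius norm of the Jacobian dh/dx for a sigmoid
    encoder h = sigmoid(W @ x + b), where J_ij = h_i * (1 - h_i) * W_ij."""
    h = sigmoid(W @ x + b)
    dh = h * (1.0 - h)                         # element-wise sigmoid derivative
    return float(np.sum((dh ** 2)[:, None] * W ** 2))
```

In training, this term is added (scaled by a hyperparameter) to the reconstruction cost, penalizing sensitivity of the hidden representation to input perturbations.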
Given the following machine learning model name: Adaptive Softmax, provide a description of the model | **Adaptive Softmax** is a speedup technique for the computation of probability distributions over words. The adaptive [softmax](https://paperswithcode.com/method/softmax) is inspired by the class-based [hierarchical softmax](https://paperswithcode.com/method/hierarchical-softmax), where the word classes are built to minimize the computation time. Adaptive softmax achieves efficiency by explicitly taking into account the computation time of matrix-multiplication on parallel systems and combining it with a few important observations, namely keeping a shortlist of frequent words in the root node
and reducing the capacity of rare words. |
Given the following machine learning model name: InfoNCE, provide a description of the model | **InfoNCE**, where NCE stands for Noise-Contrastive Estimation, is a type of contrastive loss function used for [self-supervised learning](https://paperswithcode.com/methods/category/self-supervised-learning).
Given a set $X = \{x\_{1}, \dots, x\_{N}\}$ of $N$ random samples containing one positive sample from $p\left(x\_{t+k}|c\_{t}\right)$ and $N-1$ negative samples from the 'proposal' distribution $p\left(x\_{t+k}\right)$, we optimize:
$$ \mathcal{L}\_{N} = - \mathbb{E}\_{X}\left[\log\frac{f\_{k}\left(x\_{t+k}, c\_{t}\right)}{\sum\_{x\_{j}\in{X}}f\_{k}\left(x\_{j}, c\_{t}\right)}\right] $$
Optimizing this loss will result in $f\_{k}\left(x\_{t+k}, c\_{t}\right)$ estimating the density ratio, which is:
$$ f\_{k}\left(x\_{t+k}, c\_{t}\right) \propto \frac{p\left(x\_{t+k}|c\_{t}\right)}{p\left(x\_{t+k}\right)} $$ |
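A NumPy sketch of the loss for a single set $X$, treating `scores` as the positive values $f\_{k}\left(x\_{j}, c\_{t}\right)$ (the index convention is illustrative):

```python
import numpy as np

def info_nce_loss(scores, pos_index=0):
    """InfoNCE for one set X of N samples. `scores` holds the positive
    values f_k(x_j, c_t); the loss is -log of the positive's share of the sum."""
    scores = np.asarray(scores, dtype=float)
    return float(-np.log(scores[pos_index] / scores.sum()))
```

With uninformative scores the loss sits at $\log N$; as $f\_{k}$ learns the density ratio and ranks the positive above the negatives, the loss drops below that baseline.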
Given the following machine learning model name: CT3D, provide a description of the model | **CT3D** is a two-stage 3D object detection framework that leverages a high-quality region proposal network and a Channel-wise [Transformer](https://paperswithcode.com/method/transformer) architecture. The proposed CT3D simultaneously performs proposal-aware embedding and channel-wise context aggregation for the point features within each proposal. Specifically, CT3D uses a proposal's keypoints for spatial contextual modelling and learns attention propagation in the encoding module, mapping the proposal to point embeddings. Next, a new channel-wise decoding module enriches the query-key interaction via channel-wise re-weighting to effectively merge multi-level contexts, which contributes to more accurate object predictions.
In CT3D, the raw points are first fed into the [RPN](https://paperswithcode.com/method/rpn) for generating 3D proposals. Then the raw points along with the corresponding proposals are processed by the channel-wise Transformer composed of the proposal-to-point encoding module and the channel-wise decoding module. Specifically, the proposal-to-point encoding module is to modulate each point feature with global proposal-aware context information. After that, the encoded point features are transformed into an effective proposal feature representation by the
channel-wise decoding module for confidence prediction and box regression. |
Given the following machine learning model name: Dual Path Network, provide a description of the model | A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enables feature re-usage while [DenseNet](https://paperswithcode.com/method/densenet) enables new feature exploration, and both are important for learning good representations. To enjoy the benefits from both path topologies, Dual Path Networks share common features while maintaining the flexibility to explore new features through dual path architectures.
We formulate such a dual path architecture as follows:
$$x^{k} = \sum\limits\_{t=1}^{k-1} f\_t^{k}(h^t) \text{,} $$
$$ y^{k} = \sum\limits\_{t=1}^{k-1} v\_t(h^t) = y^{k-1} + \phi^{k-1}(y^{k-1}) \text{,} $$
$$ r^{k} = x^{k} + y^{k} \text{,} $$
$$ h^k = g^k \left( r^{k} \right) \text{,} $$
where $x^{k}$ and $y^{k}$ denote the information extracted at the $k$-th step from the individual paths, and $f_t^k(\cdot)$ and $v_t(\cdot)$ are feature learning functions. The first equation refers to the densely connected path that enables exploring new features. The second equation refers to the residual path that enables common feature re-usage. The third equation defines the dual path that integrates them and feeds them to the last transformation function in the last equation. |
Given the following machine learning model name: Single-Shot Multi-Object Tracker, provide a description of the model | **Single-Shot Multi-Object Tracker** or **SMOT**, is a tracking framework that converts any single-shot detector (SSD) model into an online multiple object tracker, which emphasizes simultaneously detecting and tracking of the object paths. Contrary to the existing tracking by detection approaches which suffer from errors made by the object detectors, SMOT adopts the recently proposed scheme of tracking by re-detection.
The proposed SMOT consists of two stages. The first stage generates temporally consecutive tracklets by exploring temporal and spatial correlations from the previous frame. The second stage performs online linking of the tracklets to generate a face track for each person. |
Given the following machine learning model name: HardELiSH, provide a description of the model | **HardELiSH** is an activation function for neural networks. The HardELiSH is a multiplication of the [HardSigmoid](https://paperswithcode.com/method/hard-sigmoid) and [ELU](https://paperswithcode.com/method/elu) in the negative part and a multiplication of the Linear and the HardSigmoid in the positive part:
$$f\left(x\right) = x\max\left(0, \min\left(1, \left(\frac{x+1}{2}\right)\right) \right) \text{ if } x \geq 0$$
$$f\left(x\right) = \left(e^{x}-1\right)\max\left(0, \min\left(1, \left(\frac{x+1}{2}\right)\right)\right) \text{ if } x < 0 $$
Source: [Activation Functions](https://arxiv.org/pdf/1811.03378.pdf) |
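A direct NumPy transcription of the two cases (using `np.where`, which evaluates both branches):

```python
import numpy as np

def hard_sigmoid(x):
    # max(0, min(1, (x + 1) / 2))
    return np.maximum(0.0, np.minimum(1.0, (x + 1.0) / 2.0))

def hard_elish(x):
    # x * HardSigmoid(x) for x >= 0, (e^x - 1) * HardSigmoid(x) for x < 0
    return np.where(x >= 0, x * hard_sigmoid(x), (np.exp(x) - 1.0) * hard_sigmoid(x))
```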
Given the following machine learning model name: Spatial and Channel SE Blocks, provide a description of the model | To aggregate global spatial information, an SE block applies global pooling to the feature map. However, this ignores pixel-wise spatial information, which is important in dense prediction tasks. Roy et al. therefore proposed spatial and channel SE blocks (scSE). As in BAM, spatial SE blocks are used alongside channel SE blocks to provide spatial attention weights that focus on important regions.
Given the input feature map $X$, two parallel modules, spatial SE and channel SE, are applied to feature maps to encode spatial and channel information respectively. The channel SE module is an ordinary SE block, while the spatial SE module adopts $1\times 1$ convolution for spatial squeezing. The outputs from the two modules are fused. The overall process can be written as
\begin{align}
s_c &= \sigma (W_{2} \delta (W_{1}\text{GAP}(X))) \\\\
X_\text{chn} &= s_c X \\\\
s_s &= \sigma(\text{Conv}^{1\times 1}(X)) \\\\
X_\text{spa} &= s_s X \\\\
Y &= f(X_\text{spa},X_\text{chn})
\end{align}
where $f$ denotes the fusion function, which can be maximum, addition, multiplication or concatenation.
The proposed scSE block combines channel and spatial attention, enhancing features while capturing pixel-wise spatial information, and segmentation tasks benefit greatly as a result. Integrating an scSE block into F-CNNs yields consistent improvements in semantic segmentation at negligible extra cost. |
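A NumPy sketch of the two branches with element-wise maximum as the fusion $f$ (one of the options listed above); the weight shapes and reduction factor are illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scse(X, W1, W2, w_conv):
    """X: feature map of shape (C, H, W).
    Channel SE: GAP -> W1 -> ReLU -> W2 -> sigmoid, rescales channels.
    Spatial SE: 1x1 conv (a linear map over channels) -> sigmoid, rescales pixels.
    Fusion f: element-wise maximum of the two branches."""
    C, H, W = X.shape
    # channel squeeze-and-excitation
    z = X.reshape(C, -1).mean(axis=1)              # global average pooling, (C,)
    s_c = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))    # channel weights, (C,)
    X_chn = s_c[:, None, None] * X
    # spatial squeeze-and-excitation via a 1x1 convolution over channels
    s_s = sigmoid(np.tensordot(w_conv, X, axes=([0], [0])))  # spatial map, (H, W)
    X_spa = s_s[None, :, :] * X
    return np.maximum(X_spa, X_chn)
```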
Given the following machine learning model name: CRF-RNN, provide a description of the model | **CRF-RNN** is a formulation of a [CRF](https://paperswithcode.com/method/crf) as a Recurrent Neural Network. Specifically it formulates mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks. |
Given the following machine learning model name: Two Time-scale Update Rule, provide a description of the model | The **Two Time-scale Update Rule (TTUR)** is an update rule for generative adversarial networks trained with stochastic gradient descent. TTUR has an individual learning rate for both the discriminator and the generator. The main premise is that the discriminator converges to a local minimum when the generator is fixed. If the generator changes slowly enough, then the discriminator still converges, since the generator perturbations are small. Besides ensuring convergence, the performance may also improve since the discriminator must first learn new patterns before they are transferred to the generator. In contrast, a generator which is overly fast, drives the discriminator steadily into new regions without capturing its gathered information. |
Given the following machine learning model name: Mish, provide a description of the model | **Mish** is an activation function for neural networks which can be defined as:
$$ f\left(x\right) = x\cdot\tanh\left(\text{softplus}\left(x\right)\right)$$
where
$$\text{softplus}\left(x\right) = \ln\left(1+e^{x}\right)$$
(Compare with functionally similar previously proposed activation functions such as the [GELU](https://paperswithcode.com/method/silu) $x\Phi(x)$ and the [SiLU](https://paperswithcode.com/method/silu) $x\sigma(x)$.) |
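A direct NumPy transcription of the two formulas (note that `np.exp` can overflow for large inputs; a production implementation would use a numerically stable softplus):

```python
import numpy as np

def mish(x):
    # f(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)
    return x * np.tanh(np.log1p(np.exp(x)))
```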
Given the following machine learning model name: Video Panoptic Segmentation Network, provide a description of the model | **Video Panoptic Segmentation Network**, or **VPSNet**, is a model for video panoptic segmentation. On top of UPSNet, which is a method for image panoptic segmentation, VPSNet is designed to take an additional frame as the reference to correlate time information at two levels: pixel-level fusion and object-level tracking. To pick up the complementary feature points in the reference frame, a flow-based feature map alignment module is introduced along with an asymmetric attention block that computes similarities between the target and reference features to fuse them into one-frame shape. Additionally, to associate object instances across time,
an object track head is added which learns the correspondence between the instances in the target and reference frames based
on their RoI feature similarity. |
Given the following machine learning model name: (2+1)D Convolution, provide a description of the model | A **(2+1)D Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) used for action recognition convolutional neural networks, with a spatiotemporal volume. As opposed to applying a [3D Convolution](https://paperswithcode.com/method/3d-convolution) over the entire volume, which can be computationally expensive and lead to overfitting, a (2+1)D convolution splits computation into two convolutions: a spatial 2D convolution followed by a temporal 1D convolution. |
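A small arithmetic sketch of the factorization's parameter budget. The R(2+1)D paper chooses the number of intermediate channels $M$ so that the 2D-then-1D block roughly matches the parameter count of the full 3D convolution; the formula below follows that choice, and the 64-channel, $3\times3\times3$ setting is just an example:

```python
def conv3d_params(c_in, c_out, kt, k):
    # full 3D convolution: one kt x k x k kernel per (in, out) channel pair
    return c_in * c_out * kt * k * k

def conv2plus1d_params(c_in, c_out, kt, k, m):
    # spatial 2D conv (1 x k x k) into m intermediate channels,
    # followed by a temporal 1D conv (kt x 1 x 1) into c_out channels
    return c_in * m * k * k + m * c_out * kt

def matched_mid_channels(c_in, c_out, kt, k):
    # pick m so the factorized block has about the same parameter budget
    return (kt * k * k * c_in * c_out) // (k * k * c_in + kt * c_out)
```

Matching the parameter count isolates the effect of the factorization itself: any accuracy gain comes from the extra nonlinearity and easier optimization, not from added capacity.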
Given the following machine learning model name: Adaptive Graph Convolutional Neural Networks, provide a description of the model | AGCN is a novel spectral graph convolution network that feeds on original data of diverse graph structures.
Image credit: [Adaptive Graph Convolutional Neural Networks](https://arxiv.org/pdf/1801.03226.pdf) |
Given the following machine learning model name: Inverse Q-Learning, provide a description of the model | **Inverse Q-Learning (IQ-Learn)** is a simple, stable & data-efficient framework for Imitation Learning (IL) that directly learns *soft Q-functions* from expert data. IQ-Learn enables **non-adversarial** imitation learning, working in both offline and online IL settings. It is performant even with very sparse expert data, and scales to complex image-based environments, surpassing prior methods by more than **3x**.
It is very simple to implement requiring ~15 lines of code on top of existing RL methods.
<span class="description-source">Source: [IQ-Learn: Inverse soft Q-Learning for Imitation](https://arxiv.org/abs/2106.12142)</span> |
Given the following machine learning model name: EsViT, provide a description of the model | **EsViT** proposes two techniques for developing efficient self-supervised vision transformers for visual representation learning: a multi-stage architecture with sparse self-attention and a new pre-training task of region matching. The multi-stage architecture reduces modeling complexity but with a cost of losing the ability to capture fine-grained correspondences between image regions. The new pretraining task allows the model to capture fine-grained region dependencies and as a result significantly improves the quality of the learned vision representations. |
Given the following machine learning model name: CentripetalNet, provide a description of the model | **CentripetalNet** is a keypoint-based detector which uses centripetal shift to pair corner keypoints from the same instance. CentripetalNet predicts the position and the centripetal shift of the corner points and matches corners whose shifted results are aligned. |
Given the following machine learning model name: Blue River Controls, provide a description of the model | **Blue River Controls** is a tool that allows users to train and test reinforcement learning algorithms on real-world hardware. It features a simple interface based on OpenAI Gym, that works directly on both simulation and hardware. |
Given the following machine learning model name: ENet Dilated Bottleneck, provide a description of the model | **ENet Dilated Bottleneck** is an image model block used in the [ENet](https://paperswithcode.com/method/enet) semantic segmentation architecture. It is the same as a regular [ENet Bottleneck](https://paperswithcode.com/method/enet-bottleneck) but employs dilated convolutions instead. |
Given the following machine learning model name: GeGLU, provide a description of the model | **GeGLU** is an activation function which is a variant of [GLU](https://paperswithcode.com/method/glu). The definition is as follows:
$$ \text{GeGLU}\left(x, W, V, b, c\right) = \text{GELU}\left(xW + b\right) \otimes \left(xV + c\right) $$ |
Given the following machine learning model name: Smish, provide a description of the model | Smish is an activation function defined as $f(x)=x\cdot \text{tanh}(\ln(1+\sigma(x)))$ where $\sigma(x)$ denotes the sigmoid function. A parameterized version was also described in the form $f(x)=\alpha x\cdot \text{tanh}(\ln(1+\sigma(\beta x)))$.
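A direct NumPy sketch of both forms (the parameter defaults here are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smish(x, alpha=1.0, beta=1.0):
    # parameterized Smish: alpha * x * tanh(ln(1 + sigmoid(beta * x)));
    # alpha = beta = 1 recovers the basic form
    return alpha * x * np.tanh(np.log1p(sigmoid(beta * x)))

print(smish(np.array([-2.0, 0.0, 2.0])))
```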
Paper: Smish: A Novel Activation Function for Deep Learning Methods
Source: https://www.mdpi.com/2079-9292/11/4/540 |
Given the following machine learning model name: Leaky ReLU, provide a description of the model | **Leaky Rectified Linear Unit**, or **Leaky ReLU**, is a type of activation function based on a [ReLU](https://paperswithcode.com/method/relu), but it has a small slope for negative values instead of a flat slope. The slope coefficient is determined before training, i.e. it is not learnt during training. This type of activation function is popular in tasks where we may suffer from sparse gradients, for example training generative adversarial networks. |
Given the following machine learning model name: modified arcsinh, provide a description of the model | |
Given the following machine learning model name: Spatial Feature Transform, provide a description of the model | **Spatial Feature Transform**, or **SFT**, is a layer that generates affine transformation parameters for spatial-wise feature modulation, and was originally proposed within the context of image super-resolution. A Spatial Feature Transform (SFT) layer learns a mapping function $\mathcal{M}$ that outputs a modulation parameter pair $(\mathbf{\gamma}, \mathbf{\beta})$ based on some prior condition $\Psi$. The learned parameter pair adaptively influences the outputs by applying an affine transformation spatially to each intermediate feature map in an SR network. During testing, only a single forward pass is needed to generate the HR image given the LR input and segmentation probability maps.
More precisely, the prior $\Psi$ is modeled by a pair of affine transformation parameters $(\mathbf{\gamma}, \mathbf{\beta})$ through a mapping function $\mathcal{M}: \Psi \mapsto(\mathbf{\gamma}, \mathbf{\beta})$. Consequently,
$$
\hat{\mathbf{y}}=G_{\mathbf{\theta}}(\mathbf{x} \mid \mathbf{\gamma}, \mathbf{\beta}), \quad(\mathbf{\gamma}, \mathbf{\beta})=\mathcal{M}(\Psi)
$$
After obtaining $(\mathbf{\gamma}, \mathbf{\beta})$ from conditions, the transformation is carried out by scaling and shifting feature maps of a specific layer:
$$
\operatorname{SFT}(\mathbf{F} \mid \mathbf{\gamma}, \mathbf{\beta})=\mathbf{\gamma} \odot \mathbf{F}+\mathbf{\beta}
$$
where $\mathbf{F}$ denotes the feature maps, whose dimension is the same as that of $\mathbf{\gamma}$ and $\mathbf{\beta}$, and $\odot$ denotes element-wise multiplication, i.e., the Hadamard product. Since the spatial dimensions are preserved, the SFT layer performs not only feature-wise manipulation but also spatial-wise transformation. |
Given the following machine learning model name: BRepNet, provide a description of the model | **BRepNet** is a neural network for CAD applications. It is designed to operate directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds. BRepNet defines convolutional kernels with respect to oriented coedges in the data structure. In the neighborhood of each coedge, a small collection of faces, edges and coedges can be identified and patterns in the feature vectors from these entities detected by specific learnable parameters. |
Given the following machine learning model name: RandomRotate, provide a description of the model | **RandomRotate** is a type of image data augmentation where we randomly rotate the image by a degree. |
Given the following machine learning model name: Non-monotonically Triggered ASGD, provide a description of the model | **NT-ASGD**, or **Non-monotonically Triggered ASGD**, is an averaged stochastic gradient descent technique.
In regular ASGD, we take steps identical to [regular SGD](https://paperswithcode.com/method/sgd) but instead of returning the last iterate as the solution, we return $\frac{1}{\left(K-T+1\right)}\sum^{K}\_{i=T}w\_{i}$, where $K$ is the total number of iterations and $T < K$ is a user-specified averaging trigger.
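A sketch of the averaging step, together with one plausible form of the non-monotonic trigger (the exact criterion and window handling here are simplifications):

```python
import numpy as np

def asgd_average(iterates, T):
    # ASGD solution: average of w_T, ..., w_K (the last K - T + 1 iterates)
    return np.mean(iterates[T:], axis=0)

def nt_trigger(val_losses, n=5):
    # non-monotonic criterion: trigger averaging once the latest validation
    # loss is no better than the best seen at least n checks ago
    t = len(val_losses) - 1
    return t > n and val_losses[-1] > min(val_losses[:-n])

ws = [np.array([float(i)]) for i in range(5)]   # w_0 .. w_4, so K = 4
print(asgd_average(ws, T=2))                    # mean of w_2..w_4 -> [3.0]
print(nt_trigger([1.0, 0.9, 0.8, 0.81, 0.82, 0.83, 0.84], n=3))  # True
```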
NT-ASGD has a non-monotonic criterion that conservatively triggers the averaging when the validation metric fails to improve for multiple cycles. Given that the choice of triggering is irreversible, this conservatism ensures that the randomness of training does not play a major role in the decision. |
Given the following machine learning model name: PocketNet, provide a description of the model | **PocketNet** is a face recognition model family discovered through [neural architecture search](https://paperswithcode.com/methods/category/neural-architecture-search). The training is based on multi-step knowledge distillation. |
Given the following machine learning model name: Temporal Distribution Characterization, provide a description of the model | **Temporal Distribution Characterization**, or **TDC**, is a module used in the [AdaRNN](https://paperswithcode.com/method/adarnn) architecture to characterize the distributional information in a time series.
Based on the principle of maximum entropy, maximizing the utilization of shared knowledge underlying a time series under temporal covariate shift can be done by finding periods which are most dissimilar to each other, which is also considered as the worst case of temporal covariate shift since the cross-period distributions are the most diverse. TDC achieves this goal for splitting the time-series by solving an optimization problem whose objective can be formulated as:
$$
\max \_{0<K \leq K\_{0}} \max \_{n\_{1}, \cdots, n\_{K}} \frac{1}{K} \sum_{1 \leq i \neq j \leq K} d\left(\mathcal{D}\_{i}, \mathcal{D}\_{j}\right)
$$
$$
\text { s.t. } \forall i, \Delta_{1}<\left|\mathcal{D}\_{i}\right|<\Delta_{2} ; \sum_{i}\left|\mathcal{D}\_{i}\right|=n
$$
where $d$ is a distance metric, $\Delta\_{1}$ and $\Delta\_{2}$ are predefined parameters to avoid trivial solutions (e.g., very small values or very large values may fail to capture the distribution information), and $K\_{0}$ is the hyperparameter to avoid over-splitting. The metric $d(\cdot, \cdot)$ above can be any distance function, e.g., Euclidean or Editing distance, or some distribution-based distance / divergence, like MMD [14] and KL-divergence.
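A brute-force sketch of this search under strong simplifications: only equal-length candidate splits are tried, pairwise distances are averaged rather than normalized by $1/K$, and a toy moment-based distance stands in for MMD or KL-divergence:

```python
import numpy as np

def dist(a, b):
    # toy distribution distance between two segments (mean/std gap);
    # the paper allows any metric, e.g. MMD or KL-divergence
    return abs(a.mean() - b.mean()) + abs(a.std() - b.std())

def tdc_split(series, K_max=5, min_len=10):
    # search over K for the split whose periods are most dissimilar
    best_K, best_score = None, -np.inf
    for K in range(2, K_max + 1):
        segs = np.array_split(series, K)
        if min(len(s) for s in segs) < min_len:   # avoid trivial solutions
            continue
        pairs = [(i, j) for i in range(K) for j in range(i + 1, K)]
        score = np.mean([dist(segs[i], segs[j]) for i, j in pairs])
        if score > best_score:
            best_K, best_score = K, score
    return best_K, best_score

series = np.concatenate([np.zeros(50), np.ones(50)])  # one covariate shift
print(tdc_split(series))  # K = 2 separates the two regimes
```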
The learning goal of the optimization problem (1) is to maximize the averaged period-wise distribution distances by searching $K$ and the corresponding periods, so that the distributions of each period are as diverse as possible and the learned prediction model has a better generalization ability. |
Given the following machine learning model name: Generative Adversarial Network, provide a description of the model | A **GAN**, or **Generative Adversarial Network**, is a generative model that simultaneously trains
two models: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the
probability that a sample came from the training data rather than $G$.
The training procedure for $G$ is to maximize the probability of $D$ making
a mistake. This framework corresponds to a minimax two-player game. In the
space of arbitrary functions $G$ and $D$, a unique solution exists, with $G$
recovering the training data distribution and $D$ equal to $\frac{1}{2}$
everywhere. In the case where $G$ and $D$ are defined by multilayer perceptrons,
the entire system can be trained with backpropagation.
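The claim that $D$ converges to $\frac{1}{2}$ follows from the optimal discriminator for a fixed generator, $D^{*}(x) = p\_{data}(x) / (p\_{data}(x) + p\_{g}(x))$, which a few lines of NumPy can illustrate (the densities here are illustrative):

```python
import numpy as np

def optimal_discriminator(p_data, p_g):
    # for a fixed G, the optimal D is p_data / (p_data + p_g)
    return p_data / (p_data + p_g)

x = np.linspace(-3, 3, 7)
p_data = np.exp(-x**2 / 2)         # data density (unnormalized)
p_g = np.exp(-(x - 1)**2 / 2)      # generator density, not yet matched
print(optimal_discriminator(p_data, p_g))

# at the global optimum the generator matches the data distribution,
# so the optimal discriminator outputs 1/2 everywhere
print(optimal_discriminator(p_data, p_data))
```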
(Image Source: [here](http://www.kdnuggets.com/2017/01/generative-adversarial-networks-hot-topic-machine-learning.html)) |
Given the following machine learning model name: Multiplicative Attention, provide a description of the model | **Multiplicative Attention** is an attention mechanism where the alignment score function is calculated as:
$$f\_{att}\left(\mathbf{h}\_{i}, \mathbf{s}\_{j}\right) = \mathbf{h}\_{i}^{T}\mathbf{W}\_{a}\mathbf{s}\_{j}$$
Here $\mathbf{h}$ refers to the hidden states for the encoder/source, and $\mathbf{s}$ is the hidden states for the decoder/target. The function above is thus a type of alignment score function. We can use a matrix of alignment scores to show the correlation between source and target words, as the Figure to the right shows. Within a neural network, once we have the alignment scores, we calculate the final scores using a [softmax](https://paperswithcode.com/method/softmax) function of these alignment scores (ensuring it sums to 1).
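A small NumPy sketch of scoring one decoder state against all encoder states (shapes are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multiplicative_attention(H, s, W):
    # H: (T, d_h) encoder states, s: (d_s,) decoder state, W: (d_h, d_s)
    scores = H @ W @ s       # f_att(h_i, s) = h_i^T W_a s for every i
    return softmax(scores)   # normalized alignment weights

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
s = rng.normal(size=3)
W = rng.normal(size=(4, 3))
weights = multiplicative_attention(H, s, W)
print(weights, weights.sum())   # one weight per source position, summing to 1
```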
Additive and multiplicative attention are similar in complexity, although multiplicative attention is faster and more space-efficient in practice as it can be implemented more efficiently using matrix multiplication. Both variants perform similarly for small dimensionality $d_{h}$ of the decoder states, but [additive attention](https://paperswithcode.com/method/additive-attention) performs better for larger dimensions. One way to mitigate this is to scale $f_{att}\left(\textbf{h}_{i}, \textbf{s}\_{j}\right)$ by $1/\sqrt{d\_{h}}$ as with [scaled dot-product attention](https://paperswithcode.com/method/scaled). |
Given the following machine learning model name: Polynomial Convolution, provide a description of the model | PolyConv learns continuous distributions as the convolutional filters to share the weights across different vertices of graphs or points of point clouds. |
Given the following machine learning model name: Characterizable Invertible 3x3 Convolution, provide a description of the model | Characterizable Invertible $3\times3$ Convolution |
Given the following machine learning model name: AutoAugment, provide a description of the model | **AutoAugment** is an automated approach to find data augmentation policies from data. It formulates the problem of finding the best augmentation policy as a discrete search problem. It consists of two components: a search algorithm and a search space.
At a high level, the search algorithm (implemented as a controller RNN) samples a data augmentation policy $S$, which has information about what image processing operation to use, the probability of using the operation in each batch, and the magnitude of the operation. The policy $S$ is used to train a neural network with a fixed architecture, whose validation accuracy $R$ is sent back to update the controller. Since $R$ is not differentiable, the controller will be updated by policy gradient methods.
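To make the policy structure concrete, here is a toy sketch of applying one sampled policy; the operations are simplified stand-ins acting on a float array, not PIL's actual implementations, and the policy triples are made up:

```python
import numpy as np

# toy stand-ins for image operations (hypothetical, for illustration only)
def invert(img, _mag):    return 255 - img
def solarize(img, mag):   return np.where(img >= mag, 255 - img, img)
def brightness(img, mag): return np.clip(img * (mag / 5.0), 0, 255)

# a sampled policy: (operation, probability, magnitude) triples
policy = [(invert, 0.5, 0), (solarize, 0.8, 128), (brightness, 0.3, 7)]

def apply_policy(img, policy, rng):
    for op, prob, mag in policy:
        if rng.random() < prob:   # apply each op with its probability
            img = op(img, mag)
    return img

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4)).astype(np.float64)
out = apply_policy(img, policy, rng)
print(out)
```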
The operations used are from PIL, a popular Python image library: all functions in PIL that accept an image as input and output an image. It additionally uses two other augmentation techniques: [Cutout](https://paperswithcode.com/method/cutout) and SamplePairing. The operations searched over are ShearX/Y, TranslateX/Y, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout and Sample Pairing. |
Given the following machine learning model name: PonderNet, provide a description of the model | **PonderNet** is an adaptive computation method that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet learns end-to-end the number of computational steps to achieve an effective compromise between training prediction accuracy, computational cost and generalization. |
Given the following machine learning model name: EfficientNet, provide a description of the model | **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use $2^N$ times more computational resources, then we can simply increase the network depth by $\alpha ^ N$, width by $\beta ^ N$, and image size by $\gamma ^ N$, where $\alpha, \beta, \gamma$ are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient $\phi$ to uniformly scale network width, depth, and resolution in a principled way.
The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.
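Concretely, with the coefficients reported for the EfficientNet-B0 grid search ($\alpha=1.2$, $\beta=1.1$, $\gamma=1.15$), the scaling for a given $\phi$ is:

```python
# compound scaling: depth, width and resolution all grow with phi
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi):
    # depth x alpha^phi, width x beta^phi, resolution x gamma^phi
    return alpha ** phi, beta ** phi, gamma ** phi

d, w, r = scale(3)
print(f"depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")

# the coefficients satisfy alpha * beta^2 * gamma^2 ~= 2, so increasing
# phi by one roughly doubles the FLOPS budget
print(alpha * beta**2 * gamma**2)
```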
The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to squeeze-and-excitation blocks.
EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. |
Given the following machine learning model name: Early Learning Regularization, provide a description of the model | |
Given the following machine learning model name: SimCLR, provide a description of the model | **SimCLR** is a framework for contrastive learning of visual representations. It learns representations by maximizing agreement between differently augmented views of the same data example via a contrastive loss in the latent space. It consists of:
- A stochastic data augmentation module that transforms any given data example randomly, resulting in two correlated views of the same example, denoted $\mathbf{\tilde{x}\_{i}}$ and $\mathbf{\tilde{x}\_{j}}$, which is considered a positive pair. SimCLR sequentially applies three simple augmentations: random cropping followed by resize back to the original size, random color distortions, and [random Gaussian blur](https://paperswithcode.com/method/random-gaussian-blur). The authors find that random cropping and color distortion are crucial to achieving good performance.
- A neural network base encoder $f\left(·\right)$ that extracts representation vectors from augmented data examples. The framework allows various choices of the network architecture without any constraints. The authors opt for simplicity and adopt [ResNet](https://paperswithcode.com/method/resnet) to obtain $h\_{i} = f\left(\mathbf{\tilde{x}}\_{i}\right) = \text{ResNet}\left(\mathbf{\tilde{x}}\_{i}\right)$ where $h\_{i} \in \mathbb{R}^{d}$ is the output after the [average pooling](https://paperswithcode.com/method/average-pooling) layer.
- A small neural network projection head $g\left(·\right)$ that maps representations to the space where contrastive loss is applied. Authors use a MLP with one hidden layer to obtain $z\_{i} = g\left(h\_{i}\right) = W^{(2)}\sigma\left(W^{(1)}h\_{i}\right)$ where $\sigma$ is a [ReLU](https://paperswithcode.com/method/relu) nonlinearity. The authors find it beneficial to define the contrastive loss on $z\_{i}$’s rather than $h\_{i}$’s.
- A contrastive loss function defined for a contrastive prediction task. Given a set {$\mathbf{\tilde{x}}\_{k}$} including a positive pair of examples $\mathbf{\tilde{x}}\_{i}$ and $\mathbf{\tilde{x}\_{j}}$ , the contrastive prediction task aims to identify $\mathbf{\tilde{x}}\_{j}$ in {$\mathbf{\tilde{x}}\_{k}$}$\_{k\neq{i}}$ for a given $\mathbf{\tilde{x}}\_{i}$.
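The steps above can be condensed into a small NumPy sketch of the NT-Xent loss (rows $2k$ and $2k{+}1$ are assumed to be the two views of example $k$):

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    # z: (2N, d); rows 2k and 2k+1 are the two augmented views of example k
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity
    pos = np.arange(len(z)) ^ 1                        # index of the positive
    log_prob = sim[np.arange(len(z)), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
loss = nt_xent(rng.normal(size=(8, 16)))   # N = 4 examples, 8 views
print(loss)
```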
A minibatch of $N$ examples is randomly sampled and the contrastive prediction task is defined on pairs of augmented examples derived from the minibatch, resulting in $2N$ data points. Negative examples are not sampled explicitly. Instead, given a positive pair, the other $2(N − 1)$ augmented examples within a minibatch are treated as negative examples. A [NT-Xent](https://paperswithcode.com/method/nt-xent) (the normalized
temperature-scaled cross entropy loss) loss function is used (see components). |
Given the following machine learning model name: GeniePath, provide a description of the model | GeniePath is a scalable approach for learning adaptive receptive fields of neural networks defined on permutation invariant graph data. In GeniePath, we propose an adaptive path layer consisting of two complementary functions designed for breadth and depth exploration respectively, where the former learns the importance of different-sized neighborhoods, while the latter extracts and filters signals aggregated from neighbors of different hops away.
Description and image from: [GeniePath: Graph Neural Networks with Adaptive Receptive Paths](https://arxiv.org/pdf/1802.00910.pdf) |
Given the following machine learning model name: Dialogue-Adaptive Pre-training Objective, provide a description of the model | **Dialogue-Adaptive Pre-training Objective (DAPO)** is a pre-training objective for dialogue adaptation, designed to measure the quality of dialogues from multiple important aspects: those already targeted by general LM pre-training objectives, such as Readability, Consistency and Fluency, as well as aspects that are significant for assessing dialogues but ignored by general LM pre-training objectives, such as Diversity and Specificity. |
Given the following machine learning model name: Convolutional Block Attention Module, provide a description of the model | **Convolutional Block Attention Module (CBAM)** is an attention module for convolutional neural networks. Given an intermediate feature map, the module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement.
Given an intermediate feature map $\mathbf{F} \in \mathbb{R}^{C×H×W}$ as input, CBAM sequentially infers a 1D channel attention map $\mathbf{M}\_{c} \in \mathbb{R}^{C×1×1}$ and a 2D spatial attention map $\mathbf{M}\_{s} \in \mathbb{R}^{1×H×W}$. The overall attention process can be summarized as:
$$ \mathbf{F}' = \mathbf{M}\_{c}\left(\mathbf{F}\right) \otimes \mathbf{F} $$
$$ \mathbf{F}'' = \mathbf{M}\_{s}\left(\mathbf{F'}\right) \otimes \mathbf{F'} $$
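The broadcasting involved can be checked with a few lines of NumPy (the attention maps here are random placeholders; in CBAM they come from pooling followed by a shared MLP and a convolution, respectively):

```python
import numpy as np

C, H, W = 3, 4, 4
F = np.random.rand(C, H, W)       # intermediate feature map
Mc = np.random.rand(C, 1, 1)      # 1D channel attention map, C x 1 x 1
Ms = np.random.rand(1, H, W)      # 2D spatial attention map, 1 x H x W

F1 = Mc * F    # channel attention, broadcast along the spatial dimensions
F2 = Ms * F1   # spatial attention, broadcast along the channel dimension
print(F2.shape)   # (3, 4, 4): the refined output keeps the input shape
```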
During multiplication, the attention values are broadcasted (copied) accordingly: channel attention values are broadcasted along the spatial dimension, and vice versa. $\mathbf{F}''$ is the final refined
output. |
Given the following machine learning model name: Fixup Initialization, provide a description of the model | **Fixup Initialization**, or **Fixed-Update Initialization**, is an initialization method that rescales the standard initialization of [residual branches](https://paperswithcode.com/method/residual-block) by adjusting for the network architecture. Fixup aims to enable training very deep [residual networks](https://paperswithcode.com/method/resnet) stably at a maximal learning rate without [normalization](https://paperswithcode.com/methods/category/normalization).
The steps are as follows:
1. Initialize the classification layer and the last layer of each residual branch to 0.
2. Initialize every other layer using a standard method, e.g. [Kaiming Initialization](https://paperswithcode.com/method/he-initialization), and scale only the weight layers inside residual branches by $L^{-\frac{1}{2m-2}}$.
3. Add a scalar multiplier (initialized at 1) in every branch and a scalar bias (initialized at 0) before each [convolution](https://paperswithcode.com/method/convolution), [linear](https://paperswithcode.com/method/linear-layer), and element-wise activation layer. |
Given the following machine learning model name: Adaptive Span Transformer, provide a description of the model | The **Adaptive Attention Span Transformer** is a Transformer that utilises an improvement to the self-attention layer called [adaptive masking](https://paperswithcode.com/method/adaptive-masking) that allows the model to choose its own context size. This results in a network where each attention layer gathers information on its own context. This allows for scaling to input sequences of more than 8k tokens.
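The adaptive masking component can be sketched as a soft mask over token distances; this ramp form follows the adaptive-span formulation, with the span $z$ and ramp width $R$ chosen here purely for illustration:

```python
import numpy as np

def soft_mask(distances, z, R=32):
    # 1 for tokens within the learned span z, a linear ramp of width R
    # just beyond it, and 0 for everything farther away
    return np.clip((R + z - distances) / R, 0.0, 1.0)

d = np.arange(200)                # distance of each past token
m = soft_mask(d, z=100)           # a head whose learned span is ~100
print(m[0], m[120], m[150])       # 1.0, then a partial weight, then 0.0
```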
Their proposals are based on the observation that, with the dense attention of a traditional [Transformer](https://paperswithcode.com/method/transformer), each attention head shares the same attention span $S$ (attending over the full context). But many attention heads can specialize to a more local context, while others attend over the longer sequence. This motivates the need for a variant of self-attention that allows the model to choose its own context size (adaptive masking - see components). |
Given the following machine learning model name: PointQuad-Transformer, provide a description of the model | **PQ-Transformer**, or **PointQuad-Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture that predicts 3D objects and layouts simultaneously, using point cloud inputs. Unlike existing methods that either estimate layout keypoints or edges, room layouts are directly parameterized as a set of quads. Along with the quad representation, a physical constraint loss function is used that discourages object-layout interference.
Given an input 3D point cloud of $N$ points, the point cloud feature learning backbone extracts $M$ context-aware point features of $\left(3+C\right)$ dimensions, through sampling and grouping. A voting module and a farthest point sampling (FPS) module are used to generate $K\_{1}$ object proposals and $K\_{2}$ quad proposals respectively. Then the proposals are processed by a transformer decoder to further refine proposal features. Through several feedforward layers and non-maximum suppression (NMS), the proposals become the final object bounding boxes and layout quads. |
Given the following machine learning model name: StoGCN, provide a description of the model | StoGCN is a control-variate-based algorithm that allows sampling an arbitrarily small neighbor size. It provides a new theoretical guarantee that the algorithm converges to a local optimum of GCN. |
Given the following machine learning model name: Early Stopping, provide a description of the model | **Early Stopping** is a regularization technique for deep neural networks that stops training when parameter updates no longer yield improvements on a validation set. In essence, we store and update the current best parameters during training, and when parameter updates no longer yield an improvement (after a set number of iterations) we stop training and use the last best parameters. It works as a regularizer by restricting the optimization procedure to a smaller volume of parameter space.
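A minimal sketch of the procedure (the validation curve here is a made-up function of the step index):

```python
import math

def train_with_early_stopping(steps, val_loss_fn, patience=5):
    # keep the best "parameters" (here, the best step) seen so far; stop
    # once validation loss has failed to improve for `patience` checks
    best_loss, best_step, since_best = math.inf, 0, 0
    for step in range(steps):
        loss = val_loss_fn(step)          # validate after this update
        if loss < best_loss:
            best_loss, best_step, since_best = loss, step, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_step, best_loss

def curve(t):
    # improves until step 10, then worsens (overfitting)
    return (t - 10) ** 2 + 1

print(train_with_early_stopping(100, curve))   # stops early -> (10, 1)
```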
Image Source: [Ramazan Gençay](https://www.researchgate.net/figure/Early-stopping-based-on-cross-validation_fig1_3302948) |
Given the following machine learning model name: MultiGrain, provide a description of the model | **MultiGrain** is a type of image model that learns a single embedding for classes, instances and copies. In other words, it is a convolutional neural network that is suitable for both image classification and instance retrieval. We learn MultiGrain by jointly training an image embedding for multiple tasks. The resulting representation is compact and can outperform narrowly-trained embeddings. The learned embedding output incorporates different levels of granularity. |
Given the following machine learning model name: BIMAN, provide a description of the model | **BIMAN**, or **Bot Identification by commit Message, commit Association, and author Name**, is a technique to detect bots that commit code. It is comprised of three methods that consider independent aspects of the commits made by a particular author: 1) Commit Message: Identify if commit messages are being generated from templates; 2) Commit Association: Predict if an author is a bot using a random forest model, with features related to files and projects associated with the commits as predictors; and 3) Author Name: Match author’s name and email to common bot patterns. |
Given the following machine learning model name: Inception-ResNet-v2-C, provide a description of the model | **Inception-ResNet-v2-C** is an image model block for an 8 x 8 grid used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture. It largely follows the idea of Inception modules - and grouped convolutions - but also includes residual connections. |
Given the following machine learning model name: Parallax, provide a description of the model | **Parallax** is a hybrid parallel method for training large neural networks. Parallax is a framework that optimizes data parallel training by utilizing the sparsity of model parameters. Parallax introduces a hybrid approach that combines Parameter Server and AllReduce architectures to optimize the amount of data transfer according to the sparsity.
Parallax pursues a hybrid approach that uses the Parameter Server architecture for handling sparse variables and the AllReduce architecture for handling dense variables. Moreover, Parallax partitions large sparse variables by a near-optimal number of partitions to maximize parallelism while maintaining low computation and communication overhead. Parallax further optimizes training with local aggregation and smart operation placement to mitigate communication overhead. Graph transformation in Parallax automatically applies all of these optimizations and the data parallel training itself at the framework level to minimize user efforts for writing and optimizing a distributed program by composing low-level primitives. |
Given the following machine learning model name: Pythia, provide a description of the model | **Pythia** is a suite of decoder-only autoregressive language models all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. The model architecture and hyperparameters largely follow GPT-3, with a few notable deviations based on recent advances in best practices for large scale language modeling. |
Given the following machine learning model name: High-Order Consensuses, provide a description of the model | |
Given the following machine learning model name: RPDet, provide a description of the model | **RPDet**, or **RepPoints Detector**, is an anchor-free, two-stage object detection model based on deformable convolutions. [RepPoints](https://paperswithcode.com/method/reppoints) serve as the basic object representation throughout the detection system. Starting from the center points, the first set of RepPoints is obtained via regressing offsets over the center points. The learning of these RepPoints is driven by two objectives: 1) the top-left and bottom-right points distance loss between the induced pseudo box and the ground-truth bounding box; 2) the object recognition loss of the subsequent stage. |
Given the following machine learning model name: PipeDream, provide a description of the model | PipeDream is an asynchronous pipeline parallel strategy for training large neural networks. It adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. |
Given the following machine learning model name: E-MBConv, provide a description of the model | |
Given the following machine learning model name: Quasi-Recurrent Neural Network, provide a description of the model | A **QRNN**, or **Quasi-Recurrent Neural Network**, is a type of recurrent neural network that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Due to their increased parallelism, they can be up to 16 times faster at train and test time than [LSTMs](https://paperswithcode.com/method/lstm).
Given an input sequence $\mathbf{X} \in \mathbb{R}^{T\times{n}}$ of $T$ n-dimensional vectors $\mathbf{x}\_{1}, \dots, \mathbf{x}\_{T}$, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of $m$ filters, producing a sequence $\mathbf{Z} \in \mathbb{R}^{T\times{m}}$ of m-dimensional candidate vectors $\mathbf{z}\_{t}$. Masked convolutions are used so filters cannot access information from future timesteps (implemented with left padding).
Additional convolutions are applied with separate filter banks to obtain sequences of vectors for the
elementwise gates that are needed for the pooling function. While the candidate vectors are passed
through a $\tanh$ nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a
forget gate $f\_{t}$ and an output gate $o\_{t}$ at each timestep, the full set of computations in the convolutional component is then:
$$ \mathbf{Z} = \tanh\left(\mathbf{W}\_{z} ∗ \mathbf{X}\right) $$
$$ \mathbf{F} = \sigma\left(\mathbf{W}\_{f} ∗ \mathbf{X}\right) $$
$$ \mathbf{O} = \sigma\left(\mathbf{W}\_{o} ∗ \mathbf{X}\right) $$
where $\mathbf{W}\_{z}$, $\mathbf{W}\_{f}$, and $\mathbf{W}\_{o}$, each in $\mathbb{R}^{k×n×m}$, are the convolutional filter banks and ∗ denotes a [masked convolution](https://paperswithcode.com/method/masked-convolution) along the timestep dimension. Dynamic [average pooling](https://paperswithcode.com/method/average-pooling) by Balduzzi & Ghifary (2016) is used, which uses only a forget gate:
$$ \mathbf{h}\_{t} = \mathbf{f}\_{t} \odot{\mathbf{h}\_{t−1}} + \left(1 − \mathbf{f}\_{t}\right) \odot{\mathbf{z}\_{t}} $$
This is denoted f-pooling. The function may also include an output gate:
$$ \mathbf{c}\_{t} = \mathbf{f}\_{t} \odot{\mathbf{c}\_{t−1}} + \left(1 − \mathbf{f}\_{t}\right) \odot{\mathbf{z}\_{t}} $$
$$ \mathbf{h}\_{t} = \mathbf{o}\_{t} \odot{\mathbf{c}\_{t}} $$
This is denoted fo-pooling. Alternatively, the recurrence relation may include independent input and forget gates:
$$ \mathbf{c}\_{t} = \mathbf{f}\_{t} \odot{\mathbf{c}\_{t−1}} + \mathbf{i}\_{t}\odot{\mathbf{z}\_{t}} $$
$$ \mathbf{h}\_{t} = \mathbf{o}\_{t} \odot{\mathbf{c}\_{t}} $$
This is denoted ifo-pooling. In each case $h$ or $c$ is initialized to zero. The recurrent parts of these functions must be calculated for each timestep in the sequence, but their parallelism along the feature dimension means that evaluating them even over long sequences requires a negligible amount of computation time.
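As a concrete illustration, the masked convolution and f-pooling above can be sketched in NumPy (a toy sketch following the notation in this description, not an official implementation; the example shapes are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_conv(X, W):
    """Causal convolution along the timestep dimension with left zero-padding,
    so output t sees only inputs x_{t-k+1}..x_t.  X: (T, n); W: (k, n, m)."""
    k, n, m = W.shape
    T = X.shape[0]
    Xpad = np.concatenate([np.zeros((k - 1, n)), X], axis=0)
    out = np.zeros((T, m))
    for t in range(T):
        out[t] = np.einsum('kn,knm->m', Xpad[t:t + k], W)
    return out

def qrnn_f_pooling(Z, F):
    """f-pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t, with h_0 = 0.
    Z, F: (T, m) candidate vectors and forget gates; returns H: (T, m)."""
    H = np.zeros_like(Z)
    h = np.zeros(Z.shape[1])
    for t in range(len(Z)):
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        H[t] = h
    return H

# Toy shapes: T timesteps, n input channels, m filters, filter width k.
rng = np.random.default_rng(0)
T, n, m, k = 5, 4, 3, 2
X = rng.standard_normal((T, n))
Z = np.tanh(masked_conv(X, rng.standard_normal((k, n, m))))
F = sigmoid(masked_conv(X, rng.standard_normal((k, n, m))))
H = qrnn_f_pooling(Z, F)
```

Note that only the short loop over timesteps in `qrnn_f_pooling` is sequential; the convolutions and the elementwise gate arithmetic are fully parallel across timesteps and channels.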
A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions. |
Given the following machine learning model name: LV-ViT, provide a description of the model | **LV-ViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that uses token labeling as a training objective. Different from the standard training objective of ViTs, which computes the classification loss on an additional trainable class token, token labeling takes advantage of all the image patch tokens to compute the training loss in a dense manner. Specifically, token labeling reformulates the image classification problem into multiple token-level recognition problems and assigns each patch token an individual location-specific supervision signal generated by a machine annotator. |
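A minimal sketch of such a token-labeling objective, assuming soft per-token targets and a balancing weight `beta` (the function names and the 0.5 weight are illustrative assumptions, not taken from this description):

```python
import numpy as np

def cross_entropy(logits, target):
    """Soft-label cross-entropy for one token: -sum(target * log_softmax(logits))."""
    logits = logits - logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -(target * log_probs).sum()

def token_labeling_loss(cls_logits, cls_target, patch_logits, patch_targets, beta=0.5):
    """Classification loss on the class token plus a dense auxiliary loss
    averaged over all patch tokens (beta is an assumed balancing weight)."""
    cls_loss = cross_entropy(cls_logits, cls_target)
    patch_loss = np.mean([cross_entropy(l, t)
                          for l, t in zip(patch_logits, patch_targets)])
    return cls_loss + beta * patch_loss
```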
Given the following machine learning model name: WenLan, provide a description of the model | Proposes a two-tower pre-training model called BriVL within the cross-modal contrastive learning framework. A cross-modal pre-training model is defined based on the image-text retrieval task. The main goal is thus to learn two encoders that can embed image and text samples into the same space for effective image-text retrieval. To enforce such cross-modal embedding learning, we introduce contrastive learning with the InfoNCE loss into the BriVL model. Given a text embedding, the learning objective aims to find the best image embedding from a batch of image embeddings. Similarly, given an image embedding, the learning objective is to find the best text embedding from a batch of text embeddings. The pre-training model learns a cross-modal embedding space by jointly training the image and text encoders to maximize the cosine similarity of the image and text embeddings of the true pair for each sample in the batch, while minimizing the cosine similarity of the embeddings of the other, incorrect pairs. |
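The symmetric InfoNCE objective described above can be sketched as follows (a generic NumPy illustration of in-batch contrastive learning, not BriVL's implementation; the temperature value is an assumption):

```python
import numpy as np

def logsumexp(x, axis=-1):
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of (image, text) pairs: for each image the
    matching text is the positive and the other texts in the batch are the
    negatives, and vice versa.  The temperature is an assumed hyperparameter."""
    img, txt = l2_normalize(image_emb), l2_normalize(text_emb)
    logits = img @ txt.T / temperature        # (B, B) scaled cosine similarities
    diag = np.arange(len(logits))             # true pairs lie on the diagonal
    loss_i2t = -(logits - logsumexp(logits, axis=1))[diag, diag].mean()
    loss_t2i = -(logits.T - logsumexp(logits.T, axis=1))[diag, diag].mean()
    return 0.5 * (loss_i2t + loss_t2i)
```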
Given the following machine learning model name: SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings, provide a description of the model | Monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is essential knowledge in many computer vision tasks, such as scene understanding and visual odometry, which are key components in autonomous and robotic systems.
Approaches based on state-of-the-art vision transformer architectures are extremely deep and complex, making them unsuitable for real-time inference on edge and autonomous systems equipped with low resources (e.g. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time performance on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimates with high accuracy from low-resolution images using minimal hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rate, achieving real-time performance over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets. |
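The depthwise separable factorization that the pyramidal pooling layers build on can be sketched generically (this is the standard depthwise-plus-pointwise building block, not SPEED's actual layer):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise, pointwise):
    """Depthwise separable convolution on an (H, W, C) feature map: a per-channel
    k x k depthwise pass followed by a 1x1 pointwise channel mix.  This costs
    roughly k*k*C + C*C_out multiplies per position instead of k*k*C*C_out
    for a standard convolution."""
    H, W, C = x.shape
    k = depthwise.shape[0]                      # depthwise filters: (k, k, C)
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    dw = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]     # (k, k, C) receptive field
            dw[i, j] = (patch * depthwise).sum(axis=(0, 1))  # one filter per channel
    return dw @ pointwise                       # pointwise weights: (C, C_out)
```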
Given the following machine learning model name: Multilingual Universal Sentence Encoder, provide a description of the model | |
Given the following machine learning model name: Cutout, provide a description of the model | **Cutout** is an image augmentation and regularization technique that randomly masks out square regions of the input during training, and can be used to improve the robustness and overall performance of convolutional neural networks. The main motivation for cutout comes from the problem of object occlusion, which is commonly encountered in many computer vision tasks, such as object recognition, tracking, or human pose estimation. By generating new images that simulate occluded examples, we not only better prepare the model for encounters with occlusions in the real world, but the model also learns to take more of the image context into consideration when making decisions. |
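A minimal sketch of the masking operation (the patch size and border handling below are common choices, not prescribed by this description):

```python
import numpy as np

def cutout(image, size=8, rng=None):
    """Zero out a square region at a random location in an (H, W, C) image.
    The square is centered at a random pixel and clipped at the borders, as in
    common implementations; `size` is a free hyperparameter."""
    if rng is None:
        rng = np.random.default_rng()
    H, W = image.shape[:2]
    cy, cx = rng.integers(H), rng.integers(W)   # random center
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, H)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, W)
    out = image.copy()
    out[y0:y1, x0:x1] = 0.0
    return out
```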
Given the following machine learning model name: context2vec, provide a description of the model | **context2vec** is an unsupervised model for learning generic context embeddings of wide sentential contexts, using a bidirectional [LSTM](https://paperswithcode.com/method/lstm). A large plain-text corpus is used to train a neural model that embeds entire sentential contexts and target words in the same low-dimensional space, which is optimized to reflect inter-dependencies between targets and their entire sentential context as a whole.
In contrast to word2vec, which uses context modeling mostly internally and considers the target word embeddings its main output, the focus of context2vec is the context representation. context2vec achieves its objective by assigning similar embeddings to sentential contexts and their associated target words. |
Given the following machine learning model name: PIRL, provide a description of the model | **Pretext-Invariant Representation Learning (PIRL, pronounced as “pearl”)** learns invariant representations based on pretext tasks. PIRL is applied with a commonly used pretext task that involves solving [jigsaw](https://paperswithcode.com/method/jigsaw) puzzles. Specifically, PIRL constructs image representations that are similar to the representations of transformed versions of the same image and different from the representations of other images. |