| prompt | description |
|---|---|
Given the following machine learning model name: Mix-FFN, provide a description of the model | **Mix-FFN** is a feedforward layer used in the [SegFormer](https://paperswithcode.com/method/segformer) architecture. [ViT](https://www.paperswithcode.com/method/vision-transformer) uses [positional encoding](https://paperswithcode.com/methods/category/position-embeddings) (PE) to introduce location information. However, the resolution of PE is fixed; when the test resolution differs from the training one, the positional code must be interpolated, which often degrades accuracy. To alleviate this problem, [CPVT](https://www.paperswithcode.com/method/cpvt) uses a $3 \times 3$ Conv together with the PE to implement a data-driven PE. The authors of Mix-FFN argue that positional encoding is actually not necessary for semantic segmentation. Instead, they use Mix-FFN, which exploits the effect of zero padding leaking location information by directly using a $3 \times 3$ Conv in the feed-forward network (FFN). Mix-FFN can be formulated as:
$$
\mathbf{x}\_{\text {out }}=\operatorname{MLP}\left(\operatorname{GELU}\left(\operatorname{Conv}\_{3 \times 3}\left(\operatorname{MLP}\left(\mathbf{x}\_{i n}\right)\right)\right)\right)+\mathbf{x}\_{i n}
$$
where $\mathbf{x}\_{i n}$ is the feature from a self-attention module. Mix-FFN mixes a $3 \times 3$ convolution and an MLP into each FFN. |
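The formula above can be sketched in NumPy. This is a minimal illustration, not the SegFormer implementation: all shapes and weights are fabricated, and the $3 \times 3$ conv is taken as depthwise for brevity.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def conv3x3_depthwise(x, k):
    """x: (H, W, C), k: (3, 3, C). Zero-padded 'same' depthwise 3x3 conv.
    The zero padding is what leaks location information in Mix-FFN."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i+H, j:j+W, :] * k[i, j, :]
    return out

def mix_ffn(x, W1, k, W2):
    """x_out = MLP(GELU(Conv3x3(MLP(x_in)))) + x_in
    x: (H, W, C); W1: (C, E) expansion MLP; k: (3, 3, E) conv; W2: (E, C)."""
    h = x @ W1                        # first MLP (per-pixel linear)
    h = gelu(conv3x3_depthwise(h, k))
    return h @ W2 + x                 # second MLP plus residual

rng = np.random.default_rng(0)
H, W, C, E = 8, 8, 4, 16
x = rng.standard_normal((H, W, C))
out = mix_ffn(x, rng.standard_normal((C, E)) * 0.1,
              rng.standard_normal((3, 3, E)) * 0.1,
              rng.standard_normal((E, C)) * 0.1)
print(out.shape)  # (8, 8, 4)
```

Note that with all weights set to zero the residual connection passes the input through unchanged, as the formula implies.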
Given the following machine learning model name: Multi-DConv-Head Attention, provide a description of the model | **Multi-DConv-Head Attention**, or **MDHA**, is a type of [Multi-Head Attention](https://paperswithcode.com/method/multi-head-attention) that utilizes [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution) after the multi-head projections. It is used in the [Primer](https://paperswithcode.com/method/primer) [Transformer](https://paperswithcode.com/method/transformer) architecture.
Specifically, 3x1 depthwise convolutions are added after each of the multi-head projections for query $Q$, key $K$ and value $V$ in self-attention. These depthwise convolutions are performed over the spatial dimension of each dense projection’s output. Interestingly, this ordering of pointwise followed by depthwise convolution is the reverse of typical [separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution), which the authors find to be less effective. They also find that wider depthwise convolution and [standard convolution](https://paperswithcode.com/method/convolution) not only do not improve performance, but in several cases hurt it.
MDHA is similar to [Convolutional Attention](https://paperswithcode.com/method/cvt), which uses [separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) instead of depthwise convolution and does not apply convolution operations per attention head as in MDHA. |
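A minimal NumPy sketch of the MDHA ordering described above: pointwise per-head projections first, then a 3x1 depthwise conv over the sequence dimension, then standard scaled dot-product attention. Shapes, weights, and the symmetric zero padding are illustrative; this is not the Primer implementation.

```python
import numpy as np

def dconv3x1(x, k):
    """x: (T, H, D) per-head features; k: (3, H, D) depthwise kernel.
    3x1 depthwise conv over the sequence dimension, zero-padded 'same'."""
    T = x.shape[0]
    xp = np.pad(x, ((1, 1), (0, 0), (0, 0)))
    return sum(xp[i:i+T] * k[i] for i in range(3))

def mdha(x, Wq, Wk, Wv, kq, kk, kv):
    """Pointwise head projections followed by depthwise convs (the reverse of
    a typical separable conv), then attention. x: (T, C); W*: (H, C, D)."""
    T, C = x.shape
    Hh, _, D = Wq.shape
    q = dconv3x1(np.einsum('tc,hcd->thd', x, Wq), kq)
    k = dconv3x1(np.einsum('tc,hcd->thd', x, Wk), kk)
    v = dconv3x1(np.einsum('tc,hcd->thd', x, Wv), kv)
    logits = np.einsum('thd,shd->hts', q, k) / np.sqrt(D)
    a = np.exp(logits - logits.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)               # softmax over keys
    return np.einsum('hts,shd->thd', a, v).reshape(T, Hh * D)

rng = np.random.default_rng(0)
T, C, Hh, D = 6, 8, 2, 4
out = mdha(rng.standard_normal((T, C)),
           *[rng.standard_normal((Hh, C, D)) * 0.1 for _ in range(3)],
           *[rng.standard_normal((3, Hh, D)) * 0.1 for _ in range(3)])
print(out.shape)  # (6, 8)
```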
Given the following machine learning model name: Lifelong Infinite Mixture, provide a description of the model | **LIMix**, or **Lifelong Infinite Mixture**, is a lifelong learning model which grows a mixture of models to adapt to an increasing number of tasks. LIMix can automatically expand its network architectures or choose an appropriate component to adapt its parameters for learning a new task, while preserving its previously learnt information. Knowledge is incorporated by means of Dirichlet processes by using a gating mechanism which computes the dependence between the knowledge learnt previously and stored in each component, and a new set of data. Besides, a Student model is trained which can accumulate cross-domain representations over time and make quick inferences. |
Given the following machine learning model name: Crossbow, provide a description of the model | **Crossbow** is a single-server multi-GPU system for training deep learning models that enables users to freely choose their preferred batch size—however small—while scaling to multiple GPUs. Crossbow uses many parallel model replicas and avoids reduced statistical efficiency through a new synchronous training method. **SMA**, a synchronous variant of model averaging, is used, in which replicas independently explore the solution space with gradient descent but adjust their search synchronously based on the trajectory of a globally-consistent average model. |
Given the following machine learning model name: Alternating Direction Method of Multipliers, provide a description of the model | The **alternating direction method of multipliers** (**ADMM**) is an algorithm that solves convex optimization problems by breaking them into smaller pieces, each of which is then easier to handle. It takes the form of a decomposition-coordination procedure, in which the solutions to small
local subproblems are coordinated to find a solution to a large global problem. ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. It turns out to be equivalent or closely related to many other algorithms
as well, such as Douglas-Rachford splitting from numerical analysis, Spingarn's method of partial inverses, Dykstra's alternating projections method, Bregman iterative algorithms for $\ell_1$ problems in signal processing, proximal methods, and many others.
Text Source: [https://stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf](https://stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf)
Image Source: [here](https://www.slideshare.net/derekcypang/alternating-direction) |
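The decomposition-coordination idea can be sketched on a concrete problem. Below is a minimal NumPy ADMM for the lasso, a standard textbook instance; the penalty, step size, and iteration count are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z.
    Splits the smooth least-squares piece (x-update) from the non-smooth
    l1 piece (z-update); the dual variable u coordinates the two."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))              # local quadratic subproblem
        z = soft_threshold(x + u, lam / rho)       # local l1 subproblem (prox)
        u = u + x - z                              # dual (coordination) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true                                     # noiseless sparse signal
x_hat = admm_lasso(A, b, lam=0.1)
print(np.round(x_hat, 2))
```

The z-update is exactly the proximal operator of the l1 norm, which is where the connection to Bregman iterative methods and proximal methods mentioned above shows up.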
Given the following machine learning model name: Sarsa Lambda, provide a description of the model | **Sarsa($\lambda$)** extends eligibility traces to action-value methods. It has the same update rule as **TD($\lambda$)**, but we use the action-value form of the TD error:
$$ \delta\_{t} = R\_{t+1} + \gamma\hat{q}\left(S\_{t+1}, A\_{t+1}, \mathbf{w}\_{t}\right) - \hat{q}\left(S\_{t}, A\_{t}, \mathbf{w}\_{t}\right) $$
and the action-value form of the [eligibility trace](https://paperswithcode.com/method/eligibility-trace):
$$ \mathbf{z}\_{-1} = \mathbf{0} $$
$$ \mathbf{z}\_{t} = \gamma\lambda\mathbf{z}\_{t-1} + \nabla\hat{q}\left(S\_{t}, A\_{t}, \mathbf{w}\_{t} \right), \quad 0 \leq t \leq T $$
Source: Sutton and Barto, Reinforcement Learning, 2nd Edition |
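The two updates above can be sketched directly in NumPy with a linear action-value function. The features, rewards, and hyperparameters below are fabricated for illustration.

```python
import numpy as np

def sarsa_lambda_update(w, z, x_t, x_next, r, gamma, lam, alpha, terminal=False):
    """One semi-gradient Sarsa(lambda) step with linear q-hat(s, a, w) = w @ x(s, a).
    x_t, x_next: feature vectors for (S_t, A_t) and (S_{t+1}, A_{t+1});
    for linear q-hat the gradient is just the feature vector x_t."""
    q_t = w @ x_t
    q_next = 0.0 if terminal else w @ x_next
    delta = r + gamma * q_next - q_t          # action-value TD error
    z = gamma * lam * z + x_t                 # accumulating eligibility trace
    w = w + alpha * delta * z
    return w, z

# Tiny example: two features, a few fabricated transitions.
w = np.zeros(2)
z = np.zeros(2)                               # z_{-1} = 0
steps = [(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 1.0, False),
         (np.array([0.0, 1.0]), np.array([0.0, 0.0]), 2.0, True)]
for x_t, x_next, r, done in steps:
    w, z = sarsa_lambda_update(w, z, x_t, x_next, r,
                               gamma=0.9, lam=0.8, alpha=0.5, terminal=done)
print(w)  # [1.22 1.  ]
```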
Given the following machine learning model name: PReLU-Net, provide a description of the model | **PReLU-Net** is a type of convolutional neural network that utilises parameterized ReLUs for its activation function. It also uses a robust initialization scheme - afterwards known as [Kaiming Initialization](https://paperswithcode.com/method/he-initialization) - that accounts for non-linear activation functions. |
Given the following machine learning model name: KungFu, provide a description of the model | **KungFu** is a distributed ML library for TensorFlow that is designed to enable adaptive training. KungFu allows users to express high-level Adaptation Policies (APs) that describe how to change hyper- and system parameters during training. APs take real-time monitored metrics (e.g. signal-to-noise ratios and noise scale) as input and trigger control actions (e.g. cluster rescaling or synchronisation strategy updates). For execution, APs are translated into monitoring and control operators, which are embedded in the dataflow graph. APs exploit an efficient asynchronous collective communication layer, which ensures concurrency and consistency
of monitoring and adaptation operations. |
Given the following machine learning model name: Lower Bound on Transmission using Non-Linear Bounding Function in Single Image Dehazing, provide a description of the model | |
Given the following machine learning model name: PermuteFormer, provide a description of the model | **PermuteFormer** is a [Performer](https://paperswithcode.com/method/performer)-based model with relative position encoding that scales linearly on long sequences. PermuteFormer applies position-dependent transformation on queries and keys to encode positional information into the attention module. This transformation is carefully crafted so that the final output of self-attention is not affected by absolute positions of tokens.
Each token’s query / key feature is illustrated as a row of blocks in the figure, and its elements are marked with different colors. The position-aware permutation permutes elements of each token’s query / key feature along the head size dimension in each attention head. Depending on the token’s position, the permutation applied to query / key feature is different. |
Given the following machine learning model name: Hyperboloid Embeddings, provide a description of the model | **Hyperboloid Embeddings (HypE)** is a self-supervised dynamic reasoning framework that utilizes positive first-order existential queries on a knowledge graph (KG) to learn representations of its entities and relations as hyperboloids in a Poincaré ball. HypE models the positive first-order queries as geometric translation ($t$), intersection ($\cap$), and union ($\cup$). For the problem of KG reasoning on real-world datasets, HypE significantly outperforms the state-of-the-art results. HypE is also applied to an anomaly detection task on a popular e-commerce website's product taxonomy as well as hierarchically organized web articles, and demonstrates significant performance improvements compared to existing baseline methods. Finally, HypE embeddings can be visualized in a Poincaré ball to clearly interpret and comprehend the representation space. |
Given the following machine learning model name: Conditional Instance Normalization, provide a description of the model | **Conditional Instance Normalization** is a normalization technique where all convolutional weights of a style transfer network are shared across many styles. The goal of the procedure is to transform a layer's activations $x$ into a normalized activation $z$ specific to painting style $s$. Building off [instance normalization](https://paperswithcode.com/method/instance-normalization), we augment the $\gamma$ and $\beta$ parameters so that they're $N \times C$ matrices, where $N$ is the number of styles being modeled and $C$ is the number of output feature maps. Conditioning on a style is achieved as follows:
$$ z = \gamma\_{s}\left(\frac{x - \mu}{\sigma}\right) + \beta\_{s}$$
where $\mu$ and $\sigma$ are $x$’s mean and standard deviation taken across spatial axes and $\gamma\_{s}$ and $\beta\_{s}$ are obtained by selecting the row corresponding to $s$ in the $\gamma$ and $\beta$ matrices. One added benefit of this approach is that one can stylize a single image into $N$ painting styles with a single feed forward pass of the network with a batch size of $N$. |
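The formula above is a few lines of NumPy. The shapes and random parameters below are illustrative only.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, s, eps=1e-5):
    """x: (C, H, W) activations; gamma, beta: (N, C) per-style parameters;
    s: style index. z = gamma_s * (x - mu) / sigma + beta_s, with mu, sigma
    taken per channel across the spatial axes."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    g = gamma[s][:, None, None]    # select row s of the N x C matrix
    b = beta[s][:, None, None]
    return g * (x - mu) / (sigma + eps) + b

rng = np.random.default_rng(0)
N, C, H, W = 4, 3, 8, 8            # 4 styles, 3 output feature maps
gamma = rng.standard_normal((N, C))
beta = rng.standard_normal((N, C))
x = rng.standard_normal((C, H, W))
z = conditional_instance_norm(x, gamma, beta, s=2)
print(z.shape)  # (3, 8, 8)
```

After the transform, each channel's spatial mean equals $\beta\_{s}$ and its spatial standard deviation equals $|\gamma\_{s}|$, which is what makes a single row swap enough to switch styles.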
Given the following machine learning model name: Single-Headed Attention, provide a description of the model | **Single-Headed Attention** is a single-headed attention module used in the [SHA-RNN](https://paperswithcode.com/method/sha-rnn) language model. The principal design reasons for single-headedness were simplicity (avoiding running out of memory) and scepticism about the benefits of using multiple heads. |
Given the following machine learning model name: Deep Orthogonal Fusion of Local and Global Features, provide a description of the model | Image retrieval is the fundamental task of obtaining images similar to a query image from a database. A common image retrieval practice is to first retrieve candidate images via similarity search using global image features and then re-rank the candidates by leveraging their local features. Previous learning-based studies mainly focus on either global or local image representation learning to tackle the retrieval task. In this paper, we abandon the two-stage paradigm and seek to design an effective single-stage solution by integrating local and global information inside images into compact image representations. Specifically, we propose a Deep Orthogonal Local and Global (DOLG) information fusion framework for end-to-end image retrieval. It first attentively extracts representative local information with multi-atrous convolutions and self-attention. Components orthogonal to the global image representation are then extracted from the local information. Finally, the orthogonal components are concatenated with the global representation as a complement, and aggregation is performed to generate the final representation. The whole framework is end-to-end differentiable and can be trained with image-level labels. Extensive experimental results validate the effectiveness of our solution and show that our model achieves state-of-the-art image retrieval performance on the Revisited Oxford and Paris datasets. |
Given the following machine learning model name: VarifocalNet, provide a description of the model | **VarifocalNet** is a method aimed at accurately ranking a huge number of candidate detections in object detection. It consists of a new loss function, named [Varifocal Loss](https://paperswithcode.com/method/varifocal-loss), for training a dense object detector to predict the IoU-aware classification score (IACS), and a new efficient star-shaped bounding box feature representation for estimating the IACS and refining coarse bounding boxes. Combining these two new components and a bounding box refinement branch results in a dense object detector on the [FCOS](https://paperswithcode.com/method/fcos) architecture, which the authors call VarifocalNet, or VFNet for short. |
Given the following machine learning model name: IICNet, provide a description of the model | **Invertible Image Conversion Net**, or **IICNet**, is a generic framework for reversible image conversion tasks. Unlike previous encoder-decoder based methods, IICNet maintains a highly invertible structure based on invertible neural networks (INNs) to better preserve the information during conversion. It uses a relation module and a channel squeeze layer to improve the INN nonlinearity to extract cross-image relations and the network flexibility, respectively. |
Given the following machine learning model name: Cross-View Training, provide a description of the model | **Cross View Training**, or **CVT**, is a semi-supervised algorithm for training distributed word representations that makes use of unlabelled and labelled examples.
CVT adds $k$ auxiliary prediction modules to the model, a Bi-[LSTM](https://paperswithcode.com/method/lstm) encoder, which are used when learning on unlabeled examples. A prediction module is usually a small neural network (e.g., a hidden layer followed by a [softmax](https://paperswithcode.com/method/softmax) layer). Each one takes as input an intermediate representation $h^j(x_i)$ produced by the model (e.g., the outputs of one of the LSTMs in a Bi-LSTM model). It outputs a distribution over labels $p\_{j}^{\theta}\left(y\mid{x\_{i}}\right)$.
Each $h^j$ is chosen such that it only uses a part of the input $x_i$; the particular choice can depend on the task and model architecture. The auxiliary prediction modules are only used during training; the test-time predictions come from the primary prediction module that produces $p_\theta$. |
Given the following machine learning model name: SGD with Momentum, provide a description of the model | ### Why SGD with Momentum?
In deep learning, we use stochastic gradient descent (SGD) as an optimizer because, in the end, we want to find the weights and biases at which the model's loss is lowest. Plain SGD, however, does not work perfectly: in deep learning the cost function is non-convex, and on such a surface simple SGD can lead to poor performance. There are 3 main reasons why it does not work:
<img src="https://www.cs.umd.edu/~tomg/img/landscapes/shortHighRes.png" alt="Non-convex graph" style="width:400px;height :300px;" />
1) We end up in a local minimum and are not able to reach the global minimum
Starting from a random initial point, we can end up at a nearby local minimum and never reach the global minimum.
2) A saddle point can stop us from reaching the global minimum
A saddle point is a point where the surface goes upward in one direction and downward in another. Around it the slope changes very gradually, so the updates become very small and, as a result, training slows down.
3) High curvature can be a reason
A larger radius corresponds to lower curvature and vice-versa. Regions of high curvature, which are common in non-convex optimization, are difficult to traverse.
By using the SGD with Momentum optimizer we can overcome problems such as high curvature, small but consistent gradients, and noisy gradients.
### What is SGD with Momentum?
SGD with Momentum is an optimizer used to improve the performance of neural network training.
Let's take an example to understand the intuition behind the optimizer. Suppose we have a ball sliding from the top of a slope; as it rolls, its speed increases over time. Similarly, suppose we are at point A and want to reach point B, but we don't know in which direction to move. If we ask 4 points that have already reached B and all 4 point us in the same direction, our confidence is high and we move in that direction quickly. This accumulation of consistent past directions is the main concept behind SGD with Momentum.
<img src="https://cdn-images-1.medium.com/max/1000/1*zNbZqU_uDIV13c9ZCJOEXA.jpeg" alt="Non-convex graph" style="width:400px;height :250px;" />
### How does SGD with Momentum work?
First, we need the concept of the exponentially weighted moving average (EWMA), a technique for finding the trend in time-series data. The formula of the EWMA is:
<img src="https://cdn-images-1.medium.com/max/1000/1*O9Wcq-mbRgNOdRNTivSefw.png" alt="Non-convex graph" style="width:400px;height :100px;" />
In the formula, β represents the weight assigned to the past values of the gradient, with 0 < β < 1. If β is 0.5, then 1/(1 − 0.5) = 2, meaning the calculated average effectively covers the previous 2 readings.
The value of Vt depends on β: the higher the value of β, the more past data the average covers, and vice-versa. For example, take β = 0.98 and β = 0.9 for two different scenarios; computing 1/(1 − β) gives 50 and 10 respectively, so to calculate the average we effectively take the past 50 and 10 outcomes for the two cases.
Now, SGD with Momentum uses the same EWMA concept. We introduce a velocity term v which denotes the accumulated change in the gradient on the way to the global minimum. The change in the weights is given by the formula:
<img src="https://cdn-images-1.medium.com/max/1000/0*i_r3u7LACa6dQyXd" alt="Non-convex graph" style="width:400px;height :100px;" />
The β part of the V formula carries the confidence, or past velocity: to calculate Vt we need Vt−1, to calculate Vt−1 we need Vt−2, and so on. We are thus using the whole history of velocities to compute the momentum, and this is the part that provides acceleration to the formula.
<img src="https://cdn-images-1.medium.com/max/1000/1*L5lNKxAHLPYNc6-Zs4Vscw.png" alt="Non-convex graph" style="width:300px;height :100px;" />
Here we have to consider two cases:
1. If β = 0 then, as per the formula, the weight update works exactly like plain stochastic gradient descent. β is called the decay factor because it defines how quickly the past velocity fades.
2. If β = 1 then there is no decay; the update keeps oscillating in a dynamic equilibrium, which is not desired. So in practice we use values of β such as 0.9, 0.99, or 0.5.
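The velocity update can be sketched in a few lines of NumPy. The toy quadratic loss, learning rate, β, and step count below are all illustrative.

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.1, beta=0.9, steps=100):
    """Velocity is an exponentially weighted sum of past gradients:
    v_t = beta * v_{t-1} + grad;  w_t = w_{t-1} - lr * v_t.
    beta = 0 recovers plain SGD; beta close to 1 averages over ~1/(1-beta) steps."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)
        w = w - lr * v
    return w

# Toy quadratic bowl f(w) = 0.5 * w @ H @ w with high curvature in one direction.
H = np.diag([10.0, 1.0])
grad = lambda w: H @ w
w0 = np.array([1.0, 1.0])
w_mom = sgd_momentum(grad, w0.copy(), lr=0.02, beta=0.9)   # momentum
w_sgd = sgd_momentum(grad, w0.copy(), lr=0.02, beta=0.0)   # plain SGD
print(w_mom, w_sgd)
```

On this bowl, momentum ends up much closer to the minimum at the origin than plain SGD does in the same number of steps, exactly because the consistent past gradients in the shallow direction keep accumulating.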
### Advantages of SGD with Momentum :
1. Momentum converges faster than stochastic gradient descent, so training is faster than with SGD.
2. Local minima can be escaped and the global minimum reached, thanks to the momentum involved.
<img src="https://cdn-images-1.medium.com/max/1000/1*Nb39bHHUWGXqgisr2WcLGQ.gif" alt="Non-convex graph" style="width:400px;height :300px;" />
Here in the animation, purple is SGD with Momentum and light blue is SGD: SGD with Momentum reaches the global minimum whereas SGD gets stuck in a local minimum.
But there is a catch: the momentum itself can sometimes be a problem. Because of the high momentum, the optimizer keeps fluctuating around the global minimum after reaching it and takes some time to get stable there. That behavior costs time, which makes SGD with Momentum slower than some other optimizers out there, but still faster than plain SGD. |
Given the following machine learning model name: Fast R-CNN, provide a description of the model | **Fast R-CNN** is an object detection model that improves in its predecessor [R-CNN](https://paperswithcode.com/method/r-cnn) in a number of ways. Instead of extracting CNN features independently for each region of interest, Fast R-CNN aggregates them into a single forward pass over the image; i.e. regions of interest from the same image share computation and memory in the forward and backward passes. |
Given the following machine learning model name: RepVGG, provide a description of the model | **RepVGG** is a [VGG](https://paperswithcode.com/method/vgg)-style convolutional architecture. It has the following advantages:
- The model has a VGG-like plain (a.k.a. feed-forward) topology without any branches, i.e., every layer takes the output of its only preceding layer as input and feeds its output into its only following layer.
- The model's body uses only $3 \times 3$ conv and [ReLU](https://paperswithcode.com/method/relu).
- The concrete architecture (including the specific depth and layer widths) is instantiated without automatic search, manual refinement, compound scaling, or other heavy designs. |
Given the following machine learning model name: Stochastic Dueling Network, provide a description of the model | A **Stochastic Dueling Network**, or **SDN**, is an architecture for learning a value function $V$. The SDN learns both $V$ and $Q$ off-policy while maintaining consistency between the two estimates. At each time step it outputs a stochastic estimate of $Q$ and a deterministic estimate of $V$. |
Given the following machine learning model name: Feature Fusion Module v1, provide a description of the model | **Feature Fusion Module v1** is a feature fusion module from the [M2Det](https://paperswithcode.com/method/m2det) object detection model; feature fusion modules are crucial for constructing the final multi-level feature pyramid. They use [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) layers to compress the channels of the input features and use a concatenation operation to aggregate these feature maps. FFMv1 takes two feature maps of different scales in the backbone as input; it adopts one upsampling operation to rescale the deep features to the same scale before the concatenation operation. |
Given the following machine learning model name: PyTorch DDP, provide a description of the model | **PyTorch DDP** (Distributed Data Parallel) is a distributed data parallel implementation for PyTorch. To guarantee mathematical equivalence, all replicas start from the same initial values for model parameters and synchronize gradients to keep parameters consistent across training iterations. To minimize the intrusiveness, the implementation exposes the same forward API as the user model, allowing applications to seamlessly replace subsequent occurrences of a user model with the distributed data parallel model object with no additional code changes. Several techniques are integrated into the design to deliver high-performance training, including bucketing gradients, overlapping communication with computation, and skipping synchronization. |
Given the following machine learning model name: ConvLSTM, provide a description of the model | **ConvLSTM** is a type of recurrent neural network for spatio-temporal prediction that has convolutional structures in both the input-to-state and state-to-state transitions. The ConvLSTM determines the future state of a certain cell in the grid by the inputs and past states of its local neighbors. This can easily be achieved by using a [convolution](https://paperswithcode.com/method/convolution) operator in the state-to-state and input-to-state transitions (see Figure). The key equations of ConvLSTM are shown below, where $∗$ denotes the convolution operator and $\odot$ the Hadamard product:
$$ i\_{t} = \sigma\left(W\_{xi} ∗ X\_{t} + W\_{hi} ∗ \mathcal{H}\_{t−1} + W\_{ci} \odot \mathcal{C}\_{t−1} + b\_{i}\right) $$
$$ f\_{t} = \sigma\left(W\_{xf} ∗ X\_{t} + W\_{hf} ∗ \mathcal{H}\_{t−1} + W\_{cf} \odot \mathcal{C}\_{t−1} + b\_{f}\right) $$
$$ \mathcal{C}\_{t} = f\_{t} \odot \mathcal{C}\_{t−1} + i\_{t} \odot \text{tanh}\left(W\_{xc} ∗ X\_{t} + W\_{hc} ∗ \mathcal{H}\_{t−1} + b\_{c}\right) $$
$$ o\_{t} = \sigma\left(W\_{xo} ∗ X\_{t} + W\_{ho} ∗ \mathcal{H}\_{t−1} + W\_{co} \odot \mathcal{C}\_{t} + b\_{o}\right) $$
$$ \mathcal{H}\_{t} = o\_{t} \odot \text{tanh}\left(\mathcal{C}\_{t}\right) $$
If we view the states as the hidden representations of moving objects, a ConvLSTM with a larger transitional kernel should be able to capture faster motions while one with a smaller kernel can capture slower motions.
To ensure that the states have the same number of rows and same number of columns as the inputs, padding is needed before applying the convolution operation. Here, padding of the hidden states on the boundary points can be viewed as using the state of the outside world for calculation. Usually, before the first input comes, we initialize all the states of the [LSTM](https://paperswithcode.com/method/lstm) to zero which corresponds to "total ignorance" of the future. |
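The equations above can be sketched for a single input/hidden channel in NumPy. The kernels, sizes, and sequence below are fabricated, and cross-correlation stands in for the $∗$ operator as is conventional in deep learning code.

```python
import numpy as np

def conv_same(x, k):
    """'Same' zero-padded 2D cross-correlation, x: (H, W), k: (3, 3).
    Zero padding on the boundary plays the role of the outside world's state."""
    H, W = x.shape
    xp = np.pad(x, 1)
    return sum(xp[i:i+H, j:j+W] * k[i, j] for i in range(3) for j in range(3))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def convlstm_step(X, Hprev, Cprev, p):
    """One ConvLSTM step for a single input/hidden channel.
    p holds 3x3 kernels W_x*, W_h*, Hadamard weights W_c*, and biases b_*."""
    i = sigmoid(conv_same(X, p['Wxi']) + conv_same(Hprev, p['Whi']) + p['Wci'] * Cprev + p['bi'])
    f = sigmoid(conv_same(X, p['Wxf']) + conv_same(Hprev, p['Whf']) + p['Wcf'] * Cprev + p['bf'])
    C = f * Cprev + i * np.tanh(conv_same(X, p['Wxc']) + conv_same(Hprev, p['Whc']) + p['bc'])
    o = sigmoid(conv_same(X, p['Wxo']) + conv_same(Hprev, p['Who']) + p['Wco'] * C + p['bo'])
    return o * np.tanh(C), C

rng = np.random.default_rng(0)
p = {k: rng.standard_normal((3, 3)) * 0.1 for k in
     ['Wxi', 'Whi', 'Wxf', 'Whf', 'Wxc', 'Whc', 'Wxo', 'Who']}
p.update({k: rng.standard_normal((8, 8)) * 0.1 for k in ['Wci', 'Wcf', 'Wco']})
p.update({k: 0.0 for k in ['bi', 'bf', 'bc', 'bo']})
Hs, Cs = np.zeros((8, 8)), np.zeros((8, 8))   # zero init: "total ignorance"
for t in range(3):                            # run over a short input sequence
    Hs, Cs = convlstm_step(rng.standard_normal((8, 8)), Hs, Cs, p)
print(Hs.shape, Cs.shape)
```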
Given the following machine learning model name: Adabelief, provide a description of the model | |
Given the following machine learning model name: Label Smoothing, provide a description of the model | **Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\log{p}\left(y\mid{x}\right)$ directly can be harmful. Assume for a small constant $\epsilon$, the training set label $y$ is correct with probability $1-\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\frac{\epsilon}{k-1}$ and $1-\epsilon$ respectively.
Source: Deep Learning, Goodfellow et al
Image Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629) |
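The target replacement described above is a one-liner. A minimal NumPy sketch with fabricated labels:

```python
import numpy as np

def smooth_labels(y, k, eps=0.1):
    """Replace hard one-hot targets with 1 - eps for the true class and
    eps / (k - 1) for each of the other k - 1 classes."""
    targets = np.full((len(y), k), eps / (k - 1))
    targets[np.arange(len(y)), y] = 1 - eps
    return targets

t = smooth_labels(np.array([0, 2]), k=4, eps=0.1)
print(t)
```

Each row still sums to 1, so the smoothed targets remain valid distributions for the softmax cross-entropy loss.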
Given the following machine learning model name: Temporal Difference Network, provide a description of the model | **TDN**, or **Temporal Difference Network**, is an action recognition model that aims to capture multi-scale temporal information. To fully capture temporal information over the entire video, the TDN is established with a two-level difference modeling paradigm. Specifically, for local motion modeling, temporal difference over consecutive frames is used to supply 2D CNNs with a finer motion pattern, while for global motion modeling, temporal difference across segments is incorporated to capture long-range structure for motion feature excitation. |
Given the following machine learning model name: Global second-order pooling convolutional networks, provide a description of the model | A **GSoP** block has a squeeze module and an excitation module, and uses second-order pooling to model higher-order statistics while gathering global information.
In the squeeze module, a GSoP block firstly reduces the number of channels from $c$ to $c'$ ($c' < c$) using a $1 \times 1$ convolution, then computes a $c' \times c'$ covariance matrix for the different channels to obtain their correlation. Next, row-wise normalization is performed on the covariance matrix. Each $(i, j)$ in the normalized covariance matrix explicitly relates channel $i$ to channel $j$.
In the excitation module, a GSoP block performs row-wise convolution to maintain structural information and output a vector. Then a fully-connected layer and a sigmoid function are applied to get a $c$-dimensional attention vector. Finally, it multiplies the input features by the attention vector, as in an SE block. A GSoP block can be formulated as:
\begin{align}
s = F_\text{gsop}(X, \theta) & = \sigma (W \text{RC}(\text{Cov}(\text{Conv}(X))))
\end{align}
\begin{align}
Y & = s X
\end{align}
Here, $\text{Conv}(\cdot)$ reduces the number of channels,
$\text{Cov}(\cdot)$ computes the covariance matrix and
$\text{RC}(\cdot)$ means row-wise convolution. |
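A minimal NumPy sketch of the squeeze-and-excite flow above. The shapes and weights are illustrative, and the row-wise convolution is simplified to one weight vector per covariance row; this is not the reference implementation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gsop_block(X, Wr, Wrc, Wfc):
    """GSoP sketch on (C, H, W) features.
    Wr: (C', C) 1x1 conv reducing channels; Wrc: (C', C') simplified row-wise
    conv weights (one vector per covariance row); Wfc: (C, C') final FC."""
    Cr = Wr.shape[0]
    Z = np.einsum('rc,chw->rhw', Wr, X).reshape(Cr, -1)   # 1x1 conv (Conv)
    Zc = Z - Z.mean(axis=1, keepdims=True)
    cov = Zc @ Zc.T / Z.shape[1]                          # C' x C' covariance (Cov)
    cov = cov / (np.abs(cov).sum(axis=1, keepdims=True) + 1e-6)  # row-wise norm
    v = np.einsum('ij,ij->i', Wrc, cov)                   # row-wise conv (RC)
    s = sigmoid(Wfc @ v)                                  # c-dimensional attention
    return X * s[:, None, None]                           # rescale, as in SE

rng = np.random.default_rng(0)
C, Cr, H, W = 8, 4, 6, 6
X_in = rng.standard_normal((C, H, W))
Y = gsop_block(X_in,
               rng.standard_normal((Cr, C)) * 0.5,
               rng.standard_normal((Cr, Cr)),
               rng.standard_normal((C, Cr)))
print(Y.shape)  # (8, 6, 6)
```

Since $\sigma$ keeps the attention vector in $(0, 1)$, the block can only rescale channels, never amplify them, mirroring the SE-style excitation.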
Given the following machine learning model name: Domain Adaptive Ensemble Learning, provide a description of the model | **Domain Adaptive Ensemble Learning**, or **DAEL**, is an architecture for domain adaptation. The model is composed of a CNN feature extractor shared across domains and multiple classifier heads each trained to specialize in a particular source domain. Each such classifier is an expert to its own domain and a non-expert to others. DAEL aims to learn these experts collaboratively so that when forming an ensemble, they can leverage complementary information from each other to be more effective for an unseen target domain. To this end, each source domain is used in turn as a pseudo-target-domain with its own expert providing supervisory signal to the ensemble of non-experts learned from the other sources. For unlabeled target data under the UDA setting where real expert does not exist, DAEL uses pseudo-label to supervise the ensemble learning. |
Given the following machine learning model name: uNetXST, provide a description of the model | **uNetXST** is a uNet neural network architecture which takes multiple (X) tensors as input and contains [Spatial Transformer](https://paperswithcode.com/method/spatial-transformer) units (ST). |
Given the following machine learning model name: Prime Dilated Convolution, provide a description of the model | |
Given the following machine learning model name: Adaptive Hybrid Activation Function, provide a description of the model | A trainable activation function that is a sigmoid-based generalization of ReLU, Swish, and SiLU. |
Given the following machine learning model name: CP with N3 Regularizer and Relation Prediction, provide a description of the model | CP with N3 Regularizer and Relation Prediction |
Given the following machine learning model name: In-Place Activated Batch Normalization, provide a description of the model | **In-Place Activated Batch Normalization**, or **InPlace-ABN**, substitutes the conventionally used succession of [BatchNorm](https://paperswithcode.com/method/batch-normalization) + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. It approximately halves the memory requirements during training of modern deep learning models. |
Given the following machine learning model name: MetaFormer, provide a description of the model | MetaFormer is a general architecture abstracted from Transformers by not specifying the token mixer. |
Given the following machine learning model name: Adaptive NMS, provide a description of the model | **Adaptive Non-Maximum Suppression** is a non-maximum suppression algorithm that applies a dynamic suppression threshold to an instance according to the target density. The motivation is to find an NMS algorithm that works well for pedestrian detection in a crowd. Intuitively, a high NMS threshold keeps more crowded instances while a low NMS threshold wipes out more false positives. The adaptive-NMS thus applies a dynamic suppression strategy, where the threshold rises as instances gather and occlude each other and decays when instances appear separately. To this end, an auxiliary and learnable sub-network is designed to predict the adaptive NMS threshold for each instance. |
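The dynamic-threshold idea above can be sketched as greedy NMS with a per-instance threshold. The boxes, scores, and density values below are fabricated; in the actual method the densities come from the learned sub-network.

```python
import numpy as np

def iou(box, boxes):
    """box: (4,), boxes: (N, 4) in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def adaptive_nms(boxes, scores, densities, base_thresh=0.5):
    """Greedy NMS where the suppression threshold for box M is
    max(base_thresh, density(M)): crowded instances get a higher threshold,
    so nearby true positives survive; isolated ones fall back to base_thresh."""
    order = np.argsort(-scores)
    keep = []
    while order.size:
        m, rest = order[0], order[1:]
        keep.append(m)
        thresh = max(base_thresh, densities[m])   # dynamic per-instance threshold
        order = rest[iou(boxes[m], boxes[rest]) <= thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [30, 30, 40, 40]], float)
scores = np.array([0.9, 0.85, 0.8])
kept_crowd = adaptive_nms(boxes, scores, densities=np.array([0.8, 0.8, 0.0]))
kept_plain = adaptive_nms(boxes, scores, densities=np.zeros(3))
print(kept_crowd, kept_plain)
```

Here the first two boxes overlap at IoU ≈ 0.68: with a high predicted density the second box survives, while with zero density the fixed base threshold suppresses it.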
Given the following machine learning model name: FRILL, provide a description of the model | **FRILL** is a non-semantic speech embedding model trained via knowledge distillation that is fast enough to be run in real-time on a mobile device. The fastest model runs at 0.9 ms, which is 300x faster than TRILL and 25x faster than TRILL-distilled. |
Given the following machine learning model name: InterBERT, provide a description of the model | InterBERT aims to model interaction between information flows pertaining to different modalities. The architecture builds multi-modal interaction while preserving the independence of single-modal representations. InterBERT is built with an image embedding layer, a text embedding layer, a single-stream interaction module, and a two-stream extraction module. The model is pre-trained with three tasks: 1) masked segment modeling, 2) masked region modeling, and 3) image-text matching. |
Given the following machine learning model name: Magnification Prior Contrastive Similarity, provide a description of the model | A self-supervised pre-training method that learns efficient representations from unlabeled histopathology medical images by utilizing magnification factors. |
Given the following machine learning model name: First Integer Neighbor Clustering Hierarchy (FINCH), provide a description of the model | FINCH is a parameter-free, fast and scalable clustering algorithm that stands out for its speed and clustering quality. |
Given the following machine learning model name: Movement Pruning, provide a description of the model | **Movement Pruning** is a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. Magnitude pruning can be seen as utilizing zeroth-order information (absolute value) of the running model. In contrast, movement pruning methods are where importance is derived from first-order information. Intuitively, instead of selecting weights that are far from zero, we retain connections that are moving away from zero during the training process. |
Given the following machine learning model name: Slanted Triangular Learning Rates, provide a description of the model | **Slanted Triangular Learning Rates (STLR)** is a learning rate schedule which first linearly increases the learning rate and then linearly decays it, as shown in the figure to the right. It is a modification of Triangular Learning Rates, with a short increase and a long decay period. |
Given the following machine learning model name: Network Dissection, provide a description of the model | **Network Dissection** is an interpretability method for [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks) that evaluates the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human interpretable labels across a range of objects, parts, scenes, textures, materials, and colors.
The measurement of interpretability proceeds in three steps:
- Identify a broad set of human-labeled visual concepts.
- Gather the response of the hidden variables to known concepts.
- Quantify the alignment of hidden variable-concept pairs. |
Given the following machine learning model name: Dense Prediction Transformer, provide a description of the model | **Dense Prediction Transformers** (DPT) are a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) for dense prediction tasks.
The input image is transformed into tokens (orange) either by extracting non-overlapping patches followed by a linear projection of their flattened representation (DPT-Base and DPT-Large) or by applying a [ResNet](https://paperswithcode.com/method/resnet)-50 feature extractor (DPT-Hybrid). The image embedding is augmented with a positional embedding and a patch-independent readout token (red) is added. The tokens are passed through multiple [transformer](https://paperswithcode.com/method/transformer) stages. The tokens are reassembled from different stages into an image-like representation at multiple resolutions (green). Fusion modules (purple) progressively fuse and upsample the representations to generate a fine-grained prediction. |
Given the following machine learning model name: Adversarial Latent Autoencoder, provide a description of the model | **ALAE**, or **Adversarial Latent Autoencoder**, is a type of autoencoder that attempts to overcome some of the limitations of [generative adversarial networks](https://paperswithcode.com/paper/generative-adversarial-networks). The architecture allows the latent distribution to be learned from data to address entanglement (A). The output data distribution is learned with an adversarial strategy (B). Thus, we retain the generative properties of GANs, as well as the ability to build on the recent advances in this area. For instance, we can include independent sources of stochasticity, which have proven essential for generating image details, or can leverage recent improvements on GAN loss functions, regularization, and hyperparameter tuning. Finally, to implement (A) and (B), AE reciprocity is imposed in the latent space (C). Therefore, we can avoid using reconstruction losses based on a simple $\ell\_{2}$ norm that operates in data space, where they are often suboptimal, as for the image space. Since it works on the latent space, rather than autoencoding the data space, the approach is named Adversarial Latent Autoencoder (ALAE). |
Given the following machine learning model name: Associative LSTM, provide a description of the model | An **Associative LSTM** combines an [LSTM](https://paperswithcode.com/method/lstm) with ideas from Holographic Reduced Representations (HRRs) to enable key-value storage of data. HRRs use a “binding” operator to implement key-value
binding between two vectors (the key and its associated content). They natively implement associative arrays; as a byproduct, they can also easily implement stacks, queues, or lists. |
Given the following machine learning model name: Feature Selection, provide a description of the model | Feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. |
Given the following machine learning model name: FreeAnchor, provide a description of the model | **FreeAnchor** is an anchor supervision method for object detection. Many CNN-based object detectors assign anchors for ground-truth objects under the restriction of object-anchor Intersection-over-Union (IoU). In contrast, FreeAnchor is a learning-to-match approach that breaks the IoU restriction, allowing objects to match anchors in a flexible manner. It updates hand-crafted anchor assignment to free anchor matching by formulating detector training as a maximum likelihood estimation (MLE) procedure. FreeAnchor targets learning features which best explain a class of objects in terms of both classification and localization. |
Given the following machine learning model name: Gaussian Affinity, provide a description of the model | **Gaussian Affinity** is a type of affinity or self-similarity function between two points $\mathbf{x}\_{i}$ and $\mathbf{x}\_{j}$ that uses a Gaussian function:
$$ f\left(\mathbf{x}\_{i}, \mathbf{x}\_{j}\right) = e^{\mathbf{x}^{T}\_{i}\mathbf{x}\_{j}} $$
Here $\mathbf{x}^{T}\_{i}\mathbf{x}\_{j}$ is the dot-product similarity. |
Given the following machine learning model name: CrossTransformers, provide a description of the model | CrossTransformers is a Transformer-based neural network architecture which can take a small number of labeled images and an unlabeled query, find coarse spatial correspondence between the query and the labeled images, and then infer class membership by computing distances between spatially-corresponding features. |
Given the following machine learning model name: Deep Voice 3, provide a description of the model | **Deep Voice 3 (DV3)** is a fully-convolutional attention-based neural text-to-speech system. The Deep Voice 3 architecture consists of three components:
- Encoder: A fully-convolutional encoder, which converts textual features to an internal
learned representation.
- Decoder: A fully-convolutional causal decoder, which decodes the learned representation
with a multi-hop convolutional attention mechanism into a low-dimensional audio representation (mel-scale spectrograms) in an autoregressive manner.
- Converter: A fully-convolutional post-processing network, which predicts final vocoder
parameters (depending on the vocoder choice) from the decoder hidden states. Unlike the
decoder, the converter is non-causal and can thus depend on future context information.
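The causal/non-causal distinction between the decoder and the converter can be illustrated with a minimal causal 1-D convolution sketch (plain NumPy, purely illustrative; this is not the Deep Voice 3 implementation):

```python
import numpy as np

def causal_conv1d(x, kernel):
    # Left-pad with zeros so output[t] depends only on x[t-k+1..t],
    # i.e. present and past samples, never future ones.
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])
```

A non-causal layer (as in the converter) would instead pad on both sides, letting each output position see future context.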
The overall objective function to be optimized is a linear combination of the losses from the decoder and the converter. The authors separate decoder and converter and apply multi-task training, because it makes attention learning easier in practice. To be specific, the loss for mel-spectrogram prediction guides training of the attention mechanism, because the attention is trained with the gradients from mel-spectrogram prediction besides vocoder parameter prediction. |
Given the following machine learning model name: Randomized Adversarial Solarization, provide a description of the model | An attack on image classifiers that applies image solarization via greedy random search. |
Given the following machine learning model name: High-resolution input, provide a description of the model | |
Given the following machine learning model name: Mesh-TensorFlow, provide a description of the model | **Mesh-TensorFlow** is a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow, the user can specify any tensor dimensions to be split across any dimensions of a multi-dimensional mesh of processors. A MeshTensorFlow graph compiles into a SPMD program consisting of parallel operations coupled with collective communication primitives such as Allreduce. |
Given the following machine learning model name: StereoLayers, provide a description of the model | |
Given the following machine learning model name: ManifoldPlus, provide a description of the model | **ManifoldPlus** is a method for robust and scalable conversion of triangle soups to watertight manifolds. It extracts exterior faces between occupied voxels and empty voxels, and uses a projection based optimization method to accurately recover a watertight manifold that resembles the reference mesh. It does not rely on face normals of the input triangle soups and can accurately recover zero-volume structures. For scalability, it employs an adaptive Gauss-Seidel method for shape optimization, in which each step is an easy-to-solve convex problem. |
Given the following machine learning model name: Hard Sigmoid, provide a description of the model | The **Hard Sigmoid** is an activation function used for neural networks of the form:
$$f\left(x\right) = \max\left(0, \min\left(1,\frac{\left(x+1\right)}{2}\right)\right)$$
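A minimal NumPy sketch of this piecewise-linear function (the function name is illustrative):

```python
import numpy as np

def hard_sigmoid(x):
    # max(0, min(1, (x + 1) / 2)): linear between -1 and 1, clipped outside.
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)
```

Compared with the true sigmoid, it avoids the exponential, which makes it cheap on hardware with limited support for transcendental functions.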
Image Source: [Rinat Maksutov](https://towardsdatascience.com/deep-study-of-a-not-very-deep-neural-network-part-2-activation-functions-fd9bd8d406fc) |
Given the following machine learning model name: S-shaped ReLU, provide a description of the model | The **S-shaped Rectified Linear Unit**, or **SReLU**, is an activation function for neural networks. It learns both convex and non-convex functions, imitating the multiple function forms given by the two fundamental laws, namely the Weber-Fechner law and the Stevens law, in psychophysics and neural sciences. Specifically, SReLU consists of three piecewise linear functions, which are formulated by four learnable parameters.
The SReLU is defined as a mapping:
$$ f\left(x\right) = t\_{i}^{r} + a^{r}\_{i}\left(x\_{i}-t^{r}\_{i}\right) \text{ if } x\_{i} \geq t^{r}\_{i} $$
$$ f\left(x\right) = x\_{i} \text{ if } t^{r}\_{i} > x > t\_{i}^{l}$$
$$ f\left(x\right) = t\_{i}^{l} + a^{l}\_{i}\left(x\_{i}-t^{l}\_{i}\right) \text{ if } x\_{i} \leq t^{l}\_{i} $$
where $t^{l}\_{i}$, $t^{r}\_{i}$, $a^{l}\_{i}$ and $a^{r}\_{i}$ are learnable parameters, and the subscript $i$ indicates that the SReLU can differ across channels. The parameter $a^{r}\_{i}$ represents the slope of the right line for inputs above the threshold $t^{r}\_{i}$. $t^{r}\_{i}$ and $t^{l}\_{i}$ are thresholds in the positive and negative directions respectively.
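Under the definition above, a single-channel SReLU can be sketched as follows (the parameter values used here are illustrative, not learned):

```python
import numpy as np

def srelu(x, t_l, a_l, t_r, a_r):
    # Three-piece linear function: slope a_r above t_r, identity in the
    # middle, slope a_l below t_l (four learnable parameters per channel).
    return np.where(x >= t_r, t_r + a_r * (x - t_r),
           np.where(x <= t_l, t_l + a_l * (x - t_l), x))
```

With per-channel parameters, the same expression is broadcast over the channel dimension.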
Source: [Activation Functions](https://arxiv.org/pdf/1811.03378.pdf) |
Given the following machine learning model name: Inception-v4, provide a description of the model | **Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3). |
Given the following machine learning model name: Talking-Heads Attention, provide a description of the model | **Talking-Heads Attention** is a variation on [multi-head attention](https://paperswithcode.com/method/multi-head-attention) which includes linear projections across the attention-heads dimension, immediately before and after the [softmax](https://paperswithcode.com/method/softmax) operation. In [multi-head attention](https://paperswithcode.com/method/multi-head-attention), the different attention heads perform separate computations, which are then summed at the end. Talking-Heads Attention breaks that separation. Two additional learned linear projections are inserted, $P\_{l}$ and $P\_{w}$, which transform the attention-logits and the attention weights respectively, moving information across attention heads. Instead of one "heads" dimension $h$ across the whole computation, we now have three separate heads dimensions: $h\_{k}$, $h$, and $h\_{v}$, which can optionally differ in size (number of "heads"). $h\_{k}$ refers to the number of attention heads for the keys and the queries. $h$ refers to the number of attention heads for the logits and the weights, and $h\_{v}$ refers to the number of attention heads for the values. |
Given the following machine learning model name: Probabilistically Masked Language Model, provide a description of the model | **Probabilistically Masked Language Model**, or **PMLM**, is a type of language model that utilizes a probabilistic masking scheme, aiming to bridge the gap between masked and autoregressive language models. The basic idea behind the connection of two categories of models is similar to MADE by Germain et al (2015). PMLM is a masked language model with a probabilistic masking scheme, which defines the way sequences are masked by following a probabilistic distribution. The authors employ a simple uniform distribution of the masking ratio and name the model as u-PMLM. |
Given the following machine learning model name: Bidirectional LSTM, provide a description of the model | A **Bidirectional LSTM**, or **biLSTM**, is a sequence processing model that consists of two LSTMs: one taking the input in a forward direction, and the other in a backwards direction. BiLSTMs effectively increase the amount of information available to the network, improving the context available to the algorithm (e.g. knowing what words immediately follow *and* precede a word in a sentence).
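The bidirectional wiring can be sketched with a simplified tanh recurrent cell standing in for the LSTM cell (illustrative NumPy only, not a full LSTM implementation; all names are made up):

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    # Simple tanh recurrent cell; stands in for an LSTM cell in this sketch.
    h = np.zeros(Wh.shape[0])
    outs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        outs.append(h)
    return outs

def bidirectional(xs, params_fwd, params_bwd):
    # Forward pass reads left-to-right, backward pass right-to-left;
    # the two hidden states for each timestep are concatenated.
    fwd = rnn_pass(xs, *params_fwd)
    bwd = rnn_pass(xs[::-1], *params_bwd)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```

Each output thus carries context from both the preceding and the following timesteps, which is why the output dimension doubles.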
Image Source: Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks, Cornegruta et al |
Given the following machine learning model name: Grouped Convolution, provide a description of the model | A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it. |
Given the following machine learning model name: HITNet, provide a description of the model | **HITNet** is a framework for neural network based depth estimation which overcomes the computational disadvantages of operating on a 3D volume by integrating image warping, spatial propagation and a fast high resolution initialization step into the network architecture, while keeping the flexibility of a learned representation by allowing features to flow through the network. The main idea of the approach is to represent image tiles as planar patches which have a learned compact feature descriptor attached to them. The basic principle of the approach is to fuse information from the high resolution initialization and the current hypotheses using spatial propagation. The propagation is implemented via a [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks) module that updates the estimate of the planar patches and their attached features.
In order for the network to iteratively increase the accuracy of the disparity predictions, the network is provided a local cost volume in a narrow band (±1 disparity) around the planar patch using in-network image warping allowing the network to minimize image dissimilarity. To reconstruct fine details while also capturing large texture-less areas we start at low resolution and hierarchically upsample predictions to higher resolution. A critical feature of the architecture is that at each resolution, matches from the initialization module are provided to facilitate recovery of thin structures that cannot be represented at low resolution. |
Given the following machine learning model name: Discrete Cosine Transform, provide a description of the model | **Discrete Cosine Transform (DCT)** is an orthogonal transformation method that decomposes an
image to its spatial frequency spectrum. It expresses a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. It is widely used in compression tasks, e.g. image compression, where high-frequency components can be discarded. It is a Fourier-related transform, similar to the discrete Fourier transform (DFT), but using only real numbers.
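The DCT-II variant (one common convention, here unnormalized) can be written directly from its definition; a NumPy sketch:

```python
import numpy as np

def dct2(x):
    # DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])
```

For a constant signal, all energy lands in the DC coefficient, which is what makes the DCT attractive for compressing smooth image regions.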
Image Credit: [Wikipedia](https://en.wikipedia.org/wiki/Discrete_cosine_transform#/media/File:Example_dft_dct.svg) |
Given the following machine learning model name: I-BERT, provide a description of the model | **I-BERT** is a quantized version of [BERT](https://paperswithcode.com/method/bert) that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer only approximation methods for nonlinear operations, e.g., [GELU](https://paperswithcode.com/method/gelu), [Softmax](https://paperswithcode.com/method/softmax), and [Layer Normalization](https://paperswithcode.com/method/layer-normalization), it performs an end-to-end integer-only [BERT](https://paperswithcode.com/method/bert) inference without any floating point calculation.
In particular, GELU and Softmax are approximated with lightweight second-order polynomials, which can be evaluated with integer-only arithmetic. For LayerNorm, integer-only computation is performed by leveraging a known algorithm for integer calculation of
square root. |
Given the following machine learning model name: FuseFormer Block, provide a description of the model | A **FuseFormer block** is used in the [FuseFormer](https://paperswithcode.com/method/fuseformer) model for video inpainting. It is the same as the standard [Transformer](https://paperswithcode.com/method/transformer) block except that the feed-forward network is replaced with a Fusion Feed Forward Network (F3N). F3N brings no extra parameters into the standard feed-forward net; the difference is that F3N inserts a soft split and a soft composite operation between the two layers of MLPs. |
Given the following machine learning model name: ZeRO-Infinity, provide a description of the model | **ZeRO-Infinity** is a sharded data parallel system that extends [ZeRO](https://paperswithcode.com/method/zero) with new innovations in heterogeneous memory access called the infinity offload engine. This allows ZeRO-Infinity to support massive model sizes on limited GPU resources by exploiting CPU and NVMe memory simultaneously. In addition, ZeRO-Infinity also introduces a novel GPU memory optimization technique called memory-centric tiling to support extremely large individual layers that would otherwise not fit in GPU memory even one layer at a time. |
Given the following machine learning model name: PP-YOLO, provide a description of the model | **PP-YOLO** is an object detector based on [YOLOv3](https://paperswithcode.com/method/yolov3). It mainly combines various existing tricks that barely increase the number of model parameters and FLOPs, aiming to improve detector accuracy as much as possible while keeping the speed almost unchanged. Some of these changes include:
- Changing the [DarkNet-53](https://paperswithcode.com/method/darknet-53) backbone with ResNet50-vd. Some of the convolutional layers in ResNet50-vd are also replaced with [deformable convolutional layers](https://paperswithcode.com/method/deformable-convolution).
- A larger batch size is used - changing from 64 to 192.
- An exponential moving average is used for the parameters.
- [DropBlock](https://paperswithcode.com/method/dropblock) is applied to the [FPN](https://paperswithcode.com/method/fpn).
- An IoU loss is used.
- An IoU prediction branch is added to measure the accuracy of localization.
- [Grid Sensitive](https://paperswithcode.com/method/grid-sensitive) is used, similar to [YOLOv4](https://paperswithcode.com/method/yolov4).
- [Matrix NMS](https://paperswithcode.com/method/matrix-nms) is used.
- [CoordConv](https://paperswithcode.com/method/coordconv) is used for the [FPN](https://paperswithcode.com/method/fpn), replacing the 1x1 convolution layer, and also the first convolution layer in the detection head.
- [Spatial Pyramid Pooling](https://paperswithcode.com/method/spatial-pyramid-pooling) is used for the top feature map. |
Given the following machine learning model name: Neural adjoint method, provide a description of the model | The NA method can be divided into two steps: (i) training a neural network approximation of $f$, and (ii) inference of $\hat{x}$. Step (i) is conventional and involves training a generic neural network on a dataset of input/output pairs from the simulator, denoted $D$, resulting in $\hat{f}$, an approximation of the forward model (illustrated in the left inset of Fig 1). In step (ii), the goal is to use $\partial \hat{f}/\partial x$ to gradually adjust $x$ so that a desired output of the forward model, $y$, is achieved. This is similar to many classical inverse modeling approaches, such as the popular Adjoint method [8, 9]. For many practical inverse problems, however, obtaining $\partial f/\partial x$ requires significant expertise and/or effort, making these approaches challenging. Crucially, $\hat{f}$ from step (i) provides a closed-form differentiable expression for the simulator, from which it is trivial to compute $\partial \hat{f}/\partial x$; furthermore, modern deep learning software packages can be used to efficiently estimate gradients, given a loss function $L$. More formally, let $y$ be the target output and $\hat{x}\_{i}$ the current estimate of the solution, where $i$ indexes each solution obtained in an iterative gradient-based estimation procedure; $\hat{x}\_{i+1}$ is then computed by a gradient step. |
Given the following machine learning model name: XGPT, provide a description of the model | XGPT is a method of cross-modal generative pre-training for image captioning designed to pre-train text-to-image caption generators through three novel generation tasks, including image-conditioned masked language modeling (IMLM), image-conditioned denoising autoencoding (IDA), and text-conditioned image feature generation (TIGF). The pre-trained XGPT can be fine-tuned without any task-specific architecture modifications and build strong image captioning models. |
Given the following machine learning model name: Adversarial Model Perturbation, provide a description of the model | **Adversarial Model Perturbation (AMP)** is based on the understanding that flat local minima of the empirical risk cause the model to generalize better. AMP improves generalization by minimizing the **AMP loss**, which is obtained from the empirical risk by applying the **worst** norm-bounded perturbation on each point in the parameter space. |
Given the following machine learning model name: UCNet, provide a description of the model | **UCNet** is a probabilistic framework for RGB-D Saliency Detection that employs uncertainty by learning from the data labelling process. It utilizes conditional variational autoencoders to model human annotation uncertainty and generate multiple saliency maps for each input image by sampling in the latent space. |
Given the following machine learning model name: Depthwise Fire Module, provide a description of the model | A **Depthwise Fire Module** is a modification of a [Fire Module](https://paperswithcode.com/method/fire-module) with depthwise separable convolutions to improve the inference time performance. It is used in the [CornerNet](https://paperswithcode.com/method/cornernet)-Lite architecture for object detection. |
Given the following machine learning model name: Deep-CAPTCHA, provide a description of the model | |
Given the following machine learning model name: AugMix, provide a description of the model | AugMix mixes augmented images through linear interpolations. It is thus like [Mixup](https://paperswithcode.com/method/mixup), but mixes augmented versions of the same image instead. |
Given the following machine learning model name: Shuffle Transformer, provide a description of the model | The **Shuffle Transformer Block** consists of the Shuffle Multi-Head Self-Attention module (ShuffleMHSA), the Neighbor-Window Connection module (NWC), and the MLP module. To introduce cross-window connections while maintaining the efficient computation of non-overlapping windows, a strategy is proposed which alternates between WMSA and Shuffle-WMSA in consecutive Shuffle Transformer blocks. The first window-based transformer block uses the regular window partition strategy and the second uses window-based self-attention with spatial shuffle. Besides, the Neighbor-Window Connection module (NWC) is added into each block to enhance connections among neighborhood windows. Thus the proposed Shuffle Transformer block builds rich cross-window connections and augments the representation. Finally, consecutive Shuffle Transformer blocks are computed as:
$$ x^{l}=\mathbf{W M S A}\left(\mathbf{B N}\left(z^{l-1}\right)\right)+z^{l-1} $$
$$ y^{l}=\mathbf{N W C}\left(x^{l}\right)+x^{l} $$
$$ z^{l}=\mathbf{M L P}\left(\mathbf{B N}\left(y^{l}\right)\right)+y^{l} $$
$$ x^{l+1}=\mathbf{S h u f f l e - W M S A}\left(\mathbf{B N}\left(z^{l}\right)\right)+z^{l} $$
$$ y^{l+1}=\mathbf{N W C}\left(x^{l+1}\right)+x^{l+1} $$
$$ z^{l+1}=\mathbf{M L P}\left(\mathbf{B N}\left(y^{l+1}\right)\right)+y^{l+1} $$
where $x^l$, $y^l$ and $z^l$ denote the output features of the (Shuffle-)WMSA module, the Neighbor-Window Connection module and the MLP module for block $l$, respectively; WMSA and Shuffle-WMSA denote
window-based multi-head self-attention without/with spatial shuffle, respectively. |
Given the following machine learning model name: MuVER, provide a description of the model | **Multi-View Entity Representations**, or **MuVER**, is an approach for entity retrieval that constructs multi-view representations for entity descriptions and approximates the optimal view for mentions via a heuristic searching method. It matches a mention to the appropriate entity by comparing it with entity descriptions. Motivated by the fact that mentions with different contexts correspond to different parts in descriptions, multi-view representations are constructed for each description. Specifically, we segment a description into several sentences. We refer to each sentence as a view $v$, which contains partial information, to form a view set $\mathcal{V}$ of the entity $e$. The Figure illustrates an example that constructs a view set $\mathcal{V}$ for “Kobe Bryant”. |
Given the following machine learning model name: Aging Evolution, provide a description of the model | **Aging Evolution**, or **Regularized Evolution**, is an evolutionary algorithm for [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). Whereas in tournament selection, the best architectures are kept, in aging evolution we associate each genotype with an age, and bias the tournament selection to choose
the younger genotypes. In the context of architecture search, aging evolution allows us to explore the search space more, instead of zooming in on good models too early, as non-aging evolution would. |
Given the following machine learning model name: Meta Pseudo Labels, provide a description of the model | **Meta Pseudo Labels** is a semi-supervised learning method that uses a teacher network to generate pseudo labels on unlabeled data to teach a student network. The teacher receives feedback from the student to inform the teacher to generate better pseudo labels. This feedback signal is used as a reward to train the teacher throughout the course of the student’s learning. |
Given the following machine learning model name: Supervised Contrastive Loss, provide a description of the model | **Supervised Contrastive Loss** is an alternative loss function to cross entropy that the authors argue can leverage label information more effectively. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes.
$$
\mathcal{L}^{sup}=\sum_{i=1}^{2N}\mathcal{L}_i^{sup}
$$
$$
\mathcal{L}\_i^{sup}=\frac{-1}{2N\_{\boldsymbol{\tilde{y}}\_i}-1}\sum\_{j=1}^{2N}\mathbf{1}\_{i\neq j}\cdot\mathbf{1}\_{\boldsymbol{\tilde{y}}\_i=\boldsymbol{\tilde{y}}_j}\cdot\log{\frac{\exp{\left(\boldsymbol{z}\_i\cdot\boldsymbol{z}\_j/\tau\right)}}{\sum\_{k=1}^{2N}\mathbf{1}\_{i\neq k}\cdot\exp{\left(\boldsymbol{z}\_i\cdot\boldsymbol{z}\_k/\tau\right)}}}
$$
where $N_{\boldsymbol{\tilde{y}}_i}$ is the total number of images in the minibatch that have the same label, $\boldsymbol{\tilde{y}}_i$, as the anchor, $i$. This loss has important properties well suited for supervised learning: (a) generalization to an arbitrary number of positives, (b) contrastive power increases with more negatives. |
Given the following machine learning model name: Adaptive Bins, provide a description of the model | **AdaBins** is a monocular depth estimation approach that divides the predicted depth range into bins whose centers are estimated adaptively per image by a transformer-based block; the final depth is predicted as a linear combination of the bin centers. |
Given the following machine learning model name: Routing Attention, provide a description of the model | **Routing Attention** is an attention pattern proposed as part of the [Routing Transformer](https://paperswithcode.com/method/routing-transformer) architecture. Each attention module considers a clustering of the space: the current timestep only attends to context belonging to the same cluster. In other words, the current time-step query is routed to a limited number of context elements through its cluster assignment. This can be contrasted with [strided](https://paperswithcode.com/method/strided-attention) attention patterns and those proposed with the [Sparse Transformer](https://paperswithcode.com/method/sparse-transformer).
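A toy dense sketch of the routing idea (fixed centroids instead of the online k-means used by the Routing Transformer, no causal masking; all names are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def routed_attention(q, k, v, centroids):
    # Route each query/key to its nearest centroid; a query attends
    # only to keys assigned to the same cluster.
    q_cl = np.argmax(q @ centroids.T, axis=-1)
    k_cl = np.argmax(k @ centroids.T, axis=-1)
    out = np.zeros_like(v, dtype=float)
    for i in range(len(q)):
        idx = np.where(k_cl == q_cl[i])[0]
        if len(idx) == 0:
            continue
        w = softmax(q[i] @ k[idx].T / np.sqrt(q.shape[-1]))
        out[i] = w @ v[idx]
    return out
```

Restricting each query to one cluster of keys is what reduces the cost below full quadratic attention.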
In the image to the right, the rows represent the outputs while the columns represent the inputs. The different colors represent cluster memberships for the output token. |
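The routing idea can be sketched in a few lines of numpy. Here the cluster centroids are passed in for brevity, whereas the Routing Transformer learns them with online k-means over the shared query/key space; the function name and the absence of a causal mask are simplifications:

```python
import numpy as np

def routing_attention(q, k, v, centroids):
    """Content-based sparse attention sketch: each query attends only to
    keys whose nearest centroid matches its own (its cluster assignment)."""
    def assign(x):
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)
    qa, ka = assign(q), assign(k)
    out = np.zeros_like(v, dtype=float)
    for i in range(len(q)):
        mask = ka == qa[i]                 # same-cluster context only
        if not mask.any():
            continue
        scores = q[i] @ k[mask].T / np.sqrt(q.shape[1])
        w = np.exp(scores - scores.max())  # softmax within the cluster
        out[i] = (w / w.sum()) @ v[mask]
    return out
```

Because each query only mixes the values of its own cluster, the output for a token in one cluster is unaffected by tokens routed elsewhere.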
Given the following machine learning model name: Stochastically Scaling Features and Gradients Regularization, provide a description of the model | Please enter a description about the method here |
Given the following machine learning model name: PIoU Loss, provide a description of the model | **PIoU Loss** is a loss function for oriented object detection which is formulated to exploit both the angle and IoU for accurate oriented bounding box regression. The PIoU loss is derived from IoU metric with a pixel-wise form. |
Given the following machine learning model name: Self-Attention GAN, provide a description of the model | The **Self-Attention Generative Adversarial Network**, or **SAGAN**, allows for attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. |
Given the following machine learning model name: Inverted Bottleneck BERT, provide a description of the model | **IB-BERT**, or **Inverted Bottleneck BERT**, is a [BERT](https://paperswithcode.com/method/bert) variant that uses an [inverted bottleneck](https://paperswithcode.com/method/inverted-residual-block) structure. It is used as a teacher network to train the [MobileBERT](https://paperswithcode.com/method/mobilebert) models. |
Given the following machine learning model name: Depthwise Dilated Separable Convolution, provide a description of the model | A **Depthwise Dilated Separable Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) that combines [depthwise separability](https://paperswithcode.com/method/depthwise-separable-convolution) with the use of [dilated convolutions](https://paperswithcode.com/method/dilated-convolution). |
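The combination can be sketched directly in numpy: a per-channel (depthwise) convolution whose taps are spaced by the dilation rate, followed by a 1×1 pointwise convolution that mixes channels. Function and argument names are illustrative:

```python
import numpy as np

def depthwise_dilated_separable_conv(x, dw, pw, dilation=2):
    """Sketch of a depthwise dilated separable convolution.

    x:  (C, H, W) input feature map
    dw: (C, k, k) one dilated kernel per input channel (depthwise step)
    pw: (C_out, C) 1x1 pointwise kernels mixing channels
    """
    C, H, W = x.shape
    k = dw.shape[1]
    pad = dilation * (k - 1) // 2          # "same" padding for the dilated kernel
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    depthwise = np.zeros_like(x, dtype=float)
    for i in range(k):                     # dilated taps skip `dilation - 1` pixels
        for j in range(k):
            depthwise += dw[:, i, j, None, None] * \
                xp[:, i * dilation:i * dilation + H, j * dilation:j * dilation + W]
    # pointwise 1x1 convolution mixes channels
    return np.einsum('oc,chw->ohw', pw, depthwise)
```

The depthwise step costs $C \cdot k^2$ parameters and the pointwise step $C_{out} \cdot C$, versus $C_{out} \cdot C \cdot k^2$ for a standard convolution with the same receptive field.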
Given the following machine learning model name: HyperDenseNet, provide a description of the model | **HyperDenseNet** is a 3-D fully convolutional neural network that extends the definition of [dense connectivity](https://paperswithcode.com/method/dense-connections) to multi-modal segmentation problems. Dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training; in particular, [DenseNet](https://paperswithcode.com/method/densenet), which connects each layer to every other layer in a feed-forward fashion, has shown impressive performance in natural image classification tasks. In HyperDenseNet, each imaging modality has its own path, and dense connections occur not only between pairs of layers within the same path but also between layers across different paths. This contrasts with existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. The network therefore has total freedom to learn more complex combinations between the modalities, within and in-between all levels of abstraction, which significantly increases its representational power. Extensive evaluations over two highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013 (the former focusing on six-month infant data and the latter on adult images), show that HyperDenseNet yields significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. A comprehensive experimental analysis of feature re-use further confirms the importance of hyper-dense connections in multi-modal representation learning. |
Given the following machine learning model name: WaveTTS, provide a description of the model | **WaveTTS** is a [Tacotron](https://paperswithcode.com/method/tacotron)-based text-to-speech architecture that has two loss functions: 1) time-domain loss, denoted as the waveform loss, that measures the distortion between the natural and generated waveform; and 2) frequency-domain loss, that measures the Mel-scale acoustic feature loss between the natural and generated acoustic features.
The motivation arises from [Tacotron 2](https://paperswithcode.com/method/tacotron-2). Here its feature prediction network is trained independently of the [WaveNet](https://paperswithcode.com/method/wavenet) vocoder. At run-time, the feature prediction network and WaveNet vocoder are artificially joined together. As a result, the framework suffers from the mismatch between frequency-domain acoustic features and time-domain waveform. To overcome such mismatch, WaveTTS uses a joint time-frequency domain loss for TTS that effectively improves the synthesized voice quality. |
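The two-term objective above can be sketched as a single function; the L1 distances and the `alpha` weighting here are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def joint_tf_loss(wav_gen, wav_ref, mel_gen, mel_ref, alpha=1.0):
    """Joint time-frequency loss sketch in the spirit of WaveTTS."""
    time_loss = np.abs(wav_gen - wav_ref).mean()   # waveform (time-domain) loss
    freq_loss = np.abs(mel_gen - mel_ref).mean()   # Mel-feature (frequency-domain) loss
    return time_loss + alpha * freq_loss
```

Training against both terms couples the acoustic-feature predictor and the waveform generator, which is what removes the mismatch described above.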
Given the following machine learning model name: Sparse Autoencoder, provide a description of the model | A **Sparse Autoencoder** is a type of autoencoder that employs sparsity to achieve an information bottleneck. Specifically, the loss function is constructed so that activations are penalized within a layer. The sparsity constraint can be imposed with [L1 regularization](https://paperswithcode.com/method/l1-regularization) or a KL divergence between the expected average neuron activation and a target sparsity level $p$.
Image: [Jeremy Jordan](https://www.jeremyjordan.me/autoencoders/). Read his blog post for a detailed summary of autoencoders. |
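Both penalty variants are short enough to sketch in numpy (function names and the `beta`/`lam` coefficients are illustrative):

```python
import numpy as np

def kl_sparsity_penalty(activations, p=0.05, beta=1.0):
    """KL-divergence sparsity penalty (sketch): pushes the mean activation
    p_hat of each hidden unit toward a small target rate p."""
    p_hat = np.clip(activations.mean(axis=0), 1e-7, 1 - 1e-7)  # avoid log(0)
    kl = p * np.log(p / p_hat) + (1 - p) * np.log((1 - p) / (1 - p_hat))
    return beta * kl.sum()

def l1_sparsity_penalty(activations, lam=1e-3):
    """L1 alternative: directly penalizes activation magnitude."""
    return lam * np.abs(activations).sum()
```

Either term is added to the reconstruction loss; the KL version targets an average firing rate, while L1 shrinks individual activations toward zero.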
Given the following machine learning model name: Absolute Learning Progress and Gaussian Mixture Models for Automatic Curriculum Learning, provide a description of the model | **ALP-GMM** is an algorithm that learns to generate a learning curriculum for black-box reinforcement learning agents, sequentially sampling parameters that control a stochastic procedural generation of tasks or environments. |
Given the following machine learning model name: Surrogate Lagrangian Relaxation, provide a description of the model | Please enter a description about the method here |
Given the following machine learning model name: XLM-R, provide a description of the model | **XLM-R** (XLM-RoBERTa) is a multilingual masked language model trained with the [RoBERTa](https://paperswithcode.com/method/roberta) objective on filtered CommonCrawl data covering roughly 100 languages, achieving strong performance on cross-lingual transfer benchmarks. |
Given the following machine learning model name: Conditional Convolutions for Instance Segmentation, provide a description of the model | CondInst is a simple yet effective instance segmentation framework. It eliminates ROI cropping and feature alignment with instance-aware mask heads. As a result, CondInst can solve instance segmentation with fully convolutional networks. CondInst is able to produce high-resolution instance masks without requiring longer computation time. Extensive experiments show that CondInst can achieve even better performance and inference speed than [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn). It can be a strong alternative to previous ROI-based instance segmentation methods. Code is at https://github.com/aim-uofa/AdelaiDet. |
Given the following machine learning model name: Multi-Heads of Mixed Attention, provide a description of the model | A **multi-head of mixed attention (MHMA)** combines both self- and cross-attention, encouraging high-level learning of interactions between entities captured in the various attention features. It is built with several attention heads, each of which can implement either self- or cross-attention. Self-attention is when the key and query features are the same or come from the same domain features; cross-attention is when the key and query features are generated from different features. Modeling MHMA allows a model to identify relationships between features of different domains. This is very useful in tasks involving relationship modeling, such as human-object interaction, tool-tissue interaction, man-machine interaction, and human-computer interfaces. |
Given the following machine learning model name: SqueezeNeXt, provide a description of the model | **SqueezeNeXt** is a type of convolutional neural network that uses the [SqueezeNet](https://paperswithcode.com/method/squeezenet) architecture as a baseline, but makes a number of changes. First, a more aggressive channel reduction is used by incorporating a two-stage squeeze module. This significantly reduces the total number of parameters used with the 3×3 convolutions. Secondly, it uses separable 3×3 convolutions to further reduce the model size, and removes the additional 1×1 branch after the squeeze module. Thirdly, the network uses an element-wise addition skip connection similar to that of the [ResNet](https://paperswithcode.com/method/resnet) architecture. |
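The parameter savings of this block design can be made concrete with a small counting sketch. The channel fractions below (squeeze to C/2 then C/4, separable 3×1 / 1×3 at C/2) follow the block diagram in the paper, but the exact ratios in any given layer are an assumption for illustration:

```python
def squeezenext_block_params(c_in, c_out):
    """Approximate parameter count of one SqueezeNeXt block (biases omitted).

    Layer sequence: two-stage 1x1 squeeze -> separable 3x1 then 1x3 -> 1x1 expand.
    """
    s1 = c_in * (c_in // 2)                    # first 1x1 squeeze:  C   -> C/2
    s2 = (c_in // 2) * (c_in // 4)             # second 1x1 squeeze: C/2 -> C/4
    sep = (c_in // 4) * (c_in // 2) * 3 \
        + (c_in // 2) * (c_in // 2) * 3        # separable 3x1 + 1x3 at C/2
    expand = (c_in // 2) * c_out               # final 1x1 expand back to C_out
    return s1 + s2 + sep + expand
```

For a 64-channel block this comes to 9,216 parameters versus 36,864 for a plain 3×3 convolution with the same input/output widths, a 4× reduction.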
Given the following machine learning model name: Style-based Recalibration Module, provide a description of the model | A **Style-based Recalibration Module (SRM)** is a module for convolutional neural networks that adaptively recalibrates intermediate feature maps by exploiting their styles. SRM first extracts the style information from each channel of the feature maps by style pooling, then estimates per-channel recalibration weight via channel-independent style integration. By incorporating the relative importance of individual styles into feature maps, SRM is aimed at enhancing the representational ability of a CNN.
The overall structure of SRM is illustrated in the Figure to the right. It consists of two main components: style pooling and style integration. The style pooling operator extracts style features from each channel by summarizing feature responses across spatial dimensions. It is followed by the style integration operator, which produces example-specific style weights by utilizing the style features via a channel-wise operation. The style weights finally recalibrate the feature maps to either emphasize or suppress their information. |
Given the following machine learning model name: Jigsaw, provide a description of the model | **Jigsaw** is a self-supervision approach that relies on jigsaw-like puzzles as the pretext task in order to learn image representations. |
Given the following machine learning model name: K-Net, provide a description of the model | **K-Net** is a framework for unified semantic and instance segmentation that segments both instances and semantic categories consistently by a group of learnable kernels, where each kernel is responsible for generating a mask for either a potential instance or a stuff class. It begins with a set of kernels that are randomly initialized, and learns the kernels in accordance to the segmentation targets at hand, namely, semantic kernels for semantic categories and instance kernels for instance identities. A simple combination of semantic kernels and instance kernels allows panoptic segmentation naturally. In the forward pass, the kernels perform [convolution](https://paperswithcode.com/method/convolution) on the image features to obtain the corresponding segmentation predictions.
K-Net is formulated so that it dynamically updates the kernels to make them conditional to their activations on the image. Such a content-aware mechanism is crucial to ensure that each kernel, especially an instance kernel, responds accurately to varying objects in an image. Through applying this adaptive kernel update strategy iteratively, K-Net significantly improves the discriminative ability of the kernels and boosts the final segmentation performance. It is noteworthy that this strategy universally applies to kernels for all the segmentation tasks.
It also utilises a bipartite matching strategy to assign learning targets for each kernel. This training approach is advantageous to conventional training strategies as it builds a one-to-one mapping between kernels and instances in an image. It thus resolves the problem of dealing with a varying number of instances in an image. In addition, it is purely mask-driven without involving boxes. Hence, K-Net is naturally [NMS](https://paperswithcode.com/method/non-maximum-suppression)-free and box-free, which is appealing to real-time applications. |
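The core forward-pass idea, kernels convolving image features to produce masks, reduces to a 1×1 convolution per kernel and can be sketched as follows (the iterative kernel update and bipartite matching are omitted; names are illustrative):

```python
import numpy as np

def knet_masks(features, kernels):
    """K-Net mask prediction sketch: each learnable kernel acts as a 1x1
    convolution over the feature map, and its sigmoid response is that
    kernel's mask (one kernel per potential instance or stuff class)."""
    # features: (C, H, W); kernels: (K, C)
    logits = np.einsum('kc,chw->khw', kernels, features)
    return 1.0 / (1.0 + np.exp(-logits))   # (K, H, W) soft masks
```

In the full model these initial masks are then used to update the kernels (the content-aware mechanism described above) before predicting again.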