prompt | description |
|---|---|
Given the following machine learning model name: N-step Returns, provide a description of the model | **$n$-step Returns** are used for value function estimation in reinforcement learning. Specifically, for $n$ steps we can write the $n$-step return as:
$$ R\_{t}^{(n)} = r\_{t+1} + \gamma{r}\_{t+2} + \cdots + \gamma^{n-1}r\_{t+n} + \gamma^{n}V\_{t}\left(s\_{t+n}\right) $$
We can then write an $n$-step backup, in the style of TD learning, as:
$$ \Delta{V}\_{t}\left(s\_{t}\right) = \alpha\left[R\_{t}^{(n)} - V\_{t}\left(s\_{t}\right)\right] $$
Multi-step returns often lead to faster learning with suitably tuned $n$.
Image Credit: Sutton and Barto, Reinforcement Learning |
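As a rough sketch of the two formulas above (helper names are illustrative, not from the source):

```python
def n_step_return(rewards, v_boot, gamma):
    """R_t^{(n)}: discounted sum of the n rewards r_{t+1}..r_{t+n}
    plus the bootstrapped value gamma^n * V_t(s_{t+n})."""
    n = len(rewards)
    ret = sum(gamma ** k * r for k, r in enumerate(rewards))
    return ret + gamma ** n * v_boot

def n_step_update(v_t, target, alpha):
    # Delta V_t(s_t) = alpha * (R_t^{(n)} - V_t(s_t)), applied to V_t(s_t)
    return v_t + alpha * (target - v_t)

# two steps with gamma = 0.5: 1 + 0.5*2 + 0.25*4 = 3.0
r = n_step_return([1.0, 2.0], v_boot=4.0, gamma=0.5)
```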
Given the following machine learning model name: PeleeNet, provide a description of the model | **PeleeNet** is a convolutional neural network and object detection backbone that is a variation of [DenseNet](https://paperswithcode.com/method/densenet) with optimizations to meet a memory and computational budget. Unlike competing networks, it does not use depthwise convolutions and instead relies on regular convolutions. |
Given the following machine learning model name: GreedyNAS-C, provide a description of the model | **GreedyNAS-C** is a convolutional neural network discovered using the [GreedyNAS](https://paperswithcode.com/method/greedynas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks used are inverted residual blocks (from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2)) and squeeze-and-excitation blocks. |
Given the following machine learning model name: Memory-Associated Differential Learning, provide a description of the model | **Memory-Associated Differential** (**MAD**) Learning was developed to infer from memorized facts that we already know in order to predict what we want to know.
Image source: [Luo et al.](https://arxiv.org/pdf/2102.05246v1.pdf) |
Given the following machine learning model name: Tunable Network, provide a description of the model | |
Given the following machine learning model name: WaveGrad DBlock, provide a description of the model | **WaveGrad DBlocks** are used to downsample the temporal dimension of the noisy waveform in [WaveGrad](https://paperswithcode.com/method/wavegrad). They are similar to UBlocks except that only one [residual block](https://paperswithcode.com/method/residual-block) is included. The dilation factors are 1, 2, and 4 in the main branch. Orthogonal initialization is used. |
Given the following machine learning model name: SAFRAN - Scalable and fast non-redundant rule application, provide a description of the model | SAFRAN is a rule application framework which aggregates rules through a scalable clustering algorithm. |
Given the following machine learning model name: Self-adaptive Training, provide a description of the model | **Self-adaptive Training** is a training algorithm that dynamically corrects problematic training labels by model predictions to improve generalization of deep learning for potentially corrupted training data. Accumulated predictions are used to augment the training dynamics. The use of an exponential-moving-average scheme alleviates the instability issue of model predictions, smooths out the training target during the training process and enables the algorithm to completely change the training labels if necessary. |
Given the following machine learning model name: Harmonic Block, provide a description of the model | A **Harmonic Block** is an image model component that utilizes [Discrete Cosine Transform](https://paperswithcode.com/method/discrete-cosine-transform) (DCT) filters. Convolutional neural networks (CNNs) learn filters in order to capture local correlation patterns in feature space. In contrast, DCT has preset spectral filters, which can be better for compressing information (due to the presence of redundancy in the spectral domain).
DCT has been successfully used in JPEG encoding to transform image blocks into spectral representations that capture the most information with a small number of coefficients. Harmonic blocks learn how to optimally combine spectral coefficients at every layer to produce a fixed-size representation defined as a weighted sum of responses to DCT filters. The use of DCT filters also makes it possible to address the task of model compression. |
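As an illustration of the fixed spectral filters involved (not the paper's exact parameterization), a bank of $3 \times 3$ orthonormal DCT-II filters, whose responses a harmonic block combines with learned weights, can be built as:

```python
import numpy as np

def dct_filters(k=3):
    # orthonormal 1D DCT-II basis: basis[u, m] = c_u * cos(pi * (m + 0.5) * u / k)
    n = np.arange(k)
    basis = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / k)
    basis[0] *= 1.0 / np.sqrt(2.0)
    basis *= np.sqrt(2.0 / k)
    # 2D filters are outer products of the 1D basis vectors
    return np.stack([np.outer(basis[u], basis[v])
                     for u in range(k) for v in range(k)])

filters = dct_filters(3)  # shape (9, 3, 3): fixed spectral filters
# a harmonic block's output is a learned weighted sum of responses to these filters
```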
Given the following machine learning model name: Sharpness-Aware Minimization, provide a description of the model | **Sharpness-Aware Minimization**, or **SAM**, is a procedure that improves model generalization by simultaneously minimizing loss value and loss sharpness. SAM functions by seeking parameters that lie in neighborhoods having uniformly low loss value (rather than parameters that only themselves have low loss value). |
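A minimal sketch of the two-step SAM update on a toy quadratic loss (the loss, names, and hyperparameters here are illustrative, not from the paper):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    # step 1: ascend to the (approximate) worst point in a rho-ball around w
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # step 2: descend using the gradient evaluated at the perturbed point
    return w - lr * grad_fn(w + eps)

# toy loss L(w) = 0.5 * ||w - w_star||^2
w_star = np.array([1.0, -2.0])
grad = lambda w: w - w_star
w = np.zeros(2)
for _ in range(300):
    w = sam_step(w, grad)
```

The perturbation keeps the iterate hovering within roughly `lr * rho` of the minimum rather than settling exactly at it, which is the price of optimizing the neighborhood-worst-case loss.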
Given the following machine learning model name: Sinusoidal Representation Network, provide a description of the model | **Siren**, or **Sinusoidal Representation Network**, is a periodic activation function for implicit neural representations. Specifically, it uses the sine as a periodic activation function:
$$ \Phi\left(x\right) = \textbf{W}\_{n}\left(\phi\_{n-1} \circ \phi\_{n-2} \circ \dots \circ \phi\_{0} \right)\left(x\right) + \textbf{b}\_{n} $$
where each layer is $\phi\_{i}\left(x\_{i}\right) = \sin\left(\textbf{W}\_{i}x\_{i} + \textbf{b}\_{i}\right)$. |
Given the following machine learning model name: CharacterBERT, provide a description of the model | CharacterBERT is a variant of [BERT](https://paperswithcode.com/method/bert) that **drops the wordpiece system** and **replaces it with a CharacterCNN module** just like the one [ELMo](https://paperswithcode.com/method/elmo) uses to produce its first layer representation. This allows CharacterBERT to represent any input token without splitting it into wordpieces. Moreover, this frees BERT from the burden of a domain-specific wordpiece vocabulary which may not be suited to your domain of interest (e.g. medical domain). Finally, it allows the model to be more robust to noisy inputs. |
Given the following machine learning model name: Fast-YOLOv3, provide a description of the model | |
Given the following machine learning model name: Self-Training with Task Augmentation, provide a description of the model | **STraTA**, or **Self-Training with Task Augmentation**, is a self-training approach that builds on two key ideas for effectively leveraging unlabeled data. First, STraTA uses task augmentation, a technique that synthesizes a large amount of data for auxiliary-task fine-tuning from target-task unlabeled texts. Second, STraTA performs self-training by further fine-tuning the strong base model created by task augmentation on a broad distribution of pseudo-labeled data.
In task augmentation, we train an NLI data generation model and use it to synthesize a large amount of in-domain NLI training data for each given target task, which is then used for auxiliary (intermediate) fine-tuning. The self-training algorithm iteratively learns a better model using a concatenation of labeled and pseudo-labeled examples. At each iteration, we always start with the auxiliary-task model produced by task augmentation and train on a broad distribution of pseudo-labeled data. |
Given the following machine learning model name: Involution, provide a description of the model | **Involution** is an atomic operation for deep neural networks that inverts the design principles of convolution: involution kernels are distinct in the spatial extent but shared across channels. If involution kernels were parameterized as fixed-sized matrices like convolution kernels and updated using the back-propagation algorithm, the learned kernels would be impeded from transferring between input images with variable resolutions; instead, the kernels are generated conditioned on the input feature map itself.
The authors argue for two benefits of involution over convolution: (i) involution can summarize the context in a wider spatial arrangement, thus overcoming the difficulty of modeling long-range interactions; (ii) involution can adaptively allocate the weights over different positions, so as to prioritize the most informative visual elements in the spatial domain. |
Given the following machine learning model name: Lbl2Vec, provide a description of the model | |
Given the following machine learning model name: Scale Aggregation Block, provide a description of the model | A **Scale Aggregation Block** concatenates feature maps at a wide range of scales. Feature maps for each scale are generated by a stack of downsampling, [convolution](https://paperswithcode.com/method/convolution) and upsampling operations. The proposed scale aggregation block is a standard computational module which readily replaces any given transformation $\mathbf{Y}=\mathbf{T}(\mathbf{X})$, where $\mathbf{X}\in \mathbb{R}^{H\times W\times C}$, $\mathbf{Y}\in \mathbb{R}^{H\times W\times C_o}$ with $C$ and $C_o$ being the input and output channel number respectively. $\mathbf{T}$ is any operator such as a convolution layer or a series of convolution layers. Assume we have $L$ scales. Each scale $l$ is generated by sequentially conducting a downsampling $\mathbf{D}_l$, a transformation $\mathbf{T}_l$ and an upsampling operator $\mathbf{U}_l$:
$$
\mathbf{X}^{'}_l=\mathbf{D}_l(\mathbf{X}),
$$
$$
\mathbf{Y}^{'}_l=\mathbf{T}_l(\mathbf{X}^{'}_l),
$$
$$
\mathbf{Y}_l=\mathbf{U}_l(\mathbf{Y}^{'}_l),
$$
where $\mathbf{X}^{'}_l\in \mathbb{R}^{H_l\times W_l\times C}$,
$\mathbf{Y}^{'}_l\in \mathbb{R}^{H_l\times W_l\times C_l}$, and
$\mathbf{Y}_l\in \mathbb{R}^{H\times W\times C_l}$.
Notably, $\mathbf{T}_l$ has a similar structure to $\mathbf{T}$.
We can concatenate all $L$ scales together, getting
$$
\mathbf{Y}^{'}=\Vert^L_1\mathbf{U}_l(\mathbf{T}_l(\mathbf{D}_l(\mathbf{X}))),
$$
where $\Vert$ indicates concatenating feature maps along the channel dimension, and $\mathbf{Y}^{'} \in \mathbb{R}^{H\times W\times \sum^L_1 C_l}$ is the final output feature maps of the scale aggregation block.
In the reference implementation, the downsampling $\mathbf{D}_l$ with factor $s$ is implemented by a max pool layer with $s\times s$ kernel size and $s$ stride. The upsampling $\mathbf{U}_l$ is implemented by resizing with the nearest neighbor interpolation. |
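A minimal numpy sketch of the downsample-transform-upsample-concatenate pipeline above, with a pointwise stand-in for the learned transformation $\mathbf{T}_l$ (the helper names are illustrative):

```python
import numpy as np

def downsample(x, s):
    # s x s max pooling with stride s; H and W must be divisible by s
    H, W, C = x.shape
    return x.reshape(H // s, s, W // s, s, C).max(axis=(1, 3))

def upsample(x, s):
    # nearest-neighbour resize back to the input resolution
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def scale_aggregation(x, transforms, scales):
    # Y' = concatenation over l of U_l(T_l(D_l(X))) along the channel axis
    outs = [upsample(t(downsample(x, s)), s)
            for t, s in zip(transforms, scales)]
    return np.concatenate(outs, axis=-1)

x = np.random.rand(8, 8, 3)
# two scales (factors 1 and 2) with pointwise stand-ins for T_l
y = scale_aggregation(x, [lambda z: z, lambda z: 2.0 * z], [1, 2])
```

In a real network each `transforms[l]` would be a convolution layer (or stack of layers) rather than a pointwise map.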
Given the following machine learning model name: Implicit Subspace Prior Learning, provide a description of the model | **Implicit Subspace Prior Learning**, or **ISPL**, is a framework for dual-blind face restoration, with two major distinctions from previous restoration methods: 1) Instead of assuming an explicit degradation function between the LQ and HQ domains, it establishes an implicit correspondence between both domains via a mutual embedding space, thus avoiding solving the pathological inverse problem directly. 2) It uses a subspace prior decomposition and fusion mechanism to dynamically handle inputs at varying degradation levels with consistently high-quality restoration results. |
Given the following machine learning model name: Point-GNN, provide a description of the model | **Point-GNN** is a graph neural network for detecting objects from a LiDAR point cloud. It predicts the category and shape of the object that each vertex in the graph belongs to. In Point-GNN, there is an auto-registration mechanism to reduce translation variance, as well as a box merging and scoring operation to combine detections from multiple vertices accurately. |
Given the following machine learning model name: SERLU, provide a description of the model | **SERLU**, or **Scaled Exponentially-Regularized Linear Unit**, is a type of activation function. The new function introduces a bump-shaped function in the region of negative input. The bump-shaped function has approximately zero response to large negative input while being able to push the output of SERLU towards zero mean statistically.
$$ \text{SERLU}\left(x\right) = \lambda\_{serlu}x \text{ if } x \geq 0 $$
$$ \text{SERLU}\left(x\right) = \lambda\_{serlu}\alpha\_{serlu}xe^{x} \text{ if } x < 0 $$
where the two parameters $\lambda\_{serlu} > 0$ and $\alpha\_{serlu} > 0$ remain to be specified. |
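A numpy sketch of the piecewise definition (the default parameter values below are placeholders, since the text leaves $\lambda\_{serlu}$ and $\alpha\_{serlu}$ unspecified):

```python
import numpy as np

def serlu(x, lam=1.0, alpha=1.0):
    # lam * x for x >= 0; lam * alpha * x * exp(x) for x < 0 (bump-shaped)
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, lam * x, lam * alpha * x * np.exp(x))
```

Note that the negative branch decays to zero for large negative inputs, which is the "approximately zero response" property described above.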
Given the following machine learning model name: ClariNet, provide a description of the model | **ClariNet** is an end-to-end text-to-speech architecture. Unlike previous TTS systems which use text-to-spectrogram models with a separate waveform [synthesizer](https://paperswithcode.com/method/synthesizer) (vocoder), ClariNet is a text-to-wave architecture that is fully convolutional and can be trained from scratch. In ClariNet, the [WaveNet](https://paperswithcode.com/method/wavenet) module is conditioned on the hidden states instead of the mel-spectrogram. The architecture is otherwise based on [Deep Voice 3](https://paperswithcode.com/method/deep-voice-3). |
Given the following machine learning model name: Step Decay, provide a description of the model | **Step Decay** is a learning rate schedule that drops the learning rate by a factor every few epochs, where the number of epochs is a hyperparameter.
Image Credit: [Suki Lau](https://towardsdatascience.com/learning-rate-schedules-and-adaptive-learning-rate-methods-for-deep-learning-2c8f433990d1) |
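A one-line sketch of the schedule (the helper name and hyperparameters are illustrative):

```python
def step_decay(lr0, drop_factor, epochs_per_drop, epoch):
    # drop the learning rate by drop_factor every epochs_per_drop epochs
    return lr0 * drop_factor ** (epoch // epochs_per_drop)

# e.g. halve an initial learning rate of 0.1 every 10 epochs
lr_at_25 = step_decay(0.1, 0.5, 10, epoch=25)
```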
Given the following machine learning model name: Panoptic FPN, provide a description of the model | A **Panoptic FPN** is an extension of an [FPN](https://paperswithcode.com/method/fpn) that can generate both instance and semantic segmentations via FPN. The approach starts with an FPN backbone and adds a branch for performing semantic segmentation in parallel with the existing region-based branch for instance segmentation. No changes are made to the FPN backbone when adding the dense-prediction branch, making it compatible with existing instance segmentation methods.
The new semantic segmentation branch achieves its goal as follows. Starting from the deepest FPN level (at 1/32 scale), we perform three upsampling stages to yield a feature map at 1/4 scale, where each upsampling stage consists of 3×3 [convolution](https://paperswithcode.com/method/convolution), group norm, [ReLU](https://paperswithcode.com/method/relu), and 2× bilinear upsampling. This strategy is repeated for FPN scales 1/16, 1/8, and 1/4 (with progressively fewer upsampling stages). The result is a set of feature maps at the same 1/4 scale, which are then element-wise summed. A final 1×1 convolution, 4× bilinear upsampling, and [softmax](https://paperswithcode.com/method/softmax) are used to generate the per-pixel class labels at the original image resolution. In addition to stuff classes, this branch also outputs a special ‘other’ class for all pixels belonging to objects (to avoid predicting stuff classes for such pixels). |
Given the following machine learning model name: Deep Deterministic Policy Gradient, provide a description of the model | **DDPG**, or **Deep Deterministic Policy Gradient**, is an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. It combines the actor-critic approach with insights from [DQNs](https://paperswithcode.com/method/dqn): in particular, the insights that 1) the network is trained off-policy with samples from a replay buffer to minimize correlations between samples, and 2) the network is trained with a target Q network to give consistent targets during temporal difference backups. DDPG makes use of the same ideas along with [batch normalization](https://paperswithcode.com/method/batch-normalization). |
Given the following machine learning model name: Hourglass Module, provide a description of the model | An **Hourglass Module** is an image block module used mainly for pose estimation tasks. The design of the hourglass is motivated by the need to capture information at every scale. While local evidence is essential for identifying features like faces and hands, a final pose estimate requires a coherent understanding of the full body. The person’s orientation, the arrangement of their limbs, and the relationships of adjacent joints are among the many cues that are best recognized at different scales in the image. The hourglass is a simple, minimal design that has the capacity to capture all of these features and bring them together to output pixel-wise predictions.
The network must have some mechanism to effectively process and consolidate features across scales. The Hourglass uses a single pipeline with skip layers to preserve spatial information at each resolution. The network reaches its lowest resolution at 4x4 pixels allowing smaller spatial filters to be applied that compare features across the entire space of the image.
The hourglass is set up as follows: Convolutional and [max pooling](https://paperswithcode.com/method/max-pooling) layers are used to process features down to a very low resolution. At each max pooling step, the network branches off and applies more convolutions at the original pre-pooled resolution. After reaching the lowest resolution, the network begins the top-down sequence of upsampling and combination of features across scales. To bring together information across two adjacent resolutions, we do nearest neighbor upsampling of the lower resolution followed by an elementwise addition of the two sets of features. The topology of the hourglass is symmetric, so for every layer present on the way down there is a corresponding layer going up.
After reaching the output resolution of the network, two consecutive rounds of 1x1 convolutions are applied to produce the final network predictions. The output of the network is a set of heatmaps where for a given [heatmap](https://paperswithcode.com/method/heatmap) the network predicts the probability of a joint’s presence at each and every pixel. |
Given the following machine learning model name: Latent Optimisation, provide a description of the model | **Latent Optimisation** is a technique used in generative adversarial networks to improve sample quality by refining the latent source $z$. Specifically, it exploits knowledge from the discriminator $D$ to refine $z$. Intuitively, the gradient $\nabla\_{z}f\left(z\right) = \frac{\partial{f}\left(z\right)}{\partial{z}}$ points in the direction that better satisfies the discriminator $D$, which implies better samples. Therefore, instead of using the randomly sampled $z \sim p\left(z\right)$, we use the optimised latent:
$$ \Delta{z} = \alpha\frac{\partial{f}\left(z\right)}{\partial{z}} $$
$$ z' = z + \Delta{z} $$
Source: [LOGAN](https://paperswithcode.com/method/logan) |
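A toy sketch of the update $z' = z + \alpha \, \partial f(z)/\partial z$ (the quadratic "discriminator score" and all names here are illustrative):

```python
import numpy as np

def latent_optimise(z, grad_f, alpha, steps=1):
    # z' = z + alpha * d f(z) / d z : nudge z toward a higher discriminator score
    for _ in range(steps):
        z = z + alpha * grad_f(z)
    return z

# toy score f(z) = -||z - mu||^2 with gradient 2 * (mu - z)
mu = np.array([0.5, -0.5])
grad_f = lambda z: 2.0 * (mu - z)
z = latent_optimise(np.zeros(2), grad_f, alpha=0.25, steps=20)
```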
Given the following machine learning model name: RESCAL, provide a description of the model | **RESCAL** is a bilinear tensor factorization model for relational learning and knowledge graph link prediction. Each entity is represented by a vector and each relation by a matrix, and a triple $(s, r, o)$ is scored as $\mathbf{a}\_{s}^{\top}\mathbf{R}\_{r}\mathbf{a}\_{o}$. |
Given the following machine learning model name: Florence, provide a description of the model | Florence is a computer vision foundation model that aims to learn universal visual-language representations that can be adapted to various computer vision tasks: visual question answering, image captioning, and video retrieval, among others. Florence's workflow consists of data curation, unified learning, Transformer architectures, and adaptation. Florence is pre-trained in an image-label-description space using unified image-text contrastive learning. It uses a two-tower architecture: a 12-layer Transformer as the language encoder and a Vision Transformer as the image encoder. Two linear projection layers are added on top of the image and language encoders to match the dimensions of image and language features. Compared to previous methods for cross-modal shared representations, Florence expands beyond simple classification and retrieval capabilities to advanced representations that support object-level, multi-modal, and video tasks. |
Given the following machine learning model name: CuBERT, provide a description of the model | **CuBERT**, or **Code Understanding BERT**, is a [BERT](https://paperswithcode.com/method/bert) based model for code understanding. In order to achieve this, the authors curate a massive corpus of Python programs collected from GitHub. GitHub projects are known to contain a large amount of duplicate code. To avoid biasing the model to such duplicated code, authors perform deduplication using the method of [Allamanis (2018)](https://arxiv.org/abs/1812.06469). The resulting corpus has 7.4 million files with a total of 9.3 billion tokens (16 million unique). |
Given the following machine learning model name: Differentiable Architecture Search, provide a description of the model | **Differentiable Architecture Search** (**DARTS**) is a method for efficient architecture search. The search space is made continuous so that the architecture can be optimized with respect to its validation set performance through gradient descent. |
Given the following machine learning model name: MARLIN, provide a description of the model | |
Given the following machine learning model name: Fraternal Dropout, provide a description of the model | **Fraternal Dropout** is a regularization method for recurrent neural networks that trains two identical copies of an RNN (that share parameters) with different [dropout](https://paperswithcode.com/method/dropout) masks while minimizing the difference between their (pre-[softmax](https://paperswithcode.com/method/softmax)) predictions. This encourages the representations of RNNs to be invariant to dropout mask, thus being robust. |
Given the following machine learning model name: Stochastic Gradient Descent, provide a description of the model | **Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:
$$ w\_{t+1} = w\_{t} - \eta\hat{\nabla}\_{w}{L(w\_{t})} $$
where $\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.
(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/)) |
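A minimal sketch of the minibatch update above on a toy least-squares problem (the data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w  # noiseless targets

w = np.zeros(3)
eta, batch = 0.1, 32
for _ in range(100):  # epochs
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        # minibatch estimate of the gradient of 0.5 * mean squared error
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= eta * grad  # w_{t+1} = w_t - eta * grad
```

Each update uses only a 32-example estimate of the gradient rather than all 256 examples, which is exactly the redundancy-reduction trade described in the text.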
Given the following machine learning model name: Segregated Attention Network, provide a description of the model | |
Given the following machine learning model name: Switchable Atrous Convolution, provide a description of the model | **Switchable Atrous Convolution (SAC)** softly switches the convolutional computation between different atrous rates and gathers the results using switch functions. The switch functions are spatially dependent, i.e., each location of the feature map might have different switches to control the outputs of SAC. To use SAC in a detector, we convert all the standard 3x3 convolutional layers in the bottom-up backbone to SAC. |
Given the following machine learning model name: Pointer Sentinel-LSTM, provide a description of the model | The **Pointer Sentinel-LSTM mixture model** is a type of recurrent neural network that combines the advantages of standard [softmax](https://paperswithcode.com/method/softmax) classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, the model allows the pointer component itself to decide when to use the softmax vocabulary through a sentinel. |
Given the following machine learning model name: MADDPG, provide a description of the model | **MADDPG**, or **Multi-agent DDPG**, extends [DDPG](https://paperswithcode.com/method/ddpg) into a multi-agent policy gradient algorithm where decentralized agents learn a centralized critic based on the observations and actions of all agents. It leads to learned policies that only use local information (i.e. their own observations) at execution time, does not assume a differentiable model of the environment dynamics or any particular structure on the communication method between agents, and is applicable not only to cooperative interaction but to competitive or mixed interaction involving both physical and communicative behavior. The critic is augmented with extra information about the policies of other agents, while the actor only has access to local information. After training is completed, only the local actors are used at execution phase, acting in a decentralized manner. |
Given the following machine learning model name: RESCAL with Relation Prediction, provide a description of the model | RESCAL model trained with a relation prediction objective on top of the 1vsAll loss |
Given the following machine learning model name: MuZero, provide a description of the model | **MuZero** is a model-based reinforcement learning algorithm. It builds upon [AlphaZero](https://paperswithcode.com/method/alphazero)'s search and search-based policy iteration algorithms, but incorporates a learned model into the training procedure.
The main idea of the algorithm is to predict those aspects of the future that are directly relevant for planning. The model receives the observation (e.g. an image of the Go board or the Atari screen) as an
input and transforms it into a hidden state. The hidden state is then updated iteratively by a recurrent process that receives the previous hidden state and a hypothetical next action. At every one of these steps the model predicts the policy (e.g. the move to play), value function (e.g. the predicted winner), and immediate reward (e.g. the points scored by playing a move). The model is trained end-to-end, with the sole objective of accurately estimating these three important quantities, so as to match the improved estimates of policy and value generated by search as well as the observed reward.
There is no direct constraint or requirement for the hidden state to capture all information necessary to reconstruct the original observation, drastically reducing the amount of information the model has to maintain and predict; nor is there any requirement for the hidden state to match the unknown, true state of the environment; nor any other constraints on the semantics of state. Instead, the hidden states are free to represent state in whatever way is relevant to predicting current and future values and policies. Intuitively, the agent can invent, internally, the rules or dynamics that lead to most accurate planning. |
Given the following machine learning model name: Flow Alignment Module, provide a description of the model | **Flow Alignment Module**, or **FAM**, is a flow-based alignment module for scene parsing that learns Semantic Flow between feature maps of adjacent levels and broadcasts high-level features to high-resolution features effectively and efficiently. The concept of Semantic Flow is inspired by optical flow, which is widely used in video processing tasks to represent the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by relative motion. The authors postulate that the relationship between two feature maps of arbitrary resolutions from the same image can also be represented by the "motion" of every pixel from one feature map to the other. Once precise Semantic Flow is obtained, the network is able to propagate semantic features with minimal information loss.
In the FAM module, the transformed high-resolution feature map is combined with the low-resolution feature map to generate the semantic flow field, which is then used to warp the low-resolution feature map to the high resolution. |
Given the following machine learning model name: StarReLU, provide a description of the model | **StarReLU** is an activation function defined as $s \cdot (\mathrm{ReLU}(x))^2 + b$,
where $s \in \mathbb{R}$ and $b \in \mathbb{R}$ are shared across all channels and can either be set as constants ($s = 0.8944$, $b = -0.4472$) or treated as learnable parameters. |
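A direct numpy sketch, using the constant values quoted above as defaults:

```python
import numpy as np

def star_relu(x, s=0.8944, b=-0.4472):
    # s * relu(x)^2 + b, with scalar scale and bias shared across channels
    return s * np.maximum(x, 0.0) ** 2 + b
```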
Given the following machine learning model name: GCNII, provide a description of the model | **GCNII** is an extension of [Graph Convolution Networks](https://www.paperswithcode.com/method/gcn) with two new techniques, initial residual and identity mapping, to tackle the problem of oversmoothing -- where stacking more layers and adding non-linearity tends to degrade performance. At each layer, initial residual constructs a skip connection from the input layer, while identity mapping adds an identity matrix to the weight matrix. |
Given the following machine learning model name: Generalizable Node Injection Attack, provide a description of the model | **Generalizable Node Injection Attack**, or **G-NIA**, is an attack scenario for graph neural networks where the attacker injects malicious nodes, rather than modifying original nodes or edges, to degrade the performance of GNNs. G-NIA generates the discrete edges by Gumbel-Top-$k$, following OPTI, and captures the coupling effect between network structure and node features with a carefully designed model.
G-NIA explicitly models the most critical feature propagation via joint modeling. Specifically, the malicious attributes are used to guide the generation of edges, modeling the influence of attributes and edges. G-NIA also adopts a model-based framework, utilizing useful information from attacking during model training, as well as saving computational cost during inference by avoiding re-optimization. |
Given the following machine learning model name: Gated Linear Network, provide a description of the model | A **Gated Linear Network**, or **GLN**, is a type of backpropagation-free neural architecture. What distinguishes GLNs from contemporary neural networks is the distributed and local nature of their credit assignment mechanism; each neuron directly predicts the target, forgoing the ability to learn feature representations in favor of rapid online learning. Individual neurons can model nonlinear functions via the use of data-dependent gating in conjunction with online convex optimization.
GLNs are feedforward networks composed of many layers of gated geometric mixing neurons as shown in the figure. Each neuron in a given layer outputs a gated geometric mixture of the predictions from the previous layer, with the final layer consisting of just a single neuron. In a supervised learning setting, a GLN is trained on (side information, base predictions, label) triplets $\left(z\_{t}, p\_{t}, x\_{t}\right)_{t=1,2,3, \ldots}$ derived from input-label pairs $\left(z\_{t}, x\_{t}\right)$. There are two types of input to neurons in the network: the first is the side information $z\_{t}$, which can be thought of as the input features; the second is the input to the neuron, which will be the predictions output by the previous layer, or in the case of layer 0, some (optionally) provided base predictions $p\_{t}$ that typically will be a function of $z\_{t}$. Each neuron will also take in a constant bias prediction, which helps empirically and is essential for universality guarantees.
Weights are learnt in a Gated Linear Network using Online Gradient Descent (OGD) locally at each neuron. The key observation is that as each neuron $(i, k)$ in layers $i>0$ is itself a gated geometric mixture, all of these neurons can be thought of as individually predicting the target. Given side information $z$, each neuron $(i, k)$ suffers a loss convex in its active weights $u:=w\_{i k c\_{i k}(z)}$ of
$$
\ell\_{t}(u):=-\log \left(\operatorname{GEO}\_{u}\left(x_{t} ; p\_{i-1}\right)\right)
$$ |
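A toy sketch of a single gated geometric mixing neuron with halfspace gating, trained locally by online gradient descent; all names, the gating scheme details, and hyperparameters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def logit(p):
    return np.log(p / (1.0 - p))

class GatedGeometricNeuron:
    def __init__(self, n_inputs, n_halfspaces, side_dim, rng):
        # one weight vector per gate context (2**n_halfspaces context cells)
        self.w = np.full((2 ** n_halfspaces, n_inputs), 1.0 / n_inputs)
        self.h = rng.normal(size=(n_halfspaces, side_dim))

    def context(self, z):
        # which side of each gating hyperplane the side info z falls on
        bits = (self.h @ z > 0).astype(int)
        return int("".join(map(str, bits)), 2)

    def predict(self, z, p):
        # gated geometric mixture: sigmoid(w_c . logit(p))
        return sigmoid(self.w[self.context(z)] @ logit(p))

    def update(self, z, p, x, lr=0.5):
        # local OGD on the log loss; the gradient is (prediction - label) * logit(p)
        c = self.context(z)
        pred = sigmoid(self.w[c] @ logit(p))
        self.w[c] -= lr * (pred - x) * logit(p)
        return pred

rng = np.random.default_rng(1)
neuron = GatedGeometricNeuron(n_inputs=1, n_halfspaces=2, side_dim=2, rng=rng)
for _ in range(200):
    x = rng.integers(0, 2)               # binary target
    p = np.array([0.7 if x else 0.3])    # informative but under-confident base prediction
    z = rng.normal(size=2)               # side information
    neuron.update(z, p, x)
```

Because the base prediction is informative, each context cell's weight grows above its initial value, sharpening the neuron's output toward the target.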
Given the following machine learning model name: PREDATOR, provide a description of the model | **PREDATOR** is a model for pairwise point-cloud registration with deep attention to the overlap region. Its key novelty is an overlap-attention block for early information exchange between the latent encodings of the two point clouds. In this way the subsequent decoding of the latent representations into per-point features is conditioned on the respective other point cloud, and thus can predict which points are not only salient, but also lie in the overlap region between the two point clouds. |
Given the following machine learning model name: Visformer, provide a description of the model | **Visformer**, or **Vision-friendly Transformer**, is an architecture that combines [Transformer](https://paperswithcode.com/methods/category/transformers)-based architectural features with those from [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks) architectures. Visformer adopts the stage-wise design for higher base performance. But [self-attentions](https://paperswithcode.com/method/multi-head-attention) are only utilized in the last two stages, considering that self-attention in the high-resolution stage is relatively inefficient even when the FLOPs are balanced. Visformer employs [bottleneck blocks](https://paperswithcode.com/method/bottleneck-residual-block) in the first stage and utilizes [group 3 × 3 convolutions](https://paperswithcode.com/method/grouped-convolution) in bottleneck blocks inspired by [ResNeXt](https://paperswithcode.com/method/resnext). It also introduces [BatchNorm](https://paperswithcode.com/method/batch-normalization) to patch embedding modules as in CNNs. |
Given the following machine learning model name: AdapTive Meta Optimizer, provide a description of the model | This method combines multiple optimization techniques, such as [ADAM](https://paperswithcode.com/method/adam) and [SGD](https://paperswithcode.com/method/sgd) or PADAM, and can be applied to any pair of optimizers.
Image credit: [Combining Optimization Methods Using an Adaptive Meta Optimizer](https://www.mdpi.com/1999-4893/14/6/186) |
Given the following machine learning model name: Residual Normal Distribution, provide a description of the model | **Residual Normal Distributions** are used to help the optimization of VAEs, preventing optimization from entering an unstable region. This can happen due to sharp gradients caused in situations where the encoder and decoder produce distributions far away from each other. The residual distribution parameterizes $q\left(\mathbf{z}|\mathbf{x}\right)$ relative to $p\left(\mathbf{z}\right)$. Let $p\left(z^{i}\_{l}|\mathbf{z}\_{<l}\right) := N \left(\mu\_{i}\left(\mathbf{z}\_{<l}\right), \sigma\_{i}\left(\mathbf{z}\_{<l}\right)\right)$ be a Normal distribution for the $i$th variable in $\mathbf{z}\_{l}$ in the prior. Define $q\left(z^{i}\_{l}|\mathbf{z}\_{<l}, x\right) := N\left(\mu\_{i}\left(\mathbf{z}\_{<l}\right) + \Delta\mu\_{i}\left(\mathbf{z}\_{<l}, x\right), \sigma\_{i}\left(\mathbf{z}\_{<l}\right) \cdot \Delta\sigma\_{i}\left(\mathbf{z}\_{<l}, x\right) \right)$, where $\Delta\mu\_{i}\left(\mathbf{z}\_{<l}, \mathbf{x}\right)$ and $\Delta\sigma\_{i}\left(\mathbf{z}\_{<l}, \mathbf{x}\right)$ are the relative location and scale of the approximate posterior with respect to the prior. With this parameterization, when the prior moves, the approximate posterior moves accordingly, so long as the relative location and scale stay unchanged. |
Given the following machine learning model name: ZeRO-Offload, provide a description of the model | ZeRO-Offload is a sharded data parallel method for distributed training. It exploits both CPU memory and compute for offloading, while offering a clear path towards efficiently scaling on multiple GPUs by working with [ZeRO-powered data parallelism](https://www.paperswithcode.com/method/zero). The symbiosis allows ZeRO-Offload to maintain a single copy of the optimizer states on the CPU memory regardless of the data parallel degree. Furthermore, it keeps the aggregate communication volume between GPU and CPU, as well as the aggregate CPU computation a constant regardless of data parallelism, allowing ZeRO-Offload to effectively utilize the linear increase in CPU compute with the increase in the data parallelism degree. |
Given the following machine learning model name: Contextual Decomposition Explanation Penalization, provide a description of the model | **Contextual Decomposition Explanation Penalization (CDEP)** is a method which leverages existing explanation techniques for neural networks in order to prevent a model from learning unwanted relationships and ultimately improve predictive accuracy. Given particular importance scores, CDEP works by allowing the user to directly penalize importances of certain features, or interactions. This forces the neural network to not only produce the correct prediction, but also the correct explanation for that prediction. |
Given the following machine learning model name: Chinese Pre-trained Unbalanced Transformer, provide a description of the model | **CPT**, or **Chinese Pre-trained Unbalanced Transformer**, is a pre-trained unbalanced [Transformer](https://paperswithcode.com/method/transformer) for Chinese natural language understanding (NLU) and natural language generation (NLG) tasks. CPT consists of three parts: a shared encoder, an understanding decoder, and a generation decoder. The two task-specific decoders with a shared encoder are pre-trained with masked language modeling (MLM) and denoising auto-encoding (DAE) tasks, respectively. With the partially shared architecture and multi-task pre-training, CPT can (1) learn specific knowledge for both NLU and NLG tasks with the two decoders and (2) be fine-tuned flexibly, fully exploiting the potential of the model. |
Given the following machine learning model name: RealNVP, provide a description of the model | **RealNVP** is a generative model that utilises real-valued non-volume preserving (real NVP) transformations for density estimation. The model can perform efficient and exact inference, sampling and log-density estimation of data points. |
Given the following machine learning model name: Depthwise Convolution, provide a description of the model | **Depthwise Convolution** is a type of convolution where we apply a single convolutional filter for each input channel. In the regular 2D [convolution](https://paperswithcode.com/method/convolution) performed over multiple input channels, the filter is as deep as the input and lets us freely mix channels to generate each element in the output. In contrast, depthwise convolutions keep each channel separate. To summarize the steps, we:
1. Split the input and the filter into channels.
2. Convolve each input channel with its corresponding filter.
3. Stack the convolved outputs back together.
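These steps can be sketched with a plain NumPy loop (an illustrative toy with valid padding and stride 1, not an optimized implementation):

```python
import numpy as np

def depthwise_conv2d(x, filters):
    """x: (C, H, W) input; filters: (C, kH, kW), one filter per input channel."""
    C, H, W = x.shape
    _, kH, kW = filters.shape
    out = np.zeros((C, H - kH + 1, W - kW + 1))
    for c in range(C):  # step 1: channels are kept separate
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # step 2: each channel is convolved with its own filter
                out[c, i, j] = np.sum(x[c, i:i + kH, j:j + kW] * filters[c])
    return out  # step 3: the per-channel outputs are stacked along axis 0

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
f = np.ones((2, 2, 2))
y = depthwise_conv2d(x, f)
```

In PyTorch the same effect comes from `nn.Conv2d(C, C, k, groups=C)`: setting `groups` equal to the channel count gives exactly one filter per input channel.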
Image Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728) |
Given the following machine learning model name: Continuous Bag-of-Words Word2Vec, provide a description of the model | **Continuous Bag-of-Words Word2Vec** is an architecture for creating word embeddings that uses $n$ future words as well as $n$ past words to create a word embedding. The objective function for CBOW is:
$$ J\_\theta = \frac{1}{T}\sum^{T}\_{t=1}\log{p}\left(w\_{t}\mid{w}\_{t-n},\ldots,w\_{t-1}, w\_{t+1},\ldots,w\_{t+n}\right) $$
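A toy NumPy sketch of this forward pass (the vocabulary size, embedding dimension, and random weights here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 8                     # toy vocabulary size and embedding dimension
W_in = rng.normal(size=(V, d))   # context (input) embeddings
W_out = rng.normal(size=(V, d))  # output embeddings

def cbow_probs(context_ids):
    h = W_in[context_ids].mean(axis=0)  # average the context word embeddings
    scores = W_out @ h                  # score every word in the vocabulary
    e = np.exp(scores - scores.max())
    return e / e.sum()                  # softmax: p(w_t | context)

# context = two past and two future words around the middle position t
p = cbow_probs([1, 2, 4, 5])
```

Training maximizes $\log p\left(w\_{t}\mid\text{context}\right)$ over the corpus, updating both embedding matrices.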
In the CBOW model, the distributed representations of context are used to predict the word in the middle of the window. This contrasts with [Skip-gram Word2Vec](https://paperswithcode.com/method/skip-gram-word2vec) where the distributed representation of the input word is used to predict the context. |
Given the following machine learning model name: Dual Attention Network, provide a description of the model | In the field of scene segmentation, encoder-decoder structures cannot make use of the global relationships between objects, whereas RNN-based structures rely heavily on the output of long-term memorization. To address these problems, Fu et al. proposed a novel framework, the dual attention network (DANet), for natural scene image segmentation. Unlike CBAM and BAM, it adopts a self-attention mechanism instead of simply stacking convolutions to compute the spatial attention map, which enables the network to capture global information directly.
DANet uses in parallel a position attention module and a channel attention module to capture feature dependencies in spatial and channel domains. Given the input feature map $X$, convolution layers are applied first in the position attention module to obtain new feature maps. Then the position attention module selectively aggregates the features at each position using a weighted sum of features at all positions, where the weights are determined by feature similarity between corresponding pairs of positions. The channel attention module has a similar form except for dimensional reduction to model cross-channel relations. Finally the outputs from the two branches are fused to obtain final feature representations. For simplicity, we reshape the feature map $X$ to $C\times (H \times W)$ whereupon the overall process can be written as
\begin{align}
Q,\quad K,\quad V &= W_qX,\quad W_kX,\quad W_vX
\end{align}
\begin{align}
Y^\text{pos} &= X+ V\text{Softmax}(Q^TK)
\end{align}
\begin{align}
Y^\text{chn} &= X+ \text{Softmax}(XX^T)X
\end{align}
\begin{align}
Y &= Y^\text{pos} + Y^\text{chn}
\end{align}
where $W_q$, $W_k$, $W_v \in \mathbb{R}^{C\times C}$ are used to generate new feature maps.
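A toy NumPy rendering of the two branches above (random projection matrices; not the authors' implementation):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def danet(X, Wq, Wk, Wv):
    """X: (C, N) feature map already reshaped to C x (H*W)."""
    Q, K, V = Wq @ X, Wk @ X, Wv @ X
    Y_pos = X + V @ softmax(Q.T @ K)   # position attention over the N locations
    Y_chn = X + softmax(X @ X.T) @ X   # channel attention over the C channels
    return Y_pos + Y_chn

rng = np.random.default_rng(0)
C, N = 4, 6  # e.g. H = 2, W = 3
X = rng.normal(size=(C, N))
Y = danet(X, *(rng.normal(size=(C, C)) for _ in range(3)))
```

Note the $N \times N$ affinity matrix in the position branch, which is the source of the quadratic cost discussed below.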
The position attention module enables DANet to capture long-range contextual information and adaptively integrate similar features at any scale from a global viewpoint, while the channel attention module is responsible for enhancing useful channels as well as suppressing noise. Taking spatial and channel relationships into consideration explicitly improves the feature representation for scene segmentation.
However, it is computationally costly, especially for large input feature maps. |
Given the following machine learning model name: Semi-Pseudo-Label, provide a description of the model | |
Given the following machine learning model name: Concatenated Skip Connection, provide a description of the model | A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates. |
Given the following machine learning model name: MeshGraphNet, provide a description of the model | **MeshGraphNet** is a framework for learning mesh-based simulations using [graph neural networks](https://paperswithcode.com/methods/category/graph-models). The model can be trained to pass messages on a mesh graph and to adapt the mesh discretization during forward simulation. The model uses an Encode-Process-Decode architecture trained with one-step supervision, and can be applied iteratively to generate long trajectories at inference time. The encoder transforms the input mesh $M^{t}$ into a graph, adding extra world-space edges. The processor performs several rounds of message passing along mesh edges and world edges, updating all node and edge embeddings. The decoder extracts the acceleration for each node, which is used to update the mesh to produce $M^{t+1}$. |
Given the following machine learning model name: Pairwise Constrained KMeans, provide a description of the model | A variant of the popular k-means algorithm that integrates constraint satisfaction into its objective function.
Original paper: Active Semi-Supervision for Pairwise Constrained Clustering, Basu et al., 2004. |
Given the following machine learning model name: Graph Isomorphism Network, provide a description of the model | Per the authors, Graph Isomorphism Network (GIN) generalizes the WL test and hence achieves maximum discriminative power among GNNs. |
Given the following machine learning model name: Low-level backbone, provide a description of the model | |
Given the following machine learning model name: RevNet, provide a description of the model | A **Reversible Residual Network**, or **RevNet**, is a variant of a [ResNet](https://paperswithcode.com/method/resnet) where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. The result is a network architecture whose activation storage requirements are independent of depth, and typically at least an order of magnitude smaller compared with equally sized ResNets.
RevNets are composed of a series of reversible blocks. Units in each layer are partitioned into two groups, denoted $x\_{1}$ and $x\_{2}$; the authors find what works best is partitioning the channels. Each reversible block takes inputs $\left(x\_{1}, x\_{2}\right)$ and produces outputs $\left(y\_{1}, y\_{2}\right)$ according to the following additive coupling rules – inspired by the transformation in [NICE](https://paperswithcode.com/method/nice) (nonlinear independent components estimation) – and residual functions $F$ and $G$ analogous to those in standard ResNets:
$$y\_{1} = x\_{1} + F\left(x\_{2}\right)$$
$$y\_{2} = x\_{2} + G\left(y\_{1}\right)$$
Each layer’s activations can be reconstructed from the next layer’s activations as follows:
$$ x\_{2} = y\_{2} - G\left(y\_{1}\right)$$
$$ x\_{1} = y\_{1} - F\left(x\_{2}\right)$$
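The coupling and its exact inverse can be sketched in a few lines ($F$ and $G$ below are arbitrary toy stand-ins for the residual branches):

```python
def rev_forward(x1, x2, F, G):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    x2 = y2 - G(y1)  # undo the second coupling first
    x1 = y1 - F(x2)  # then the first
    return x1, x2

F = lambda t: 3.0 * t + 1.0  # stand-in residual functions
G = lambda t: t * t
y1, y2 = rev_forward(2.0, 5.0, F, G)
x1, x2 = rev_inverse(y1, y2, F, G)
```

Because the inverse needs no stored activations, only the block outputs, memory cost stays constant in depth.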
Note that unlike residual blocks, reversible blocks must have a stride of 1 because otherwise the layer
discards information, and therefore cannot be reversible. Standard ResNet architectures typically
have a handful of layers with a larger stride. If we define a RevNet architecture analogously, the
activations must be stored explicitly for all non-reversible layers. |
Given the following machine learning model name: Softsign Activation, provide a description of the model | **Softsign** is an activation function for neural networks:
$$ f\left(x\right) = \left(\frac{x}{|x|+1}\right)$$
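As a one-line sketch, with outputs bounded in $(-1, 1)$; unlike tanh, softsign approaches its asymptotes polynomially rather than exponentially:

```python
def softsign(x):
    # x / (|x| + 1): smooth, zero-centered, saturating toward -1 and 1
    return x / (abs(x) + 1.0)
```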
Image Source: [Sefik Ilkin Serengil](https://sefiks.com/2017/11/10/softsign-as-a-neural-networks-activation-function/) |
Given the following machine learning model name: Cyclical Learning Rate Policy, provide a description of the model | A **Cyclical Learning Rate Policy** combines a linear learning rate decay with warm restarts.
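One plausible realization of such a policy (an assumed sketch; the exact schedule, cycle length, and learning-rate bounds differ between implementations):

```python
def cyclical_lr(step, cycle_len=5, lr_max=0.1, lr_min=0.001):
    # linear decay from lr_max to lr_min within each cycle,
    # with a warm restart back to lr_max at every cycle boundary
    pos = step % cycle_len
    return lr_max - (lr_max - lr_min) * pos / (cycle_len - 1)
```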
Image: [ESPNetv2](https://paperswithcode.com/method/espnetv2) |
Given the following machine learning model name: NPID, provide a description of the model | **NPID** (Non-Parametric Instance Discrimination) is a self-supervision approach that takes a non-parametric classification approach. Noise contrastive estimation is used to learn representations. Specifically, distances (similarity) between instances are calculated directly from the features in a non-parametric way. |
Given the following machine learning model name: Amplifying Sine Unit: An Oscillatory Activation Function for Deep Neural Networks to Recover Nonlinear Oscillations Efficiently, provide a description of the model | 2023 |
Given the following machine learning model name: Quantum Process Tomography, provide a description of the model | |
Given the following machine learning model name: NAS-FPN, provide a description of the model | **NAS-FPN** is a Feature Pyramid Network that is discovered via [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search) in a novel scalable search space covering all cross-scale connections. The discovered architecture consists of a combination of top-down and bottom-up connections to fuse features across scales |
Given the following machine learning model name: Gaussian Error Linear Units, provide a description of the model | The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\Phi(x)$, where $\Phi(x)$ is the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.
$$\text{GELU}\left(x\right) = x{P}\left(X\leq{x}\right) = x\Phi\left(x\right) = x \cdot \frac{1}{2}\left[1 + \text{erf}(x/\sqrt{2})\right],$$
if $X\sim \mathcal{N}(0,1)$.
One can approximate the GELU with
$0.5x\left(1+\tanh\left[\sqrt{2/\pi}\left(x + 0.044715x^{3}\right)\right]\right)$ or $x\sigma\left(1.702x\right),$
but PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\sigma(x)$ which was also coined in the paper that introduced the GELU.)
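For reference, a standard-library sketch comparing the exact form with the tanh approximation (illustrative only):

```python
import math

def gelu(x):
    # exact form: x * Phi(x), via the error function
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # the tanh-based approximation quoted above
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
```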
GELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers. |
Given the following machine learning model name: CutMix, provide a description of the model | **CutMix** is an image data augmentation strategy. Instead of simply removing pixels as in [Cutout](https://paperswithcode.com/method/cutout), we replace the removed regions with a patch from another image. The ground truth labels are also mixed proportionally to the number of pixels of combined images. The added patches further enhance localization ability by requiring the model to identify the object from a partial view. |
Given the following machine learning model name: Denoising Autoencoder, provide a description of the model | A **Denoising Autoencoder** is a modification on the [autoencoder](https://paperswithcode.com/method/autoencoder) to prevent the network learning the identity function. Specifically, if the autoencoder is too big, then it can just learn the data, so the output equals the input, and does not perform any useful representation learning or dimensionality reduction. Denoising autoencoders solve this problem by corrupting the input data on purpose, adding noise or masking some of the input values.
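The corruption step can be as simple as masking a random fraction of the inputs (a minimal sketch; the noise type and rate are hyperparameters, and additive Gaussian noise is a common alternative):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, mask_frac=0.3):
    # masking noise: randomly zero out a fraction of the input values;
    # the autoencoder is then trained to reconstruct the clean x from this
    keep = rng.random(x.shape) >= mask_frac
    return x * keep

x = np.ones((100,))
x_noisy = corrupt(x)
```

The reconstruction loss is computed against the original, uncorrupted input, so the network cannot simply copy its input through.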
Image Credit: [Kumar et al](https://www.semanticscholar.org/paper/Static-hand-gesture-recognition-using-stacked-Kumar-Nandi/5191ddf3f0841c89ba9ee592a2f6c33e4a40d4bf) |
Given the following machine learning model name: Feedback Transformer, provide a description of the model | A **Feedback Transformer** is a type of sequential transformer that exposes all previous representations to all future representations, meaning the lowest representation of the current timestep is formed from the highest-level abstract representation of the past. This feedback nature allows this architecture to perform recursive computation, building stronger representations iteratively upon previous states. To achieve this, the self-attention mechanism of the standard [Transformer](https://paperswithcode.com/method/transformer) is modified so it attends to higher level representations rather than lower ones. |
Given the following machine learning model name: BigBiGAN, provide a description of the model | **BigBiGAN** is a type of [BiGAN](https://paperswithcode.com/method/bigan) with a [BigGAN](https://paperswithcode.com/method/biggan) image generator. The authors initially used [ResNet](https://paperswithcode.com/method/resnet) as a baseline for the encoder $\mathcal{E}$ followed by a 4-layer MLP with skip connections, but they experimented with RevNets and found they outperformed with increased network width, so opted for this type of encoder for the final architecture. |
Given the following machine learning model name: BLANC, provide a description of the model | **BLANC** is an automatic estimation approach for document summary quality. The goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. BLANC achieves this by measuring the performance boost gained by a pre-trained language model with access to a document summary while carrying out its language understanding task on the document's text. |
Given the following machine learning model name: Cascade Mask R-CNN, provide a description of the model | **Cascade Mask R-CNN** extends [Cascade R-CNN](https://paperswithcode.com/method/cascade-r-cnn) to instance segmentation by adding a mask head to the cascade.
In the [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn), the segmentation branch is inserted in parallel to the detection branch. However, the Cascade [R-CNN](https://paperswithcode.com/method/r-cnn) has multiple detection branches. This raises the questions of 1) where to add the segmentation branch and 2) how many segmentation branches to add. The authors consider three strategies for mask prediction in the Cascade R-CNN. The first two strategies address the first question, adding a single mask prediction head at either the first or last stage of the Cascade R-CNN. Since the instances used to train the segmentation branch are the positives of the detection branch, their number varies in these two strategies. Placing the segmentation head later on the cascade leads to more examples. However, because segmentation is a pixel-wise operation, a large number of highly overlapping instances is not necessarily as helpful as for object detection, which is a patch-based operation. The third strategy addresses the second question, adding a segmentation branch to each cascade stage. This maximizes the diversity of samples used to learn the mask prediction task.
At inference time, all three strategies predict the segmentation masks on the patches produced by the final object detection stage, irrespective of the cascade stage on which the segmentation mask is implemented and how many segmentation branches there are. |
Given the following machine learning model name: VATT, provide a description of the model | **Video-Audio-Text Transformer**, or **VATT**, is a framework for learning multimodal representations from unlabeled data using [convolution](https://paperswithcode.com/method/convolution)-free [Transformer](https://paperswithcode.com/method/transformer) architectures. Specifically, it takes raw signals as inputs and extracts multidimensional representations that are rich enough to benefit a variety of downstream tasks. VATT borrows the exact architecture from [BERT](https://paperswithcode.com/method/bert) and [ViT](https://paperswithcode.com/method/vision-transformer) except the layer of tokenization and linear projection reserved for each modality separately. The design follows the same spirit as ViT that makes the minimal changes to the architecture so that the learned model can transfer its weights to various frameworks and tasks.
VATT linearly projects each modality into a feature vector and feeds it into a Transformer encoder. A semantically hierarchical common space is defined to account for the granularity of different modalities and noise contrastive estimation is employed to train the model. |
Given the following machine learning model name: SRGAN, provide a description of the model | **SRGAN** is a generative adversarial network for single image super-resolution. It uses a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes the solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, the authors use a content loss motivated by perceptual similarity instead of similarity in pixel space. The actual networks - depicted in the Figure to the right - consist mainly of residual blocks for feature extraction.
Formally we write the perceptual loss function as a weighted sum of a ([VGG](https://paperswithcode.com/method/vgg)) content loss $l^{SR}\_{X}$ and an adversarial loss component $l^{SR}\_{Gen}$:
$$ l^{SR} = l^{SR}\_{X} + 10^{-3}l^{SR}\_{Gen} $$ |
Given the following machine learning model name: HRank, provide a description of the model | **HRank** is a filter pruning method that explores the High Rank of the feature map in each layer (HRank). The proposed HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, the authors develop a method that is mathematically formulated to prune filters with low-rank feature maps. |
Given the following machine learning model name: Global Coupled Adaptive Number of Shots, provide a description of the model | **gCANS**, or **Global Coupled Adaptive Number of Shots**, is a variational quantum algorithm for stochastic gradient descent. It adaptively allocates shots for the measurement of each gradient component at each iteration. The optimizer uses a criterion for allocating shots that incorporates information about the overall scale of the shot cost for the iteration. |
Given the following machine learning model name: Differential attention for visual question answering, provide a description of the model | In this paper we aim to answer questions based on images when provided with a dataset of question-answer pairs for a number of images during training. A number of methods have focused on solving this problem by using image based attention. This is done by focusing on a specific part of the image while answering the question. Humans also do so when solving this problem. However, the regions that the previous systems focus on are not correlated with the regions that humans focus on. The accuracy is limited due to this drawback. In this paper, we propose to solve this problem by using an exemplar based method. We obtain one or more supporting and opposing exemplars to obtain a differential attention region. This differential attention is closer to human attention than other image based attention methods. It also helps in obtaining improved accuracy when answering questions. The method is evaluated on challenging benchmark datasets. We perform better than other image based attention methods and are competitive with other state of the art methods that focus on both image and questions. |
Given the following machine learning model name: Subformer, provide a description of the model | **Subformer** is a [Transformer](https://paperswithcode.com/method/transformer) that combines sandwich-style parameter sharing, which overcomes naive cross-layer parameter sharing in generative models, and self-attentive embedding factorization (SAFE). In SAFE, a small self-attention layer is used to reduce embedding parameter count. |
Given the following machine learning model name: Byte Pair Encoding, provide a description of the model | **Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).
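The core of the algorithm is repeatedly merging the most frequent adjacent symbol pair. A minimal sketch of one merge step on the classic toy vocabulary (the `replace`-based merge is a simplification of real implementations):

```python
from collections import Counter

def pair_counts(vocab):
    # vocab maps a space-separated symbol sequence to its corpus frequency
    counts = Counter()
    for word, freq in vocab.items():
        syms = word.split()
        for pair in zip(syms, syms[1:]):
            counts[pair] += freq
    return counts

def merge(pair, vocab):
    # replace every occurrence of the chosen pair with one merged symbol
    a, b = pair
    return {w.replace(f"{a} {b}", a + b): f for w, f in vocab.items()}

vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
counts = pair_counts(vocab)
best = max(counts, key=counts.get)  # most frequent adjacent pair
vocab = merge(best, vocab)
```

Each learned merge becomes part of the subword vocabulary; rare words are then segmented by applying the merges in order.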
[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works. |
Given the following machine learning model name: Kollen-Pollack Learning, provide a description of the model | |
Given the following machine learning model name: Track objects as points, provide a description of the model | Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. |
Given the following machine learning model name: TURL: Table Understanding through Representation Learning, provide a description of the model | Relational tables on the Web store a vast amount of knowledge. Owing to the wealth of such tables, there has been tremendous progress on a variety of tasks in the area of table understanding. However, existing work generally relies on heavily-engineered task-specific features and model architectures. In this paper, we present TURL, a novel framework that introduces the pre-training/fine-tuning paradigm to relational Web tables. During pre-training, our framework learns deep contextualized representations on relational tables in an unsupervised manner. Its universal model design with pre-trained representations can be applied to a wide range of tasks with minimal task-specific fine-tuning.
Specifically, we propose a structure-aware Transformer encoder to model the row-column structure of relational tables, and present a new Masked Entity Recovery (MER) objective for pre-training to capture the semantics and knowledge in large-scale unlabeled data. We systematically evaluate TURL with a benchmark consisting of 6 different tasks for table understanding (e.g., relation extraction, cell filling). We show that TURL generalizes well to all tasks and substantially outperforms existing methods in almost all instances. |
Given the following machine learning model name: MODNet, provide a description of the model | **MODNet** is a light-weight matting objective decomposition network that can process portrait matting from a single input image in real time. The design of MODNet benefits from optimizing a series of correlated sub-objectives simultaneously via explicit constraints. To overcome the domain shift problem, MODNet introduces a self-supervised strategy based on subobjective consistency (SOC) and a one-frame delay trick to smooth the results when applying MODNet to portrait video sequence.
Given an input image $I$, MODNet predicts human semantics $s\_{p}$, boundary details $d\_{p}$, and final alpha matte $\alpha\_{p}$ through three interdependent branches, $S, D$, and $F$, which are constrained by specific supervisions generated from the ground truth matte $\alpha\_{g}$. Since the decomposed sub-objectives are correlated and help strengthen each other, we can optimize MODNet end-to-end. |
Given the following machine learning model name: TopK Copy, provide a description of the model | **TopK Copy** is a cross-attention guided copy mechanism for entity extraction where only the Top-$k$ important attention heads are used for computing copy distributions. The motivation is that that attention heads may not equally important, and that some heads can be pruned out with a marginal decrease in overall performance. Attention probabilities produced by insignificant attention heads may be noisy. Thus, computing copy distributions without these heads could improve the model’s ability to infer the importance of each token in the input document. |
Given the following machine learning model name: Streaming Module, provide a description of the model | |
Given the following machine learning model name: Instruction Pointer Attention Graph Neural Network, provide a description of the model | **Instruction Pointer Attention Graph Neural Network**, or **IPA-GNN**, is a learning-interpreter neural network (LNN) based on GNNs for learning to execute programmes. It achieves improved systematic generalization on the task of learning to execute programs using control flow graphs. The model arises by considering RNNs operating on program traces with branch decisions as latent variables. The IPA-GNN can be seen either as a continuous relaxation of the RNN model or as a GNN variant more tailored to execution. |
Given the following machine learning model name: Composed Video Retrieval, provide a description of the model | The composed video retrieval (CoVR) task is a new task, where the goal is to find a video that matches both a query image and a query text. The query image represents a visual concept that the user is interested in, and the query text specifies how the concept should be modified or refined. For example, given an image of a fountain and the text _during show at night_, the CoVR task is to retrieve a video that shows the fountain at night with a show. |
Given the following machine learning model name: Rank-based loss, provide a description of the model | |
Given the following machine learning model name: Normalized Linear Combination of Activations, provide a description of the model | The **Normalized Linear Combination of Activations**, or **NormLinComb**, is a type of activation function that has trainable parameters and uses the normalized linear combination of other activation functions.
$$\text{NormLinComb}(x) = \frac{\sum\limits_{i=0}^{n} w_i \mathcal{F}_i(x)}{\lVert W \rVert}$$ |
Given the following machine learning model name: Conditional Positional Encoding, provide a description of the model | **Conditional Positional Encoding**, or **CPE**, is a type of positional encoding for [vision transformers](https://paperswithcode.com/methods/category/vision-transformer). Unlike previous fixed or learnable positional encodings, which are predefined and independent of input tokens, CPE is dynamically generated and conditioned on the local neighborhood of the input tokens. As a result, CPE can generalize to input sequences longer than those the model saw during training, and it retains the desired translation invariance in image classification. CPE can be implemented with a [Position Encoding Generator](https://paperswithcode.com/method/positional-encoding-generator) (PEG) and incorporated into the current [Transformer framework](https://paperswithcode.com/methods/category/transformers). |
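A rough sketch of the PEG idea described above, assuming the common depthwise-convolution instantiation: reshape the token sequence back to its 2-D grid, apply a depthwise 3x3 convolution with zero padding, and add the result to the tokens as the conditional positional encoding. The function name, shapes, and kernels are illustrative, not the paper's exact implementation:

```python
import numpy as np

def peg(tokens, h, w, kernels):
    """Sketch of a Position Encoding Generator (PEG): reshape the (n, C) token
    sequence to its h x w grid, apply a depthwise 3x3 convolution with zero
    padding, and add the result back as a conditional positional encoding.
    `kernels` has shape (C, 3, 3); all names and shapes here are illustrative."""
    n, c = tokens.shape
    assert n == h * w and kernels.shape == (c, 3, 3)
    grid = tokens.reshape(h, w, c)
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))
    pe = np.zeros_like(grid)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3, :]        # (3, 3, C) neighborhood
            pe[i, j] = np.einsum('ijc,cij->c', patch, kernels)
    return tokens + pe.reshape(n, c)
```

Because the encoding is computed from the token grid itself, it adapts to any input length, which is the property the description emphasizes.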
Given the following machine learning model name: Adaptive Bezier-Curve Network, provide a description of the model | **Adaptive Bezier-Curve Network**, or **ABCNet**, is an end-to-end framework for arbitrarily-shaped scene text spotting. It adaptively fits arbitrarily-shaped text with a parameterized Bezier curve. It also utilizes a feature alignment layer, [BezierAlign](https://paperswithcode.com/method/bezieralign), to calculate convolutional features of text instances in curved shapes. These features are then passed to a lightweight recognition head. |
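The Bezier parameterization mentioned above can be sketched as follows: a cubic Bezier curve defined by four 2-D control points, sampled at parameters in [0, 1]. The control points here are illustrative, not ABCNet's learned outputs:

```python
import numpy as np

def cubic_bezier(control, ts):
    """Sketch of the cubic Bezier parameterization used for each text boundary:
    four 2-D control points p0..p3, evaluated at parameters ts in [0, 1]
    via the Bernstein-polynomial form of the curve."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in control)
    t = np.asarray(ts, dtype=float)[:, None]
    return ((1 - t) ** 3) * p0 + 3 * ((1 - t) ** 2) * t * p1 \
        + 3 * (1 - t) * (t ** 2) * p2 + (t ** 3) * p3

# Sample five points along an arch-shaped curve (illustrative control points).
pts = cubic_bezier([(0, 0), (1, 2), (2, 2), (3, 0)], np.linspace(0, 1, 5))
```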
Given the following machine learning model name: Continuous Kernel Convolution, provide a description of the model | |
Given the following machine learning model name: OSCAR, provide a description of the model | **OSCAR** is a learning method that uses object tags detected in images as anchor points to ease the learning of image-text alignment. The model takes a (word, tag, region) triple as input and is pre-trained with two losses: a masked token loss over words and tags, and a contrastive loss between tags and the other inputs. OSCAR represents an image-text pair in semantic space via dictionary lookup. Object tags are used as anchor points to align image regions with word embeddings of pre-trained language models. The model is then fine-tuned for understanding and generation tasks. |
Given the following machine learning model name: Vision-and-Language BERT, provide a description of the model | **Vision-and-Language BERT** (**ViLBERT**) is a [BERT](https://paperswithcode.com/method/bert)-based model for learning task-agnostic joint representations of image content and natural language. ViLBERT extends the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional [transformer](https://paperswithcode.com/method/transformer) layers. |
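The co-attentional interaction described above can be sketched in a few lines: each stream forms the queries while the other stream supplies the keys and values. The learned Q/K/V projections and multi-head structure of the actual model are omitted here for brevity:

```python
import numpy as np

def attend(q, kv):
    """Scaled dot-product attention with keys == values == kv. In the real
    model, queries, keys and values pass through learned linear projections."""
    logits = q @ kv.T / np.sqrt(q.shape[-1])
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ kv

def co_attention(x_vis, x_txt):
    """Sketch of one co-attentional step: the visual stream attends over the
    text tokens and the text stream attends over the visual tokens."""
    return attend(x_vis, x_txt), attend(x_txt, x_vis)
```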
Given the following machine learning model name: PnP, provide a description of the model | **PnP**, or **Poll and Pool**, is a sampling module extension for [DETR](https://paperswithcode.com/method/detr)-type architectures that adaptively allocates its computation spatially to be more efficient. Concretely, the PnP module abstracts the image feature map into fine foreground object feature vectors and a small number of coarse background contextual feature vectors. The [transformer](https://paperswithcode.com/method/transformer) models information interaction within the fine-coarse feature space and translates the features into the detection result. |
Given the following machine learning model name: Branch attention, provide a description of the model | Branch attention can be seen as a dynamic branch selection mechanism that decides which branches to attend to; it is used with multi-branch structures. |
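A minimal sketch of the branch selection idea above: a softmax gate weights the outputs of parallel branches. In practice (e.g. SKNet-style selective kernels) the gate logits come from a small learned subnetwork; here they are supplied directly for illustration:

```python
import numpy as np

def branch_attention(branches, gate_logits):
    """Sketch of branch attention: softmax over gate logits produces one
    weight per branch, and the output is the weighted sum of branch outputs.
    The gate logits would be learned in a real multi-branch network."""
    g = np.asarray(gate_logits, dtype=float)
    w = np.exp(g - g.max())
    w /= w.sum()
    return sum(wi * b for wi, b in zip(w, branches))

x = np.array([1.0, -2.0, 3.0])
out = branch_attention([x, 10 * x], [0.0, 0.0])  # equal gates average the branches
```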
Given the following machine learning model name: Matrix-power Normalization, provide a description of the model | |