| prompts | description |
|---|---|
Given the following machine learning model name: A Framework for Leader Identification in Coordinated Activity, provide a description of the model | An agreement of a group to follow a common purpose is manifested by its coalescence into a coordinated behavior. The process of initiating this behavior and the period of decision-making by the group members necessarily precedes the coordinated behavior. Given time series of group members’ behavior, the goal is to find these periods of decision-making and identify the initiating individual, if one exists.
Image Source: [Amornbunchornvej et al.](https://arxiv.org/pdf/1603.01570v2.pdf) |
Given the following machine learning model name: Area Under the ROC Curve for Clustering, provide a description of the model | The area under the receiver operating characteristics (ROC) Curve, referred to as AUC, is a well-known performance measure in the supervised learning domain. Due to its compelling features, it has been employed in a number of studies to evaluate and compare the performance of different classifiers. In this work, we explore AUC as a performance measure in the unsupervised learning domain, more specifically, in the context of cluster analysis. In particular, we elaborate on the use of AUC as an internal/relative measure of clustering quality, which we refer to as Area Under the Curve for Clustering (AUCC). We show that the AUCC of a given candidate clustering solution has an expected value under a null model of random clustering solutions, regardless of the size of the dataset and, more importantly, regardless of the number or the (im)balance of clusters under evaluation. In addition, we elaborate on the fact that, in the context of internal/relative clustering validation as we consider, AUCC is actually a linear transformation of the Gamma criterion from Baker and Hubert (1975), for which we also formally derive a theoretical expected value for chance clusterings. We also discuss the computational complexity of these criteria and show that, while an ordinary implementation of Gamma can be computationally prohibitive and impractical for most real applications of cluster analysis, its equivalence with AUCC actually unveils a much more efficient algorithmic procedure. Our theoretical findings are supported by experimental results. These results show that, in addition to an effective and robust quantitative evaluation provided by AUCC, visual inspection of the ROC curves themselves can be useful to further assess a candidate clustering solution from a broader, qualitative perspective as well. |
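The equivalence with ROC analysis makes AUCC straightforward to compute: label each pair of points positive if both fall in the same cluster, score each pair with a similarity, and take the ordinary AUC over all pairs. A minimal pure-Python sketch (using negative Euclidean distance as the pairwise similarity, which is an illustrative choice):

```python
import itertools
import math

def aucc(points, labels):
    """Area Under the ROC Curve for Clustering (AUCC): pairwise
    'same cluster' indicators scored against pairwise similarities
    (here, negative Euclidean distance)."""
    pairs = list(itertools.combinations(range(len(points)), 2))
    scores, truths = [], []
    for i, j in pairs:
        scores.append(-math.dist(points[i], points[j]))  # higher = more similar
        truths.append(1 if labels[i] == labels[j] else 0)
    # AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    # score pairs ranked correctly, counting ties as 1/2.
    pos = [s for s, t in zip(scores, truths) if t == 1]
    neg = [s for s, t in zip(scores, truths) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# two well-separated clusters: AUCC should be 1.0
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(aucc(pts, [0, 0, 1, 1]))  # -> 1.0
```

This counting formulation is the efficient route the paper points to: it avoids materializing the full ROC curve while giving the same value.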
Given the following machine learning model name: Siamese Multi-depth Transformer-based Hierarchical Encoder, provide a description of the model | **SMITH**, or **Siamese Multi-depth Transformer-based Hierarchical Encoder**, is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model for document representation learning and matching. It contains several design choices to adapt [self-attention models](https://paperswithcode.com/methods/category/attention-modules) for long text inputs. For the model pre-training, a masked sentence block language modeling task is used in addition to the original masked word language model task used in [BERT](https://paperswithcode.com/method/bert), to capture sentence block relations within a document. Given a sequence of sentence block representations, the document-level Transformers learn the contextual representation for each sentence block and the final document representation. |
Given the following machine learning model name: DIoU-NMS, provide a description of the model | **DIoU-NMS** is a type of non-maximum suppression that uses Distance IoU (DIoU) rather than regular IoU, so that the overlap area and the distance between the central points of two bounding boxes are simultaneously considered when suppressing redundant boxes.
In original NMS, the IoU metric is used to suppress redundant detection boxes, with the overlap area as the only factor, which often yields false suppression in cases with occlusion. With DIoU-NMS, we consider not only the overlap area but also the central point distance between two boxes. |
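A minimal sketch of the suppression rule (the `(x1, y1, x2, y2)` box layout and the greedy loop below are illustrative assumptions):

```python
def diou(a, b):
    """Distance-IoU between boxes (x1, y1, x2, y2): IoU minus the squared
    centre distance normalised by the squared enclosing-box diagonal."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    # squared distance between the two centre points
    d2 = ((a[0] + a[2]) / 2 - (b[0] + b[2]) / 2) ** 2 \
       + ((a[1] + a[3]) / 2 - (b[1] + b[3]) / 2) ** 2
    # squared diagonal of the smallest box enclosing both
    c2 = (max(a[2], b[2]) - min(a[0], b[0])) ** 2 \
       + (max(a[3], b[3]) - min(a[1], b[1])) ** 2
    return iou - d2 / c2

def diou_nms(boxes, scores, threshold=0.5):
    """Greedy NMS that suppresses a box when its DIoU with an
    already-kept, higher-scoring box reaches the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(diou(boxes[i], boxes[j]) < threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(diou_nms(boxes, [0.9, 0.8, 0.7]))  # -> [0, 2]
```

The distant third box survives because its DIoU with the kept box is low even though greedy IoU-only NMS would also keep it; the benefit shows up for heavily overlapping boxes whose centres are far apart.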
Given the following machine learning model name: Dual Softmax Loss, provide a description of the model | **Dual Softmax Loss** is a loss function based on symmetric cross-entropy loss used in the [CAMoE](https://paperswithcode.com/method/camoe) video-text retrieval model. The similarity of every text and video is calculated against all other videos or texts, and it should be maximal for the ground-truth pair. For DSL, a prior is introduced to revise the similarity score. Multiplying the prior with the original similarity matrix imposes an efficient constraint and helps to filter out single-side match pairs. As a result, DSL highlights pairs with both a high Text-to-Video and a high Video-to-Text probability, producing a more convincing result. |
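As a sketch, the prior can be taken as the softmax of the similarity matrix along the opposite retrieval direction; multiplying it in before the usual symmetric cross-entropy yields the revised score. The toy pure-Python version below simplifies the exact prior formulation from the CAMoE paper:

```python
import math

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def dual_softmax_loss(sim, temp=1.0):
    """DSL sketch on an n x n text-video similarity matrix whose diagonal
    holds the ground-truth pairs: a column-wise softmax prior rescales each
    entry, then symmetric cross-entropy is applied to the revised matrix."""
    n = len(sim)
    # prior: softmax over the video-to-text direction (columns)
    cols = [softmax([sim[i][j] / temp for i in range(n)]) for j in range(n)]
    revised = [[sim[i][j] * cols[j][i] for j in range(n)] for i in range(n)]
    # symmetric cross-entropy over the revised matrix
    t2v = [softmax([revised[i][j] / temp for j in range(n)]) for i in range(n)]
    v2t = [softmax([revised[i][j] / temp for i in range(n)]) for j in range(n)]
    return -sum(math.log(t2v[i][i]) + math.log(v2t[i][i]) for i in range(n)) / (2 * n)
```

A matrix whose diagonal dominates in both directions yields a lower loss than one where only a single side matches, which is exactly the behaviour DSL rewards.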
Given the following machine learning model name: ComplEx with N3 Regularizer, provide a description of the model | The ComplEx model trained with a nuclear 3-norm (N3) regularizer. |
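As a sketch, the N3 penalty sums the cubed absolute values of the embedding entries of each (subject, relation, object) factor. The toy below uses real-valued factors and a hypothetical `weight` hyperparameter; for ComplEx proper, the penalty is applied to the moduli of the complex embeddings:

```python
def n3_regularizer(factors, weight=1e-2):
    """Nuclear 3-norm (N3) regulariser sketch: the weighted sum of cubed
    absolute values over all entries of the (head, relation, tail)
    embedding factors, added to the ComplEx training loss."""
    return weight * sum(abs(x) ** 3 for factor in factors for x in factor)

# toy (head, relation, tail) factors: 1 + 8 + 0.125 + 1 = 10.125
print(n3_regularizer([[1.0, -2.0], [0.5], [1.0]], weight=1.0))  # -> 10.125
```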
Given the following machine learning model name: Graph Recurrent Imputation Network, provide a description of the model | |
Given the following machine learning model name: Local Relation Layer, provide a description of the model | A **Local Relation Layer** is an image feature extractor that is an alternative to a [convolution](https://paperswithcode.com/method/convolution) operator. The intuition is that aggregation in convolution is basically a pattern matching process that applies fixed filters, which can be inefficient at modeling visual elements with varying spatial distributions. The local relation layer adaptively determines aggregation weights based on the compositional relationship of local pixel pairs. It is argued that, with this relational approach, it can composite visual elements into higher-level entities in a more efficient manner that benefits semantic inference. |
Given the following machine learning model name: XGrad-CAM, provide a description of the model | **XGrad-CAM**, or **Axiom-based Grad-CAM**, is a class-discriminative visualization method and able to highlight the regions belonging to the objects of interest. Two axiomatic properties are introduced in the derivation of XGrad-CAM: Sensitivity and Conservation. In particular, the proposed XGrad-CAM is still a linear combination of feature maps, but able to meet the constraints of those two axioms. |
Given the following machine learning model name: AutoTinyBERT, provide a description of the model | **AutoTinyBERT** is an efficient [BERT](https://paperswithcode.com/method/bert) variant found through neural architecture search. Specifically, one-shot learning is used to obtain a big Super Pretrained Language Model (SuperPLM), where the objectives of pre-training or task-agnostic BERT distillation are used. Then, given a specific latency constraint, an evolutionary algorithm is run on the SuperPLM to search for optimal architectures. Finally, the corresponding sub-models are extracted based on the optimal architectures and further trained. |
Given the following machine learning model name: Hunger Games Search, provide a description of the model | **Hunger Games Search (HGS)** is a general-purpose population-based optimization technique with a simple structure, special stability features and very competitive performance for solving both constrained and unconstrained problems. HGS is designed according to the hunger-driven activities and behavioural choices of animals. This dynamic, fitness-wise search method follows the simple concept of "hunger" as the most crucial homeostatic motivation behind the behaviours, decisions, and actions of animals, making the optimization process more understandable and consistent for new users and decision-makers. HGS incorporates the concept of hunger into the search process: an adaptive weight based on hunger is designed and employed to simulate the effect of hunger on each search step. It follows the computationally logical rules (games) utilized by almost all animals; these rival activities and games are adaptive and evolutionary, securing higher chances of survival and food acquisition. The method's main features are its dynamic nature, simple structure, and high performance in terms of convergence and acceptable quality of solutions.
Implementation of the HGS algorithm is available at [https://aliasgharheidari.com/HGS.html](https://aliasgharheidari.com/HGS.html). |
Given the following machine learning model name: Sequence to Sequence, provide a description of the model | **Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector representation (a context vector), and then to use another LSTM, the *decoder*, to extract the output sequence
from that vector. The second LSTM is essentially a recurrent neural network language model except that it is conditioned on the input sequence.
(Note that this page refers to the original seq2seq not general sequence-to-sequence models) |
Given the following machine learning model name: Bort, provide a description of the model | **Bort** is a parametric architectural variant of the [BERT](https://paperswithcode.com/method/bert) architecture. It extracts an optimal subset of architectural parameters for the BERT architecture through a [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) approach; in particular, a fully polynomial-time approximation scheme (FPTAS). This optimal subset - “Bort” - is demonstrably smaller, having an effective size of $5.5 \%$ the original BERT-large architecture, and $16\%$ of the net size. Bort is also able to be pretrained in $288$ GPU hours, which is $1.2\%$ of the time required to pretrain the highest-performing BERT parametric architectural variant, [RoBERTa](https://paperswithcode.com/method/roberta)-large, and about $33\%$ of that required, in GPU hours, to train BERT-large on the same hardware. |
Given the following machine learning model name: Electric, provide a description of the model | **Electric** is an energy-based cloze model for representation learning over text. Like BERT, it is a conditional generative model of tokens given their contexts. However, Electric does not use masking or output a full distribution over tokens that could occur in a context. Instead, it assigns a scalar energy score to each input token indicating how likely it is given its context.
Specifically, like BERT, Electric also models $p\_{\text {data }}\left(x\_{t} \mid \mathbf{x}\_{\backslash t}\right)$, but does not use masking or a softmax layer. Electric first maps the unmasked input $\mathbf{x}=\left[x\_{1}, \ldots, x\_{n}\right]$ into contextualized vector representations $\mathbf{h}(\mathbf{x})=\left[\mathbf{h}\_{1}, \ldots, \mathbf{h}\_{n}\right]$ using a transformer network. The model assigns a given position $t$ an energy score
$$
E(\mathbf{x})\_{t}=\mathbf{w}^{T} \mathbf{h}(\mathbf{x})\_{t}
$$
using a learned weight vector $\mathbf{w}$. The energy function defines a distribution over the possible tokens at position $t$ as
$$
p\_{\theta}\left(x\_{t} \mid \mathbf{x}\_{\backslash t}\right)=\exp \left(-E(\mathbf{x})\_{t}\right) / Z\left(\mathbf{x}\_{\backslash t}\right)
$$
$$
=\frac{\exp \left(-E(\mathbf{x})\_{t}\right)}{\sum\_{x^{\prime} \in \mathcal{V}} \exp \left(-E\left(\operatorname{REPLACE}\left(\mathbf{x}, t, x^{\prime}\right)\right)\_{t}\right)}
$$
where $\text{REPLACE}\left(\mathbf{x}, t, x^{\prime}\right)$ denotes replacing the token at position $t$ with $x^{\prime}$ and $\mathcal{V}$ is the vocabulary, in practice usually word pieces. Unlike BERT, which produces the probabilities for all possible tokens $x^{\prime}$ using a softmax layer, Electric passes each candidate $x^{\prime}$ in as input to the transformer. As a result, computing $p\_{\theta}$ is prohibitively expensive because the partition function $Z\_{\theta}\left(\mathbf{x}\_{\backslash t}\right)$ requires running the transformer $|\mathcal{V}|$ times; unlike most EBMs, the intractability of $Z\_{\theta}\left(\mathbf{x}\_{\backslash t}\right)$ is due more to the expensive scoring function than to a large sample space. |
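The $|\mathcal{V}|$ forward passes can be made concrete with a toy scorer standing in for $\mathbf{w}^{T} \mathbf{h}(\mathbf{x})\_{t}$; the `energy` callable below is an illustrative stand-in, not the trained transformer:

```python
import math

def electric_prob(tokens, t, vocab, energy):
    """Sketch of Electric's scoring: the probability of the token at
    position t is a softmax over negative energies, obtained by re-running
    the scorer once per candidate replacement (the |V| passes that make
    the partition function expensive)."""
    def neg_e(candidate):
        replaced = tokens[:t] + [candidate] + tokens[t + 1:]
        return -energy(replaced, t)
    scores = {v: neg_e(v) for v in vocab}
    m = max(scores.values())
    z = sum(math.exp(s - m) for s in scores.values())
    return {v: math.exp(s - m) / z for v, s in scores.items()}

# toy energy: a token is 'likely' (low energy) when it repeats its left neighbour
tokens = ["the", "the", "cat"]
energy = lambda toks, t: 0.0 if toks[t] == toks[t - 1] else 2.0
p = electric_prob(tokens, 1, ["the", "cat"], energy)
```

In the toy, `p["the"]` exceeds `p["cat"]` because the repeated token receives the lower energy, mirroring how the real model favours contextually plausible tokens.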
Given the following machine learning model name: OPT-IML, provide a description of the model | **OPT-IML** is a version of OPT fine-tuned on a large collection of 1500+ NLP tasks divided into various task categories. |
Given the following machine learning model name: Low-resolution input, provide a description of the model | |
Given the following machine learning model name: Partition Filter Network, provide a description of the model | **Partition Filter Network** is a framework designed specifically for joint entity and relation extraction. The framework consists of three components: a partition filter encoder, an NER unit and an RE unit. In the task units, table-filling is used for word-pair prediction. The partition filter encoder decomposes feature encoding into two steps at each time step: partition and filter. In partition, neurons are first segmented into two task partitions and one shared partition. Then, in filter, partitions are selected and combined to form task-specific features and shared features, filtering out information irrelevant to each task. |
Given the following machine learning model name: Constrained Pairwise k-Means, provide a description of the model | COP-KMeans is a modified version of the popular k-means algorithm that supports pairwise constraints.
Original paper : Constrained K-means Clustering with Background Knowledge, Wagstaff et al. 2001 |
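A minimal sketch of the COP-KMeans loop from that paper, assuming a simple greedy assignment order and returning `None` when no constraint-respecting assignment exists:

```python
import math
import random

def violates(i, c, assign, must_link, cannot_link):
    """Would assigning point i to cluster c break a pairwise constraint?"""
    for a, b in must_link:
        other = b if a == i else (a if b == i else None)
        if other is not None and assign[other] is not None and assign[other] != c:
            return True
    for a, b in cannot_link:
        other = b if a == i else (a if b == i else None)
        if other is not None and assign[other] == c:
            return True
    return False

def cop_kmeans(points, k, must_link=(), cannot_link=(), iters=20, seed=0):
    """COP-KMeans sketch: standard k-means, except each point takes its
    nearest centroid that violates no constraint; if every centroid
    violates one, the run fails."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [None] * len(points)
    for _ in range(iters):
        assign = [None] * len(points)
        for i, p in enumerate(points):
            choice = None
            for c in sorted(range(k), key=lambda c: math.dist(p, centroids[c])):
                if not violates(i, c, assign, must_link, cannot_link):
                    choice = c
                    break
            if choice is None:
                return None  # no legal assignment exists
            assign[i] = choice
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return assign
```

A cannot-link constraint between two nearby points forces them into different clusters even when plain k-means would merge them, which is the algorithm's defining behaviour.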
Given the following machine learning model name: Grab, provide a description of the model | **Grab** is a sensor processing system for cashier-free shopping. Grab needs to accurately identify and track customers, and associate each shopper with items he or she retrieves from shelves. To do this, it uses a keypoint-based pose tracker as a building block for identification and tracking, develops robust feature-based face trackers, and algorithms for associating and tracking arm movements. It also uses a probabilistic framework to fuse readings from camera, weight and RFID sensors in order to accurately assess which shopper picks up which item. |
Given the following machine learning model name: DetNASNet, provide a description of the model | **DetNASNet** is a convolutional neural network designed to be an object detection backbone and discovered through [DetNAS](https://paperswithcode.com/method/detnas) architecture search. It uses [ShuffleNet V2](https://paperswithcode.com/method/shufflenet-v2) blocks as its basic building block. |
Given the following machine learning model name: Extended Transformer Construction, provide a description of the model | **Extended Transformer Construction**, or **ETC**, is an extension of the [Transformer](https://paperswithcode.com/method/transformer) architecture with a new attention mechanism that extends the original in two main ways: (1) it allows scaling up the input length from 512 to several thousands; and (2) it can ingest structured inputs instead of just linear sequences. The key ideas that enable ETC to achieve these are a new [global-local attention mechanism](https://paperswithcode.com/method/global-local-attention), coupled with [relative position encodings](https://paperswithcode.com/method/relative-position-encodings). ETC also allows lifting weights from existing [BERT](https://paperswithcode.com/method/bert) models, saving computational resources while training. |
Given the following machine learning model name: Content-based Attention, provide a description of the model | **Content-based attention** is an attention mechanism based on cosine similarity:
$$f\_{att}\left(\textbf{h}\_{i}, \textbf{s}\_{j}\right) = \cos\left[\textbf{h}\_{i};\textbf{s}\_{j}\right] $$
It was utilised in [Neural Turing Machines](https://paperswithcode.com/method/neural-turing-machine) as part of the Addressing Mechanism.
We produce a normalized attention weighting by taking a [softmax](https://paperswithcode.com/method/softmax) over these attention alignment scores. |
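Putting the two steps together (cosine alignment scores, then a softmax), a minimal pure-Python sketch:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def content_attention(query, keys):
    """Content-based attention sketch: cosine similarity between the
    query and every key, normalised into weights with a softmax."""
    scores = [cosine(query, k) for k in keys]
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    return [e / z for e in exp]

# the key aligned with the query receives the larger weight
weights = content_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

In the Neural Turing Machine, the same weighting (sharpened by a key-strength parameter, omitted here) addresses memory rows by content.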
Given the following machine learning model name: SpreadsheetCoder, provide a description of the model | **SpreadsheetCoder** is a neural network architecture for spreadsheet formula prediction. It is a [BERT](https://paperswithcode.com/method/bert)-based model architecture to represent the tabular context in both row-based and column-based formats. A [BERT](https://paperswithcode.com/method/bert) encoder computes an embedding vector for each input token, incorporating the contextual information from nearby rows and columns. The BERT encoder is initialized from the weights pre-trained on English text corpora, which is beneficial for encoding table headers. To handle cell references, a two-stage decoding process is used, inspired by sketch learning for program synthesis. The decoder first generates a formula sketch, which does not include concrete cell references, and then predicts the corresponding cell ranges to generate the complete formula. |
Given the following machine learning model name: R(2+1)D, provide a description of the model | A **R(2+1)D** convolutional neural network is a network for action recognition that employs [R(2+1)D](https://paperswithcode.com/method/2-1-d-convolution) convolutions in a [ResNet](https://paperswithcode.com/method/resnet) inspired architecture. The use of these convolutions over regular [3D Convolutions](https://paperswithcode.com/method/3d-convolution) reduces computational complexity, prevents overfitting, and introduces more non-linearities that allow for a better functional relationship to be modeled. |
Given the following machine learning model name: Weight Demodulation, provide a description of the model | **Weight Demodulation** is an alternative to [adaptive instance normalization](https://paperswithcode.com/method/adaptive-instance-normalization) for use in generative adversarial networks; specifically, it is introduced in [StyleGAN2](https://paperswithcode.com/method/stylegan2). The purpose of [instance normalization](https://paperswithcode.com/method/instance-normalization) is to remove the effect of $s$ - the scales of the feature maps - from the statistics of the [convolution](https://paperswithcode.com/method/convolution)’s output feature maps. Weight demodulation tries to achieve this goal more directly. Assume that the input activations are i.i.d. random variables with unit standard deviation. After modulation and convolution, the output activations have a standard deviation of:
$$ \sigma\_{j} = \sqrt{{\sum\_{i,k}w\_{ijk}'}^{2}} $$
i.e., the outputs are scaled by the $L\_{2}$ norm of the corresponding weights. The subsequent normalization aims to restore the outputs back to unit standard deviation. This can be achieved if we scale (“demodulate”) each output feature map $j$ by $1/\sigma\_{j}$ . Alternatively, we can again bake this into the convolution weights:
$$ w''\_{ijk} = w'\_{ijk} / \sqrt{{\sum\_{i, k}w'\_{ijk}}^{2} + \epsilon} $$
where $\epsilon$ is a small constant to avoid numerical issues. |
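The demodulation step is a per-output-channel rescaling, sketched here on plain nested lists indexed as `w[j][i][k]` (output channel, input channel, spatial tap):

```python
import math

def demodulate(weights, eps=1e-8):
    """Weight demodulation sketch: scale the (already style-modulated)
    convolution weights of each output feature map j by the inverse of
    their L2 norm, restoring unit output standard deviation under the
    i.i.d. unit-variance input assumption."""
    out = []
    for w_j in weights:  # one output feature map j
        sigma = math.sqrt(sum(w * w for w_i in w_j for w in w_i) + eps)
        out.append([[w / sigma for w in w_i] for w_i in w_j])
    return out

# a single output channel with weights (3, 4): L2 norm 5, so the
# demodulated weights are (0.6, 0.8) up to eps
w2 = demodulate([[[3.0, 4.0]]])
```

Baking the scale into the weights, rather than normalizing activations, is what removes the characteristic droplet artifacts of StyleGAN's AdaIN.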
Given the following machine learning model name: PanGu-$α$, provide a description of the model | **PanGu-$\alpha$** is an autoregressive language model (ALM) with up to 200 billion parameters, pretrained on a large corpus of text, mostly in the Chinese language. The architecture of PanGu-$\alpha$ is based on the [Transformer](https://paperswithcode.com/method/transformer), which has been extensively used as the backbone of a variety of pretrained language models such as [BERT](https://paperswithcode.com/method/bert) and [GPT](https://paperswithcode.com/method/gpt). Different from them, an additional query layer is developed on top of the Transformer layers, which aims to explicitly induce the expected output. |
Given the following machine learning model name: Adapter, provide a description of the model | |
Given the following machine learning model name: ExtremeNet, provide a description of the model | **ExtremeNet** is a bottom-up object detection framework that detects four extreme points (top-most, left-most, bottom-most, right-most) of an object. It uses a keypoint estimation framework to find extreme points, by predicting four multi-peak heatmaps for each object category. In addition, it uses one [heatmap](https://paperswithcode.com/method/heatmap) per category predicting the object center, as the average of two bounding box edges in both the x and y dimensions. Extreme points are grouped into objects with a purely geometry-based approach: four extreme points, one from each map, are grouped if and only if their geometric center is predicted in the center heatmap with a score higher than a pre-defined threshold. All $O\left(n^{4}\right)$ combinations of extreme point predictions are enumerated, and the valid ones selected. |
Given the following machine learning model name: BigGAN-deep, provide a description of the model | **BigGAN-deep** is a deeper version (4x) of [BigGAN](https://paperswithcode.com/method/biggan). The main difference is a slightly differently designed [residual block](https://paperswithcode.com/method/residual-block). Here the $z$ vector is concatenated with the conditional vector without splitting it into chunks. It is also based on residual blocks with bottlenecks. BigGAN-deep uses a different strategy than BigGAN aimed at preserving identity throughout the skip connections. In G, where the number of channels needs to be reduced, BigGAN-deep simply retains the first group of channels and drop the rest to produce the required number of channels. In D, where the number of channels should be increased, BigGAN-deep passes the input channels unperturbed, and concatenates them with the remaining channels produced by a 1 × 1 [convolution](https://paperswithcode.com/method/convolution). As far as the
network configuration is concerned, the discriminator is an exact reflection of the generator.
There are two blocks at each resolution (BigGAN uses one), and as a result BigGAN-deep is four times
deeper than BigGAN. Despite their increased depth, the BigGAN-deep models have significantly
fewer parameters mainly due to the bottleneck structure of their residual blocks. |
Given the following machine learning model name: Legendre Memory Unit, provide a description of the model | The **Legendre Memory Unit (LMU)** is mathematically derived to orthogonalize its continuous-time history. It does so by solving $d$ coupled ordinary differential equations (ODEs), whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree $d-1$. It is optimal for compressing temporal information. See the paper for the equations.
Official github repo: [https://github.com/abr/lmu](https://github.com/abr/lmu) |
Given the following machine learning model name: Deep Ensembles, provide a description of the model | |
Given the following machine learning model name: Smooth Step, provide a description of the model | |
Given the following machine learning model name: CSPResNeXt Block, provide a description of the model | **CSPResNeXt Block** is an extended [ResNext Block](https://paperswithcode.com/method/resnext-block) where we partition the feature map of the base layer into two parts and then merge them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. |
Given the following machine learning model name: FBNet Block, provide a description of the model | **FBNet Block** is an image model block used in the [FBNet](https://paperswithcode.com/method/fbnet) architectures discovered through [DNAS](https://paperswithcode.com/method/dnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building blocks employed are [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution) and a [residual connection](https://paperswithcode.com/method/residual-connection). |
Given the following machine learning model name: Domain Adaptative Neighborhood Clustering via Entropy Optimization, provide a description of the model | **Domain Adaptive Neighborhood Clustering via Entropy Optimization (DANCE)** is a self-supervised clustering method that harnesses the cluster structure of the target domain using self-supervision. This is done with a neighborhood clustering technique that self-supervises feature learning in the target. At the same time, useful source features and class boundaries are preserved and adapted with a partial domain alignment loss that the authors refer to as entropy separation loss. This loss allows the model to either match each target example with the source, or reject it as unknown. |
Given the following machine learning model name: Local Contrast Normalization, provide a description of the model | **Local Contrast Normalization** is a type of normalization that performs local subtraction and division normalizations, enforcing a sort of local competition between adjacent features in a feature map, and between features at the same spatial location in different feature maps. |
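A sketch of the subtractive and divisive steps on a single 2-D feature map (the window size and border handling are illustrative choices; the original formulation also normalizes across feature maps with a Gaussian-weighted window):

```python
def local_contrast_normalize(fmap, radius=1, eps=1e-5):
    """Local contrast normalisation sketch: subtract the local mean, then
    divide by the local standard deviation inside a (2*radius+1)^2
    window, clamped at the borders."""
    h, w = len(fmap), len(fmap[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [fmap[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mean = sum(window) / len(window)
            var = sum((v - mean) ** 2 for v in window) / len(window)
            out[y][x] = (fmap[y][x] - mean) / ((var + eps) ** 0.5)
    return out

# a constant map carries no local contrast, so every output is zero
flat = local_contrast_normalize([[5.0] * 3 for _ in range(3)])
```

The subtraction enforces competition between adjacent features; the division equalizes response magnitude across regions of different contrast.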
Given the following machine learning model name: Pansharpening Network, provide a description of the model | We propose a deep network architecture for the pansharpening problem called PanNet. We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation. For spectral preservation, we add up-sampled multispectral images to the network output, which directly propagates the spectral information to the reconstructed image. To preserve the spatial structure, we train our network parameters in the high-pass filtering domain rather than the image domain. We show that the trained network generalizes well to images from different satellites without needing retraining. Experiments show significant improvement over state-of-the-art methods visually and in terms of standard quality metrics. |
Given the following machine learning model name: Online Hard Example Mining, provide a description of the model | Some object detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more
effective and efficient. **OHEM**, or **Online Hard Example Mining**, is a bootstrapping technique that modifies [SGD](https://paperswithcode.com/method/sgd) to sample from examples in a non-uniform way depending on the current loss of each example under consideration. The method takes advantage of detection-specific problem structure in which each SGD mini-batch consists of only one or two images, but thousands of candidate examples. The candidate examples are subsampled according to a distribution
that favors diverse, high loss instances. |
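The selection itself reduces to ranking candidates by their current loss. A sketch (the 1:4 hard-example ratio below is an illustrative choice; the real method also applies NMS among candidate regions so highly-overlapping ones are not all selected):

```python
def ohem_select(losses, batch_size, ratio=0.25):
    """OHEM sketch: from the per-RoI losses of one image, keep only the
    highest-loss candidates (batch_size * ratio of them) for the backward
    pass; every other example contributes zero gradient."""
    n_hard = max(1, int(batch_size * ratio))
    order = sorted(range(len(losses)), key=lambda i: -losses[i])
    return sorted(order[:n_hard])

# 8 candidate RoIs; indices of the 2 hardest (largest-loss) examples
print(ohem_select([0.1, 2.3, 0.05, 1.7, 0.2, 0.01, 0.9, 0.3], batch_size=8))  # -> [1, 3]
```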
Given the following machine learning model name: Stochastic Steady-state Embedding, provide a description of the model | Stochastic Steady-state Embedding (SSE) is an algorithm that can learn many steady-state algorithms over graphs. Different from graph neural network family models, SSE is trained stochastically which only requires 1-hop information, but can capture fixed point relationships efficiently and effectively.
Description and Image from: [Learning Steady-States of Iterative Algorithms over Graphs](https://proceedings.mlr.press/v80/dai18a.html) |
Given the following machine learning model name: Hybrid-deconvolution, provide a description of the model | A ResNet-like architecture with deconvolution feature normalization (Ye et al. 2020, ICLR) layers in the first few layers for sparse low-level feature identification, and batch normalization layers in the later layers. |
Given the following machine learning model name: Channel-wise Cross Attention, provide a description of the model | **Channel-wise Cross Attention** is a module for semantic segmentation used in the [UCTransNet](https://paperswithcode.com/method/uctransnet) architecture. It is used to fuse features of inconsistent semantics between the Channel [Transformer](https://paperswithcode.com/method/transformer) and [U-Net](https://paperswithcode.com/method/u-net) decoder. It guides the channel and information filtration of the Transformer features and eliminates the ambiguity with the decoder features.
Mathematically, we take the $i$-th level Transformer output $\mathbf{O\_{i}} \in \mathbb{R}^{C×H×W}$ and the $i$-th level decoder feature map $\mathbf{D\_{i}} \in \mathbb{R}^{C×H×W}$ as the inputs of Channel-wise Cross Attention. Spatial squeeze is performed by a [global average pooling](https://paperswithcode.com/method/global-average-pooling) (GAP) layer, producing vector $\mathcal{G}\left(\mathbf{X}\right) \in \mathbb{R}^{C×1×1}$ with its $k$-th channel $\mathcal{G}\left(\mathbf{X}\right)^{k} = \frac{1}{H×W}\sum^{H}\_{i=1}\sum^{W}\_{j=1}\mathbf{X}^{k}\left(i, j\right)$. We use this operation to embed the global spatial information and then generate the attention mask:
$$ \mathbf{M}\_{i} = \mathbf{L}\_{1} \cdot \mathcal{G}\left(\mathbf{O\_{i}}\right) + \mathbf{L}\_{2} \cdot \mathcal{G}\left(\mathbf{D}\_{i}\right) $$
where $\mathbf{L}\_{1} \in \mathbb{R}^{C×C}$ and $\mathbf{L}\_{2} \in \mathbb{R}^{C×C}$ are the weights of two Linear layers, with $\delta\left(\cdot\right)$ denoting the [ReLU](https://paperswithcode.com/method/relu) operator. The operation in the equation above encodes the channel-wise dependencies. Following [ECA-Net](https://paperswithcode.com/method/eca-net), which empirically showed that avoiding dimensionality reduction is important for learning channel attention, the authors use a single [Linear layer](https://paperswithcode.com/method/linear-layer) and sigmoid function to build the channel attention map. The resultant vector is used to recalibrate or excite $\mathbf{O\_{i}}$ to $\mathbf{\bar{O}\_{i}} = \sigma\left(\mathbf{M\_{i}}\right) \cdot \mathbf{O\_{i}}$, where the activation $\sigma\left(\mathbf{M\_{i}}\right)$ indicates the importance of each channel. Finally, the masked $\mathbf{\bar{O}}\_{i}$ is concatenated with the up-sampled features of the $i$-th level decoder. |
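The squeeze-mix-excite pipeline can be sketched on nested lists of shape `C x H x W`; bias terms and the exact placement of the ReLU are omitted for brevity:

```python
import math

def gap(x):
    """Global average pooling: C x H x W -> length-C vector."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in x]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def channel_cross_attention(o, d, l1, l2):
    """Channel-wise cross attention sketch: squeeze the transformer
    output O and decoder map D with GAP, mix them through two linear
    maps, and recalibrate O channel-wise with a sigmoid mask."""
    mask = [a + b for a, b in zip(matvec(l1, gap(o)), matvec(l2, gap(d)))]
    excite = [1.0 / (1.0 + math.exp(-m)) for m in mask]
    return [[[v * excite[c] for v in row] for row in ch]
            for c, ch in enumerate(o)]

# toy C=2, H=W=1 example: identity l1, zero l2 gives mask = GAP(O)
o = [[[1.0]], [[1.0]]]
d = [[[0.0]], [[0.0]]]
l1 = [[1.0, 0.0], [0.0, 1.0]]
l2 = [[0.0, 0.0], [0.0, 0.0]]
out = channel_cross_attention(o, d, l1, l2)
```

Each output channel of `O` is simply scaled by its sigmoid mask value, which is the recalibration described above.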
Given the following machine learning model name: RotNet, provide a description of the model | **RotNet** is a self-supervision approach that relies on predicting image rotations as the pretext task
in order to learn image representations. |
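The pretext task needs nothing more than an image rotation and a 4-way label; a sketch on a nested-list image:

```python
import random

def rotate90(img):
    """Rotate a 2-D image (list of rows) 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def rotation_pretext(img, rng=random):
    """RotNet pretext-task sketch: rotate the image by a random multiple
    of 90 degrees and return (rotated image, label in {0, 1, 2, 3});
    a network trained to predict the label learns useful representations
    without any human annotation."""
    label = rng.randrange(4)
    out = img
    for _ in range(label):
        out = rotate90(out)
    return out, label

img = [[1, 2], [3, 4]]
rotated, label = rotation_pretext(img, random.Random(0))
```

The intuition is that predicting the applied rotation requires recognizing the canonical orientation of objects, which forces the network to learn semantic features.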
Given the following machine learning model name: Weighted Recurrent Quality Enhancement, provide a description of the model | **Weighted Recurrent Quality Enhancement**, or **WRQE**, is a recurrent quality enhancement network for video compression that takes both compressed frames and the bit stream as inputs. In the recurrent cell of WRQE, the memory and update signal are weighted by quality features to reasonably leverage multi-frame information for enhancement. |
Given the following machine learning model name: Encoder-Attender-Aggregator, provide a description of the model | **EncAttAgg** introduces two attenders to tackle two problems: 1) a mutual attender layer is introduced to efficiently obtain entity-pair-specific mention representations; and 2) an integration attender is introduced to weight the mention pairs of a target entity pair. |
Given the following machine learning model name: Wavelet-integrated Identity Preserving Adversarial Network for face super-resolution, provide a description of the model | # WIPA: Wavelet-integrated, Identity Preserving, Adversarial network for Face Super-resolution
PyTorch implementation of WIPA: Super-resolution of very low-resolution face images with a **W**avelet Integrated, **I**dentity **P**reserving, **A**dversarial Network.
# Paper:
[Super-resolution of very low-resolution face images with a Wavelet Integrated, Identity Preserving, Adversarial Network](https://www.sciencedirect.com/science/article/abs/pii/S0923596522000753?dgcid=coauthor).
You can download the pre-proof version of the article [here](https://drive.google.com/file/d/1GHWiCcScPF1PK4xozoRf-88Rytom-kvl/view?usp=sharing), but please refer to the original manuscript for citation.
## Citation
If you find this work useful for your research, please consider citing our paper:
```
@article{DASTMALCHI2022116755,
title = {Super-resolution of very low-resolution face images with a wavelet integrated, identity preserving, adversarial network},
journal = {Signal Processing: Image Communication},
volume = {107},
pages = {116755},
year = {2022},
issn = {0923-5965},
doi = {https://doi.org/10.1016/j.image.2022.116755},
url = {https://www.sciencedirect.com/science/article/pii/S0923596522000753},
author = {Hamidreza Dastmalchi and Hassan Aghaeinia},
keywords = {Super-resolution, Wavelet prediction, Generative Adversarial Networks, Face Hallucination, Identity preserving, Perceptual quality},
}
```
## LinkedIn Profile:
**Hamidreza Dastmalchi's LinkedIn profile:**
https://www.linkedin.com/in/hamidreza-dastmalchi-80bb4574/
## WIPA Algorithm
We present **Wavelet
Prediction blocks** attached to a **Baseline CNN network** to predict wavelet missing details of facial images. The
extracted wavelet coefficients are concatenated with original feature maps in different scales to recover fine
details. Unlike other wavelet-based FH methods, this algorithm exploits the wavelet-enriched feature maps as
complementary information to facilitate the hallucination task. We introduce a **wavelet prediction loss** to push
the network to generate wavelet coefficients. In addition to the wavelet-domain cost function, a combination of
**perceptual**, **adversarial**, and **identity loss** functions has been utilized to achieve low-distortion and perceptually
high-quality images while maintaining identity. The training scheme of the Wavelet-Integrated network with the combination of five loss terms is shown as below:
<p align="center">
<img width="500" src="./block-diagram/WIPA-Training-Scheme.jpg">
</p>
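The combined objective above can be sketched as a weighted sum of the five loss terms; the weights below are placeholders for illustration, not the values tuned in the paper:

```python
def total_loss(l_pixel, l_wavelet, l_perceptual, l_adversarial, l_identity,
               weights=(1.0, 0.1, 0.01, 0.005, 0.01)):
    # Hypothetical weighting of the five terms; the paper tunes these
    terms = (l_pixel, l_wavelet, l_perceptual, l_adversarial, l_identity)
    return sum(w * t for w, t in zip(weights, terms))
```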
## Datasets
The [CelebA dataset](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) is used for training the proposed FH algorithm. The dataset contains more than 200K face images under significant pose, illumination, and expression variations. In our experiments, two distinct groups of 20 thousand images are randomly selected from CelebA as the train and test sets. To test the generalization capacity of the method, we further evaluate the proposed approach on the [LFW](http://vis-www.cs.umass.edu/lfw/) and [Helen](http://www.ifp.illinois.edu/~vuongle2/helen/) datasets. All training and test images are roughly aligned using a similarity transformation with landmarks detected by the well-known MTCNN network. The images are rescaled to 128 × 128. The corresponding LR images are constructed by down-sampling the HR images with bicubic interpolation. The experiments are carried out at two **scaling** factors, 8× and 16×, with LR images of size 16 × 16 and 8 × 8, respectively.
**Before starting to train or test the network**, you must put the training images in the corresponding folders:
- Put training images in “.\data\train” directory.
- Put celeba test images in “.\data\test\celeba” , lfw test images in “.\data\test\lfw” and helen test images in “.\data\test\helen”.
## Pretrained Weights
The pretrained weights can be downloaded [here](https://drive.google.com/drive/folders/18V1kPDHW6F05L0xOOODNHZHO566SA6iC?usp=sharing).
## Code
The code consists of two main files: **main.py** for training the network and **test.py** for evaluating the algorithm with metrics such as PSNR, SSIM, and verification rate.
### Training
To train the network, simply run this code in Anaconda terminal:
```
>>python main.py
```
We designed different input arguments for controlling the training procedure. Please use the `--help` flag to see the available input arguments.
#### Example:
For example, to train the wavelet-integrated network on the GPU with a scale factor of 8, without pre-trained model coefficients, and with a learning rate of 5e-5, run the following command in the terminal:
```
python main.py --scale 8 --wi_net "" --disc_net "" --wavelet_integrated True --lr 0.00005
```
### Testing
For evaluating (testing), simply run the following command in the terminal:
```
>>python test.py
```
We have also developed different options as input arguments to control the testing procedure. You can evaluate PSNR, SSIM, the FID score, and the verification rate with the “test.py” file. To do this, first put the test images in the corresponding folders in the data root.
#### Example:
For example, to evaluate the PSNR and SSIM of a wavelet-integrated pretrained model at a scale of 8 and save the super-resolved results in the “./results/celeba” folder, run the following command:
```
>>python test.py --wavelet_integrated True --scale 8 --wi_net gen_net_8x --save_flag True --save_folder ./results/celeba --metrics psnr ssim
```
To estimate the FID score, you have to produce the super-resolved test images first. Therefore, if you have not generated the super-resolved images yet, pass `fid` together with `psnr ssim` in the `--metrics` option. You can also add the `acc` option to the metrics to evaluate the verification rate of the model:
```
>>python test.py --wavelet_integrated True --scale 8 --wi_net gen_net_8x --save_flag True --save_folder ./results/celeba --metrics psnr ssim fid acc
```
### Demo
In addition, we have developed a “demo.py” Python file to demonstrate the results on some sample images in the “./sample_images/gt” directory. To run the demo, simply enter the following command in the terminal:
```
>>python demo.py
```
By default, the images in the “./sample_images/gt” folder will be super-resolved by the wavelet-integrated network at a scale factor of 8, and the results will be saved in the “./sample_images/sr” folder. To change the scaling factor, one must alter not only the `--scale` option but also the corresponding `--wi_net` argument to import the relevant pretrained state dictionary. |
Given the following machine learning model name: Dynamic Convolution, provide a description of the model | **DynamicConv** is a type of [convolution](https://paperswithcode.com/method/convolution) for sequential modelling where it has kernels that vary over time as a learned function of the individual time steps. It builds upon [LightConv](https://paperswithcode.com/method/lightconv) and takes the same form but uses a time-step dependent kernel:
$$ \text{DynamicConv}\left(X, i, c\right) = \text{LightConv}\left(X, f\left(X\_{i}\right)\_{h,:}, i, c\right) $$ |
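A minimal pure-Python sketch of this idea; the kernel-generating function `f` below is a hypothetical stand-in for the learned linear projection of the current timestep:

```python
import math

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def light_conv(X, w, i):
    # Depthwise 1D convolution at position i with a softmax-normalized kernel
    k = len(w)
    pad = k // 2
    out = 0.0
    for j, wj in enumerate(softmax(w)):
        t = i + j - pad
        if 0 <= t < len(X):
            out += wj * X[t]
    return out

def dynamic_conv(X, f, i):
    # The kernel is a learned function f of the current timestep X[i] only
    return light_conv(X, f(X[i]), i)

# Toy example: f ignores its input and yields a uniform 3-tap kernel
X = [1.0, 2.0, 3.0, 4.0]
y = dynamic_conv(X, lambda x_i: [0.0, 0.0, 0.0], 1)  # averages X[0..2]
```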
Given the following machine learning model name: Reliability Balancing, provide a description of the model | |
Given the following machine learning model name: Gated Positional Self-Attention, provide a description of the model | **Gated Positional Self-Attention (GPSA)** is a self-attention module for vision transformers, used in the [ConViT](https://paperswithcode.com/method/convit) architecture, that can be initialized as a convolutional layer -- helping a ViT learn inductive biases about locality. |
Given the following machine learning model name: Fast Feedforward Networks, provide a description of the model | A log-time alternative to feedforward layers outperforming both the vanilla feedforward and mixture-of-experts approaches. |
Given the following machine learning model name: Self-Adjusting Smooth L1 Loss, provide a description of the model | **Self-Adjusting Smooth L1 Loss** is a loss function used in object detection that was introduced with [RetinaMask](https://paperswithcode.com/method/retinamask). This is an improved version of Smooth L1. For Smooth L1 loss we have:
$$ f(x) = 0.5 \frac{x^{2}}{\beta} \text{ if } |x| < \beta $$
$$ f(x) = |x| -0.5\beta \text{ otherwise } $$
Here a point $\beta$ splits the positive axis range into two parts: $L2$ loss is used for targets in the range $[0, \beta]$, and $L1$ loss is used beyond $\beta$ to avoid over-penalizing outliers. The overall function is smooth (continuous, together with its derivative). However, the choice of the control point $\beta$ is heuristic and is usually made by hyperparameter search.
Instead, with self-adjusting Smooth L1 loss, the running mean and variance of the absolute loss are recorded inside the loss function, updated from the minibatch mean and variance with a momentum of $0.9$; the control point $\beta$ is then derived from these running statistics rather than fixed by hand. |
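A minimal pure-Python sketch of the idea; the rule mapping the running statistics to $\beta$ (running mean minus running variance, clamped below) is an assumption modeled on the RetinaMask formulation:

```python
def smooth_l1(x, beta):
    # L2 below beta, L1 beyond; continuous together with its derivative at beta
    x = abs(x)
    return 0.5 * x * x / beta if x < beta else x - 0.5 * beta

class SelfAdjustingSmoothL1:
    """Tracks running mean/variance of |x| with momentum 0.9 and derives
    beta from them (assumed here: mean - variance, clamped to at least eps)."""

    def __init__(self, momentum=0.9, eps=1e-3):
        self.momentum = momentum
        self.eps = eps
        self.mean = 0.0
        self.var = 0.0

    def __call__(self, errors):
        # Minibatch statistics of the absolute regression error
        m = sum(abs(e) for e in errors) / len(errors)
        v = sum((abs(e) - m) ** 2 for e in errors) / len(errors)
        # Exponential moving averages with momentum 0.9
        self.mean = self.momentum * self.mean + (1 - self.momentum) * m
        self.var = self.momentum * self.var + (1 - self.momentum) * v
        beta = max(self.mean - self.var, self.eps)
        return sum(smooth_l1(e, beta) for e in errors) / len(errors)
```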
Given the following machine learning model name: MixNet, provide a description of the model | **MixNet** is a type of convolutional neural network discovered via AutoML that utilises MixConvs instead of regular depthwise convolutions. |
Given the following machine learning model name: GPU-Efficient Network, provide a description of the model | **GENets**, or **GPU-Efficient Networks**, are a family of efficient models found through [neural architecture search](https://paperswithcode.com/methods/category/neural-architecture-search). The search occurs over several types of convolutional block, which include [depth-wise convolutions](https://paperswithcode.com/method/depthwise-convolution), [batch normalization](https://paperswithcode.com/method/batch-normalization), [ReLU](https://paperswithcode.com/method/relu), and an [inverted bottleneck](https://paperswithcode.com/method/inverted-residual-block) structure. |
Given the following machine learning model name: Dense Contrastive Learning, provide a description of the model | **Dense Contrastive Learning** is a self-supervised learning method for dense prediction tasks. It implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Contrasting with regular contrastive loss, the contrastive loss is computed between the single feature vectors outputted by the global projection head, at the level of global feature, while the dense contrastive loss is computed between the dense feature vectors outputted by the dense projection head, at the level of local feature. |
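A minimal pure-Python sketch of the pairwise contrastive (InfoNCE-style) loss, applied per feature vector; the global and dense variants differ only in which vectors are contrasted (one global vector per view versus one per pixel):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(query, positive, negatives, tau=0.2):
    # Pull the query toward its positive, push it from the negatives
    logits = [dot(query, positive) / tau] + [dot(query, n) / tau for n in negatives]
    m = max(logits)  # stabilize the log-sum-exp
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)

def dense_loss(queries, positives, negatives, tau=0.2):
    # Dense variant: average the per-pixel contrastive losses over the map
    return sum(info_nce(q, p, negatives, tau)
               for q, p in zip(queries, positives)) / len(queries)
```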
Given the following machine learning model name: Graph Attention Network, provide a description of the model | A **Graph Attention Network (GAT)** is a neural network architecture that operates on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods’ features, a GAT enables (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront.
See [here](https://docs.dgl.ai/en/0.4.x/tutorials/models/1_gnn/9_gat.html) for an explanation by DGL. |
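A minimal pure-Python sketch of the attention coefficients for one node, following the GAT formulation $e_{ij} = \text{LeakyReLU}(a^{\top}[h_i \, \| \, h_j])$ with a softmax over the neighborhood; the shared linear transform $W$ is assumed already applied to the features:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def leaky_relu(x, slope=0.2):
    return x if x > 0.0 else slope * x

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def gat_attention(h_i, neighbors, a):
    # e_ij = LeakyReLU(a . [h_i || h_j]); normalize over the neighborhood
    scores = [leaky_relu(dot(a, h_i + h_j)) for h_j in neighbors]
    return softmax(scores)

def gat_aggregate(h_i, neighbors, a):
    # Weighted sum of neighbor features using the attention coefficients
    alpha = gat_attention(h_i, neighbors, a)
    dim = len(h_i)
    return [sum(w * h_j[d] for w, h_j in zip(alpha, neighbors))
            for d in range(dim)]
```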
Given the following machine learning model name: Nonlinear Activation Free Network, provide a description of the model | |
Given the following machine learning model name: FastMoE, provide a description of the model | **FastMoE** is a distributed MoE training system based on PyTorch with common accelerators. The system provides a hierarchical interface for both flexible model design and adaption to different applications, such as [Transformer-XL](https://paperswithcode.com/method/transformer-xl) and Megatron-LM. |