uid (int64, 4-318k) | paper_url (string, 39-81 chars) | arxiv_id (string, 9-16 chars, nullable) | title (string, 6-365 chars) | abstract (string, 0-7.27k chars) | url_abs (string, 17-601 chars) | url_pdf (string, 21-819 chars) | proceeding (string, 7-1.03k chars, nullable) | authors (sequence) | tasks (sequence) | date (float64, 422B-1,672B, nullable) | methods (list) | __index_level_0__ (int64, 1-197k)
---|---|---|---|---|---|---|---|---|---|---|---|---|
130,304 | https://paperswithcode.com/paper/sparse-black-box-video-attack-with | 2001.03754 | Sparse Black-box Video Attack with Reinforcement Learning | Adversarial attacks on video recognition models have been explored recently. However, most existing works treat each video frame equally and ignore their temporal interactions. To overcome this drawback, a few methods try to select some key frames and then perform attacks based on them. Unfortunately, their selection strategy is independent of the attacking step, therefore the resulting performance is limited. Instead, we argue the frame selection phase is closely relevant with the attacking phase. The key frames should be adjusted according to the attacking results. For that, we formulate the black-box video attacks into a Reinforcement Learning (RL) framework. Specifically, the environment in RL is set as the recognition model, and the agent in RL plays the role of frame selecting. By continuously querying the recognition models and receiving the attacking feedback, the agent gradually adjusts its frame selection strategy and adversarial perturbations become smaller and smaller. We conduct a series of experiments with two mainstream video recognition models: C3D and LRCN on the public UCF-101 and HMDB-51 datasets. The results demonstrate that the proposed method can significantly reduce the adversarial perturbations with efficient query times. | https://arxiv.org/abs/2001.03754v3 | https://arxiv.org/pdf/2001.03754v3.pdf | null | [
"Xingxing Wei",
"Huanqian Yan",
"Bo Li"
] | [
"reinforcement-learning",
"Video Recognition"
] | 1,578,700,800,000 | [] | 149,923 |
222,876 | https://paperswithcode.com/paper/meta-learning-for-downstream-aware-and | 2106.03270 | Meta-learning for downstream aware and agnostic pretraining | Neural network pretraining is gaining attention due to its outstanding performance in natural language processing applications. However, pretraining usually leverages predefined task sequences to learn general linguistic clues. The lack of mechanisms in choosing proper tasks during pretraining makes the learning and knowledge encoding inefficient. We thus propose using meta-learning to select tasks that provide the most informative learning signals in each episode of pretraining. With the proposed method, we aim to achieve better efficiency in computation and memory usage for the pretraining process and resulting networks while maintaining the performance. In this preliminary work, we discuss the algorithm of the method and its two variants, downstream-aware and downstream-agnostic pretraining. Our experiment plan is also summarized, while empirical results will be shared in our future works. | https://arxiv.org/abs/2106.03270v1 | https://arxiv.org/pdf/2106.03270v1.pdf | null | [
"Hongyin Luo",
"Shuyan Dong",
"Yung-Sung Chuang",
"Shang-Wen Li"
] | [
"Meta-Learning"
] | 1,622,937,600,000 | [] | 168,561 |
10,565 | https://paperswithcode.com/paper/learning-local-metrics-and-influential | 1802.03452 | Learning Local Metrics and Influential Regions for Classification | The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data. To address the problem of multimodality, it is desirable to learn local metrics. In this short paper, we define a new intuitive distance with local metrics and influential regions, and subsequently propose a novel local metric learning method for distance-based classification. Our key intuition is to partition the metric space into influential regions and a background region, and then regulate the effectiveness of each local metric to be within the related influential regions. We learn local metrics and influential regions to reduce the empirical hinge loss, and regularize the parameters on the basis of a resultant learning bound. Encouraging experimental results are obtained from various public and popular data sets. | http://arxiv.org/abs/1802.03452v1 | http://arxiv.org/pdf/1802.03452v1.pdf | null | [
"Mingzhi Dong",
"Yujiang Wang",
"Xiaochen Yang",
"Jing-Hao Xue"
] | [
"Classification",
"Classification",
"Metric Learning"
] | 1,518,134,400,000 | [] | 13,944 |
226,653 | https://paperswithcode.com/paper/on-minimizing-cost-in-legal-document-review | 2106.09866 | On Minimizing Cost in Legal Document Review Workflows | Technology-assisted review (TAR) refers to human-in-the-loop machine learning workflows for document review in legal discovery and other high recall review tasks. Attorneys and legal technologists have debated whether review should be a single iterative process (one-phase TAR workflows) or whether model training and review should be separate (two-phase TAR workflows), with implications for the choice of active learning algorithm. The relative cost of manual labeling for different purposes (training vs. review) and of different documents (positive vs. negative examples) is a key and neglected factor in this debate. Using a novel cost dynamics analysis, we show analytically and empirically that these relative costs strongly impact whether a one-phase or two-phase workflow minimizes cost. We also show how category prevalence, classification task difficulty, and collection size impact the optimal choice not only of workflow type, but of active learning method and stopping point. | https://arxiv.org/abs/2106.09866v1 | https://arxiv.org/pdf/2106.09866v1.pdf | null | [
"Eugene Yang",
"David D. Lewis",
"Ophir Frieder"
] | [
"Active Learning"
] | 1,623,974,400,000 | [] | 145,674 |
275,759 | https://paperswithcode.com/paper/sigma-a-structural-inconsistency-reducing | 2202.02797 | SIGMA: A Structural Inconsistency Reducing Graph Matching Algorithm | Graph matching finds the correspondence of nodes across two correlated graphs and lies at the core of many applications. When graph side information is not available, the node correspondence is estimated on the sole basis of network topologies. In this paper, we propose a novel criterion to measure the graph matching accuracy, structural inconsistency (SI), which is defined based on the network topological structure. Specifically, SI incorporates the heat diffusion wavelet to accommodate the multi-hop structure of the graphs. Based on SI, we propose a Structural Inconsistency reducing Graph Matching Algorithm (SIGMA), which improves the alignment scores of node pairs that have low SI values in each iteration. Under suitable assumptions, SIGMA can reduce SI values of true counterparts. Furthermore, we demonstrate that SIGMA can be derived by using a mirror descent method to solve the Gromov-Wasserstein distance with a novel K-hop-structure-based matching costs. Extensive experiments show that our method outperforms state-of-the-art methods. | https://arxiv.org/abs/2202.02797v1 | https://arxiv.org/pdf/2202.02797v1.pdf | null | [
"Weijie Liu",
"Chao Zhang",
"Nenggan Zheng",
"Hui Qian"
] | [
"Graph Matching"
] | 1,644,105,600,000 | [
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] | 171,920 |
69,079 | https://paperswithcode.com/paper/quantifying-training-challenges-of-dependency | null | Quantifying training challenges of dependency parsers | Not all dependencies are equal when training a dependency parser: some are straightforward enough to be learned with only a sample of data, others embed more complexity. This work introduces a series of metrics to quantify those differences, and thereby to expose the shortcomings of various parsing algorithms and strategies. Apart from a more thorough comparison of parsing systems, these new tools also prove useful for characterizing the information conveyed by cross-lingual parsers, in a quantitative but still interpretable way. | https://aclanthology.org/C18-1270 | https://aclanthology.org/C18-1270.pdf | COLING 2018 8 | [
"Lauriane Aufrant",
"Guillaume Wisniewski",
"Fran{\\c{c}}ois Yvon"
] | [
"Cross-Lingual Transfer",
"Dependency Parsing"
] | 1,533,081,600,000 | [] | 102,018 |
145,877 | https://paperswithcode.com/paper/distributional-semantics-for-neo-latin | null | Distributional Semantics for Neo-Latin | We address the problem of creating and evaluating quality Neo-Latin word embeddings for the purpose of philosophical research, adapting the Nonce2Vec tool to learn embeddings from Neo-Latin sentences. This distributional semantic modeling tool can learn from tiny data incrementally, using a larger background corpus for initialization. We conduct two evaluation tasks: definitional learning of Latin Wikipedia terms, and learning consistent embeddings from 18th century Neo-Latin sentences pertaining to the concept of mathematical method. Our results show that consistent Neo-Latin word embeddings can be learned from this type of data. While our evaluation results are promising, they do not reveal to what extent the learned models match domain expert knowledge of our Neo-Latin texts. Therefore, we propose an additional evaluation method, grounded in expert-annotated data, that would assess whether learned representations are conceptually sound in relation to the domain of study. | https://aclanthology.org/2020.lt4hala-1.13 | https://aclanthology.org/2020.lt4hala-1.13.pdf | LREC 2020 5 | [
"Jelke Bloem",
"Maria Chiara Parisi",
"Martin Reynaert",
"Yvette Oortwijn",
"Arianna Betti"
] | [
"Word Embeddings"
] | 1,588,291,200,000 | [] | 134,617 |
144,930 | https://paperswithcode.com/paper/a-new-validity-index-for-fuzzy-possibilistic | 2005.09162 | A New Validity Index for Fuzzy-Possibilistic C-Means Clustering | In some complicated datasets, due to the presence of noisy data points and outliers, cluster validity indices can give conflicting results in determining the optimal number of clusters. This paper presents a new validity index for fuzzy-possibilistic c-means clustering called Fuzzy-Possibilistic (FP) index, which works well in the presence of clusters that vary in shape and density. Moreover, FPCM like most of the clustering algorithms is susceptible to some initial parameters. In this regard, in addition to the number of clusters, FPCM requires a priori selection of the degree of fuzziness and the degree of typicality. Therefore, we presented an efficient procedure for determining their optimal values. The proposed approach has been evaluated using several synthetic and real-world datasets. Final computational results demonstrate the capabilities and reliability of the proposed approach compared with several well-known fuzzy validity indices in the literature. Furthermore, to clarify the ability of the proposed method in real applications, the proposed method is implemented in microarray gene expression data clustering and medical image segmentation. | https://arxiv.org/abs/2005.09162v1 | https://arxiv.org/pdf/2005.09162v1.pdf | null | [
"Mohammad Hossein Fazel Zarandi",
"Shahabeddin Sotudian",
"Oscar Castillo"
] | [
"Image Segmentation",
"Medical Image Segmentation",
"Semantic Segmentation"
] | 1,589,846,400,000 | [] | 173,006 |
111,802 | https://paperswithcode.com/paper/text-data-augmentation-made-simple-by | 1812.04718 | Text Data Augmentation Made Simple By Leveraging NLP Cloud APIs | In practice, it is common to find oneself with far too little text data to train a deep neural network. This "Big Data Wall" represents a challenge for minority language communities on the Internet, organizations, laboratories and companies that compete the GAFAM (Google, Amazon, Facebook, Apple, Microsoft). While most of the research effort in text data augmentation aims on the long-term goal of finding end-to-end learning solutions, which is equivalent to "using neural networks to feed neural networks", this engineering work focuses on the use of practical, robust, scalable and easy-to-implement data augmentation pre-processing techniques similar to those that are successful in computer vision. Several text augmentation techniques have been experimented. Some existing ones have been tested for comparison purposes such as noise injection or the use of regular expressions. Others are modified or improved techniques like lexical replacement. Finally more innovative ones, such as the generation of paraphrases using back-translation or by the transformation of syntactic trees, are based on robust, scalable, and easy-to-use NLP Cloud APIs. All the text augmentation techniques studied, with an amplification factor of only 5, increased the accuracy of the results in a range of 4.3% to 21.6%, with significant statistical fluctuations, on a standardized task of text polarity prediction. Some standard deep neural network architectures were tested: the multilayer perceptron (MLP), the long short-term memory recurrent network (LSTM) and the bidirectional LSTM (biLSTM). Classical XGBoost algorithm has been tested with up to 2.5% improvements. | https://arxiv.org/abs/1812.04718v1 | https://arxiv.org/pdf/1812.04718v1.pdf | null | [
"Claude Coulombe"
] | [
"Data Augmentation",
"Text Augmentation"
] | 1,543,968,000,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] | 23,888 |
299,827 | https://paperswithcode.com/paper/spatial-temporal-adaptive-graph-convolution | 2206.03128 | Spatial-Temporal Adaptive Graph Convolution with Attention Network for Traffic Forecasting | Traffic forecasting is one canonical example of spatial-temporal learning task in Intelligent Traffic System. Existing approaches capture spatial dependency with a pre-determined matrix in graph convolution neural operators. However, the explicit graph structure losses some hidden representations of relationships among nodes. Furthermore, traditional graph convolution neural operators cannot aggregate long-range nodes on the graph. To overcome these limits, we propose a novel network, Spatial-Temporal Adaptive graph convolution with Attention Network (STAAN) for traffic forecasting. Firstly, we adopt an adaptive dependency matrix instead of using a pre-defined matrix during GCN processing to infer the inter-dependencies among nodes. Secondly, we integrate PW-attention based on graph attention network which is designed for global dependency, and GCN as spatial block. What's more, a stacked dilated 1D convolution, with efficiency in long-term prediction, is adopted in our temporal block for capturing the different time series. We evaluate our STAAN on two real-world datasets, and experiments validate that our model outperforms state-of-the-art baselines. | https://arxiv.org/abs/2206.03128v1 | https://arxiv.org/pdf/2206.03128v1.pdf | null | [
"Chen Weikang",
"Li Yawen",
"Xue Zhe",
"LI ANG",
"Wu Guobin"
] | [
"Graph Attention",
"Time Series"
] | 1,654,560,000,000 | [
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Graph Convolutional Network**, or **GCN**, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of [convolutional neural networks](https://paperswithcode.com/methods/category/convolutional-neural-networks) which operate directly on graphs. The choice of convolutional architecture is motivated via a localized first-order approximation of spectral graph convolutions. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes.",
"full_name": "Graph Convolutional Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "GCN",
"source_title": "Semi-Supervised Classification with Graph Convolutional Networks",
"source_url": "http://arxiv.org/abs/1609.02907v4"
}
] | 147,064 |
143,983 | https://paperswithcode.com/paper/ice-gan-identity-aware-and-capsule-enhanced | 2005.04370 | ICE-GAN: Identity-aware and Capsule-Enhanced GAN with Graph-based Reasoning for Micro-Expression Recognition and Synthesis | Micro-expressions are reflections of people's true feelings and motives, which attract an increasing number of researchers into the study of automatic facial micro-expression recognition. The short detection window, the subtle facial muscle movements, and the limited training samples make micro-expression recognition challenging. To this end, we propose a novel Identity-aware and Capsule-Enhanced Generative Adversarial Network with graph-based reasoning (ICE-GAN), introducing micro-expression synthesis as an auxiliary task to assist recognition. The generator produces synthetic faces with controllable micro-expressions and identity-aware features, whose long-ranged dependencies are captured through the graph reasoning module (GRM), and the discriminator detects the image authenticity and expression classes. Our ICE-GAN was evaluated on Micro-Expression Grand Challenge 2019 (MEGC2019) with a significant improvement (12.9%) over the winner and surpassed other state-of-the-art methods. | https://arxiv.org/abs/2005.04370v2 | https://arxiv.org/pdf/2005.04370v2.pdf | null | [
"Jianhui Yu",
"Chaoyi Zhang",
"Yang song",
"Weidong Cai"
] | [
"Micro-Expression Recognition"
] | 1,588,982,400,000 | [] | 17,625 |
20,766 | https://paperswithcode.com/paper/straight-to-shapes-real-time-detection-of | 1611.07932 | Straight to Shapes: Real-time Detection of Encoded Shapes | Current object detection approaches predict bounding boxes, but these provide little instance-specific information beyond location, scale and aspect ratio. In this work, we propose to directly regress to objects' shapes in addition to their bounding boxes and categories. It is crucial to find an appropriate shape representation that is compact and decodable, and in which objects can be compared for higher-order concepts such as view similarity, pose variation and occlusion. To achieve this, we use a denoising convolutional auto-encoder to establish an embedding space, and place the decoder after a fast end-to-end network trained to regress directly to the encoded shape vectors. This yields what to the best of our knowledge is the first real-time shape prediction network, running at ~35 FPS on a high-end desktop. With higher-order shape reasoning well-integrated into the network pipeline, the network shows the useful practical quality of generalising to unseen categories similar to the ones in the training set, something that most existing approaches fail to handle. | http://arxiv.org/abs/1611.07932v2 | http://arxiv.org/pdf/1611.07932v2.pdf | CVPR 2017 7 | [
"Saumya Jetley",
"Michael Sapienza",
"Stuart Golodetz",
"Philip H. S. Torr"
] | [
"Denoising",
"Object Detection",
"Object Detection"
] | 1,479,859,200,000 | [] | 123,804 |
123,474 | https://paperswithcode.com/paper/fixing-implicit-derivatives-trust-region | null | Fixing Implicit Derivatives: Trust-Region Based Learning of Continuous Energy Functions | We present a new technique for the learning of continuous energy functions that we refer to as Wibergian Learning. One common approach to inverse problems is to cast them as an energy minimisation problem, where the minimum cost solution found is used as an estimator of hidden parameters. Our new approach formally characterises the dependency between weights that control the shape of the energy function, and the location of minima, by describing minima as fixed points of optimisation methods. This allows for the use of gradient-based end-to-end training to integrate deep-learning and the classical inverse problem methods. We show how our approach can be applied to obtain state-of-the-art results in the diverse applications of tracker fusion and multiview 3D reconstruction. | http://papers.nips.cc/paper/8427-fixing-implicit-derivatives-trust-region-based-learning-of-continuous-energy-functions | http://papers.nips.cc/paper/8427-fixing-implicit-derivatives-trust-region-based-learning-of-continuous-energy-functions.pdf | NeurIPS 2019 12 | [
"Chris Russell",
"Matteo Toso",
"Neill Campbell"
] | [
"3D Reconstruction"
] | 1,575,158,400,000 | [] | 80,007 |
288,083 | https://paperswithcode.com/paper/segmentation-consistent-probabilistic-lesion | 2204.05276 | Segmentation-Consistent Probabilistic Lesion Counting | Lesion counts are important indicators of disease severity, patient prognosis, and treatment efficacy, yet counting as a task in medical imaging is often overlooked in favor of segmentation. This work introduces a novel continuously differentiable function that maps lesion segmentation predictions to lesion count probability distributions in a consistent manner. The proposed end-to-end approach--which consists of voxel clustering, lesion-level voxel probability aggregation, and Poisson-binomial counting--is non-parametric and thus offers a robust and consistent way to augment lesion segmentation models with post hoc counting capabilities. Experiments on Gadolinium-enhancing lesion counting demonstrate that our method outputs accurate and well-calibrated count distributions that capture meaningful uncertainty information. They also reveal that our model is suitable for multi-task learning of lesion segmentation, is efficient in low data regimes, and is robust to adversarial attacks. | https://arxiv.org/abs/2204.05276v2 | https://arxiv.org/pdf/2204.05276v2.pdf | null | [
"Julien Schroeter",
"Chelsea Myers-Colet",
"Douglas L Arnold",
"Tal Arbel"
] | [
"Lesion Segmentation",
"Multi-Task Learning"
] | 1,649,635,200,000 | [
{
"code_snippet_url": "https://github.com/UCSC-REAL/HOC",
"description": "",
"full_name": "High-Order Consensuses",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Value Function Estimation",
"parent": null
},
"name": "HOC",
"source_title": "Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels",
"source_url": "https://arxiv.org/abs/2102.05291v2"
}
] | 54,129 |
303,367 | https://paperswithcode.com/paper/object-detection-and-tracking-with-autonomous | 2206.12941 | Object Detection and Tracking with Autonomous UAV | In this paper, a combat Unmanned Air Vehicle (UAV) is modeled in the simulation environment. The rotary wing UAV is successfully performed various tasks such as locking on the targets, tracking, and sharing the relevant data with surrounding vehicles. Different software technologies such as API communication, ground control station configuration, autonomous movement algorithms, computer vision, and deep learning are employed. | https://arxiv.org/abs/2206.12941v1 | https://arxiv.org/pdf/2206.12941v1.pdf | null | [
"A. Huzeyfe Demir",
"Berke Yavas",
"Mehmet Yazici",
"Dogukan Aksu",
"M. Ali Aydin"
] | [
"Object Detection",
"Object Detection"
] | 1,656,201,600,000 | [] | 5,527 |
178,653 | https://paperswithcode.com/paper/itreepack-protein-complex-side-chain-packing | 1504.05467 | iTreePack: Protein Complex Side-Chain Packing by Dual Decomposition | Protein side-chain packing is a critical component in obtaining the 3D coordinates of a structure and drug discovery. Single-domain protein side-chain packing has been thoroughly studied. A major challenge in generalizing these methods to protein complexes is that they, unlike monomers, often have very large treewidth, and thus algorithms such as TreePack cannot be directly applied. To address this issue, SCWRL4 treats the complex effectively as a monomer, heuristically excluding weak interactions to decrease treewidth; as a result, SCWRL4 generates poor packings on protein interfaces. To date, few side-chain packing methods exist that are specifically designed for protein complexes. In this paper, we introduce a method, iTreePack, which solves the side-chain packing problem for complexes by using a novel combination of dual decomposition and tree decomposition. In particular, iTreePack overcomes the problem of large treewidth by decomposing a protein complex into smaller subgraphs and novelly reformulating the complex side-chain packing problem as a dual relaxation problem; this allows us to solve the side-chain packing of each small subgraph separately using tree-decomposition. A projected subgradient algorithm is applied to enforcing the consistency among the side-chain packings of all the small subgraphs. Computational results demonstrate that our iTreePack program outperforms SCWRL4 on protein complexes. In particular, iTreePack places side-chain atoms much more accurately on very large complexes, which constitute a significant portion of protein-protein interactions. Moreover, the advantage of iTreePack over SCWRL4 increases with respect to the treewidth of a complex. Even for monomeric proteins, iTreePack is much more efficient than SCWRL and slightly more accurate. | http://arxiv.org/abs/1504.05467v1 | http://arxiv.org/pdf/1504.05467v1.pdf | null | [] | [
"Drug Discovery"
] | 1,429,574,400,000 | [] | 44,922 |
221,626 | https://paperswithcode.com/paper/deep-fair-discriminative-clustering | 2105.14146 | Deep Fair Discriminative Clustering | Deep clustering has the potential to learn a strong representation and hence better clustering performance compared to traditional clustering methods such as $k$-means and spectral clustering. However, this strong representation learning ability may make the clustering unfair by discovering surrogates for protected information which we empirically show in our experiments. In this work, we study a general notion of group-level fairness for both binary and multi-state protected status variables (PSVs). We begin by formulating the group-level fairness problem as an integer linear programming formulation whose totally unimodular constraint matrix means it can be efficiently solved via linear programming. We then show how to inject this solver into a discriminative deep clustering backbone and hence propose a refinement learning algorithm to combine the clustering goal with the fairness objective to learn fair clusters adaptively. Experimental results on real-world datasets demonstrate that our model consistently outperforms state-of-the-art fair clustering algorithms. Our framework shows promising results for novel clustering tasks including flexible fairness constraints, multi-state PSVs and predictive clustering. | https://arxiv.org/abs/2105.14146v1 | https://arxiv.org/pdf/2105.14146v1.pdf | null | [
"Hongjing Zhang",
"Ian Davidson"
] | [
"Deep Clustering",
"Fairness",
"Representation Learning"
] | 1,622,160,000,000 | [] | 56,203 |
308,243 | https://paperswithcode.com/paper/fldetector-detecting-malicious-clients-in | 2207.09209 | FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients corrupt the global model via sending manipulated model updates to the server. Existing defenses mainly rely on Byzantine-robust FL methods, which aim to learn an accurate global model even if some clients are malicious. However, they can only resist a small number of malicious clients in practice. It is still an open challenge how to defend against model poisoning attacks with a large number of malicious clients. Our FLDetector addresses this challenge via detecting malicious clients. FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients. Our key observation is that, in model poisoning attacks, the model updates from a client in multiple iterations are inconsistent. Therefore, FLDetector detects malicious clients via checking their model-updates consistency. Roughly speaking, the server predicts a client's model update in each iteration based on its historical model updates using the Cauchy mean value theorem and L-BFGS, and flags a client as malicious if the received model update from the client and the predicted model update are inconsistent in multiple iterations. Our extensive experiments on three benchmark datasets show that FLDetector can accurately detect malicious clients in multiple state-of-the-art model poisoning attacks. After removing the detected malicious clients, existing Byzantine-robust FL methods can learn accurate global models.Our code is available at https://github.com/zaixizhang/FLDetector. | https://arxiv.org/abs/2207.09209v3 | https://arxiv.org/pdf/2207.09209v3.pdf | null | [
"Zaixi Zhang",
"Xiaoyu Cao",
"Jinyuan Jia",
"Neil Zhenqiang Gong"
] | [
"Federated Learning",
"Model Poisoning"
] | 1,658,188,800,000 | [] | 109,629 |
300,587 | https://paperswithcode.com/paper/multi-faceted-graph-attention-network-for | 2206.05168 | Multi-faceted Graph Attention Network for Radar Target Recognition in Heterogeneous Radar Network | Radar target recognition (RTR), as a key technology of intelligent radar systems, has been well investigated. Accurate RTR at low signal-to-noise ratios (SNRs) still remains an open challenge. Most existing methods are based on a single radar or the homogeneous radar network, which do not fully exploit frequency-dimensional information. In this paper, a two-stream semantic feature fusion model, termed Multi-faceted Graph Attention Network (MF-GAT), is proposed to greatly improve the accuracy in the low SNR region of the heterogeneous radar network. By fusing the features extracted from the source domain and transform domain via a graph attention network model, the MF-GAT model distills higher-level semantic features before classification in a unified framework. Extensive experiments are presented to demonstrate that the proposed model can greatly improve the RTR performance at low SNRs. | https://arxiv.org/abs/2206.05168v1 | https://arxiv.org/pdf/2206.05168v1.pdf | null | [
"Han Meng",
"Yuexing Peng",
"Wei Xiang",
"Xu Pang",
"Wenbo Wang"
] | [
"Graph Attention"
] | 1,654,819,200,000 | [] | 108,981 |
10,855 | https://paperswithcode.com/paper/a-method-for-restoring-the-training-set | 1802.01435 | A Method for Restoring the Training Set Distribution in an Image Classifier | Convolutional Neural Networks are a well-known staple of modern image classification. However, it can be difficult to assess the quality and robustness of such models. Deep models are known to perform well on a given training and estimation set, but can easily be fooled by data that is specifically generated for the purpose. It has been shown that one can produce an artificial example that does not represent the desired class, but activates the network in the desired way. This paper describes a new way of reconstructing a sample from the training set distribution of an image classifier without deep knowledge about the underlying distribution. This enables access to the elements of images that most influence the decision of a convolutional network and to extract meaningful information about the training distribution. | http://arxiv.org/abs/1802.01435v1 | http://arxiv.org/pdf/1802.01435v1.pdf | null | [
"Alexey Chaplygin",
"Joshua Chacksfield"
] | [
"Classification",
"Image Classification"
] | 1,517,788,800,000 | [] | 160,998 |
72,197 | https://paperswithcode.com/paper/sketch-based-linear-value-function | null | Sketch-Based Linear Value Function Approximation | Hashing is a common method to reduce large, potentially infinite feature vectors to a fixed-size table. In reinforcement learning, hashing is often used in conjunction with tile coding to represent states in continuous spaces. Hashing is also a promising approach to value function approximation in large discrete domains such as Go and Hearts, where feature vectors can be constructed by exhaustively combining a set of atomic features. Unfortunately, the typical use of hashing in value function approximation results in biased value estimates due to the possibility of collisions. Recent work in data stream summaries has led to the development of the tug-of-war sketch, an unbiased estimator for approximating inner products. Our work investigates the application of this new data structure to linear value function approximation. Although in the reinforcement learning setting the use of the tug-of-war sketch leads to biased value estimates, we show that this bias can be orders of magnitude less than that of standard hashing. We provide empirical results on two RL benchmark domains and fifty-five Atari 2600 games to highlight the superior learning performance of tug-of-war hashing. | http://papers.nips.cc/paper/4540-sketch-based-linear-value-function-approximation | http://papers.nips.cc/paper/4540-sketch-based-linear-value-function-approximation.pdf | NeurIPS 2012 12 | [
"Marc Bellemare",
"Joel Veness",
"Michael Bowling"
] | [
"Atari Games",
"reinforcement-learning"
] | 1,354,320,000,000 | [] | 163,640 |
168,173 | https://paperswithcode.com/paper/dipair-fast-and-accurate-distillation-for | 2010.03099 | DiPair: Fast and Accurate Distillation for Trillion-Scale Text Matching and Pair Modeling | Pre-trained models like BERT (Devlin et al., 2018) have dominated NLP / IR applications such as single sentence classification, text pair classification, and question answering. However, deploying these models in real systems is highly non-trivial due to their exorbitant computational costs. A common remedy to this is knowledge distillation (Hinton et al., 2015), leading to faster inference. However -- as we show here -- existing works are not optimized for dealing with pairs (or tuples) of texts. Consequently, they are either not scalable or demonstrate subpar performance. In this work, we propose DiPair -- a novel framework for distilling fast and accurate models on text pair tasks. Coupled with an end-to-end training strategy, DiPair is both highly scalable and offers improved quality-speed tradeoffs. Empirical studies conducted on both academic and real-world e-commerce benchmarks demonstrate the efficacy of the proposed approach with speedups of over 350x and minimal quality drop relative to the cross-attention teacher BERT model. | https://arxiv.org/abs/2010.03099v1 | https://arxiv.org/pdf/2010.03099v1.pdf | Findings of the Association for Computational Linguistics 2020 | [
"Jiecao Chen",
"Liu Yang",
"Karthik Raman",
"Michael Bendersky",
"Jung-Jung Yeh",
"Yun Zhou",
"Marc Najork",
"Danyang Cai",
"Ehsan Emadzadeh"
] | [
"Knowledge Distillation",
"Question Answering",
"Sentence Classification",
"Text Matching"
] | 1,602,028,800,000 | [
{
"code_snippet_url": null,
"description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)",
"full_name": "Knowledge Distillation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Knowledge Distillation",
"parent": null
},
"name": "Knowledge Distillation",
"source_title": "Distilling the Knowledge in a Neural Network",
"source_url": "http://arxiv.org/abs/1503.02531v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
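A short NumPy sketch of the update equations quoted above; the quadratic toy objective, learning rate and step count are illustrative assumptions rather than recommended settings.

```python
# Sketch of one Adam step and a toy optimisation loop on f(w) = ||w||^2 / 2.
import numpy as np

def adam_step(w, g, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * g**2       # second-moment (scaling) estimate
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w = np.array([5.0, -3.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 501):
    g = w                                    # gradient of the toy objective at w
    w, m, v = adam_step(w, g, m, v, t)
# w now sits near the minimiser [0, 0]
```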
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
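A few lines of NumPy showing the class probabilities defined above; the feature dimension and class count are arbitrary.

```python
# Sketch of softmax class probabilities P(y = j | x) for a linear classifier.
import numpy as np

def softmax_probs(x, W):
    """x: (d,), W: (d, K) with columns w_1..w_K -> length-K probability vector."""
    logits = x @ W                   # x^T w_k for each class k
    logits -= logits.max()           # subtract the max for numerical stability
    e = np.exp(logits)
    return e / e.sum()               # probabilities sum to 1

rng = np.random.default_rng(0)
p = softmax_probs(rng.normal(size=5), rng.normal(size=(5, 3)))
```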
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
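A minimal NumPy sketch of the $\mathcal{F}(x) + x$ form described above, using a small two-layer MLP as $\mathcal{F}$; the layer sizes and the ReLU choice are illustrative assumptions.

```python
# Sketch of a residual block: compute F(x) with two dense layers, then add x back.
import numpy as np

def residual_block(x, W1, W2):
    h = np.maximum(0.0, x @ W1)   # first layer + ReLU
    f = h @ W2                    # second layer; output matches the shape of x
    return f + x                  # identity shortcut

rng = np.random.default_rng(0)
x = rng.normal(size=16)
W1 = 0.1 * rng.normal(size=(16, 32))
W2 = 0.1 * rng.normal(size=(32, 16))
y = residual_block(x, W1, W2)     # same shape as x
```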
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
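The exact form and the tanh approximation quoted above can be compared directly in a few lines of NumPy; the grid of test points is arbitrary.

```python
# Sketch comparing the exact GELU x * Phi(x) with its tanh approximation.
import math
import numpy as np

def gelu_exact(x):
    x = np.asarray(x, dtype=float)
    phi = 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))  # Gaussian CDF
    return x * phi

def gelu_tanh(x):
    x = np.asarray(x, dtype=float)
    return 0.5 * x * (1.0 + np.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))

xs = np.linspace(-4.0, 4.0, 81)
print(np.max(np.abs(gelu_exact(xs) - gelu_tanh(xs))))  # small, on the order of 1e-3 or less
```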
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
}
] | 115,131 |
34,958 | https://paperswithcode.com/paper/detecting-engagement-in-egocentric-video | 1604.00906 | Detecting Engagement in Egocentric Video | In a wearable camera video, we see what the camera wearer sees. While this
makes it easy to know roughly what he chose to look at, it does not immediately
reveal when he was engaged with the environment. Specifically, at what moments
did his focus linger, as he paused to gather more information about something
he saw? Knowing this answer would benefit various applications in video
summarization and augmented reality, yet prior work focuses solely on the
"what" question (estimating saliency, gaze) without considering the "when"
(engagement). We propose a learning-based approach that uses long-term
egomotion cues to detect engagement, specifically in browsing scenarios where
one frequently takes in new visual information (e.g., shopping, touring). We
introduce a large, richly annotated dataset for ego-engagement that is the
first of its kind. Our approach outperforms a wide array of existing methods.
We show engagement can be detected well independent of both scene appearance
and the camera wearer's identity. | http://arxiv.org/abs/1604.00906v1 | http://arxiv.org/pdf/1604.00906v1.pdf | null | [
"Yu-Chuan Su",
"Kristen Grauman"
] | [
"Video Summarization"
] | 1,459,728,000,000 | [] | 95,999 |
16,944 | https://paperswithcode.com/paper/eden-evolutionary-deep-networks-for-efficient | 1709.09161 | EDEN: Evolutionary Deep Networks for Efficient Machine Learning | Deep neural networks continue to show improved performance with increasing
depth, an encouraging trend that implies an explosion in the possible
permutations of network architectures and hyperparameters for which there is
little intuitive guidance. To address this increasing complexity, we propose
Evolutionary DEep Networks (EDEN), a computationally efficient
neuro-evolutionary algorithm which interfaces to any deep neural network
platform, such as TensorFlow. We show that EDEN evolves simple yet successful
architectures built from embedding, 1D and 2D convolutional, max pooling and
fully connected layers along with their hyperparameters. Evaluation of EDEN
across seven image and sentiment classification datasets shows that it reliably
finds good networks -- and in three cases achieves state-of-the-art results --
even on a single GPU, in just 6-24 hours. Our study provides a first attempt at
applying neuro-evolution to the creation of 1D convolutional networks for
sentiment analysis including the optimisation of the embedding layer. | http://arxiv.org/abs/1709.09161v1 | http://arxiv.org/pdf/1709.09161v1.pdf | null | [
"Emmanuel Dufourq",
"Bruce A. Bassett"
] | [
"Sentiment Analysis"
] | 1,506,384,000,000 | [
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
}
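A tiny NumPy sketch of the operation described above, for a single-channel map with a 2x2 window and stride 2; the input values are arbitrary.

```python
# Sketch of 2x2 max pooling with stride 2 on a (H, W) feature map.
import numpy as np

def max_pool2d(x, k=2):
    """x: (H, W) with H and W divisible by k -> (H // k, W // k)."""
    H, W = x.shape
    patches = x.reshape(H // k, k, W // k, k)   # group into k x k patches
    return patches.max(axis=(1, 3))             # keep the maximum of each patch

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x))
# [[ 5.  7.]
#  [13. 15.]]
```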
] | 61,919 |
219,256 | https://paperswithcode.com/paper/pay-attention-to-mlps | 2105.08050 | Pay Attention to MLPs | Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute. | https://arxiv.org/abs/2105.08050v2 | https://arxiv.org/pdf/2105.08050v2.pdf | NeurIPS 2021 12 | [
"Hanxiao Liu",
"Zihang Dai",
"David R. So",
"Quoc V. Le"
] | [
"Image Classification",
"Natural Language Inference",
"Question Answering",
"Sentiment Analysis"
] | 1,621,209,600,000 | [
{
"code_snippet_url": "",
"description": "**Spatial Gating Unit**, or **SGU**, is a gating unit used in the [gMLP](https://paperswithcode.com/method/gmlp) architecture to captures spatial interactions. To enable cross-token interactions, it is necessary for the layer $s(\\cdot)$ to contain a contraction operation over the spatial dimension. The layer $s(\\cdot)$ is formulated as the output of linear gating:\r\n\r\n$$\r\ns(Z)=Z \\odot f\\_{W, b}(Z)\r\n$$\r\n\r\nwhere $\\odot$ denotes element-wise multiplication. For training stability, the authors find it critical to initialize $W$ as near-zero values and $b$ as ones, meaning that $f\\_{W, b}(Z) \\approx 1$ and therefore $s(Z) \\approx Z$ at the beginning of training. This initialization ensures each [gMLP](https://paperswithcode.com/method/gmlp) block behaves like a regular [FFN](https://paperswithcode.com/method/gmlp) at the early stage of training, where each token is processed independently, and only gradually injects spatial information across tokens during the course of learning.\r\n\r\nThe authors find it further effective to split $Z$ into two independent parts $\\left(Z\\_{1}, Z\\_{2}\\right)$ along the channel dimension for the gating function and for the multiplicative bypass:\r\n\r\n$$\r\ns(Z)=Z\\_{1} \\odot f\\_{W, b}\\left(Z\\_{2}\\right)\r\n$$\r\n\r\nThey also normalize the input to $f\\_{W, b}$ which empirically improved the stability of large NLP models.",
"full_name": "Spatial Gating Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Spatial Gating Unit",
"source_title": "Pay Attention to MLPs",
"source_url": "https://arxiv.org/abs/2105.08050v2"
},
{
"code_snippet_url": "",
"description": "**gMLP** is an [MLP](https://paperswithcode.com/methods/category/feedforward-networks)-based alternative to [Transformers](https://paperswithcode.com/methods/category/vision-transformer) without [self-attention](https://paperswithcode.com/method/scaled), which simply consists of channel projections and spatial projections with static parameterization. It is built out of basic MLP layers with gating. The model consists of a stack of $L$ blocks with identical size and structure. Let $X \\in \\mathbb{R}^{n \\times d}$ be the token representations with sequence length $n$ and dimension $d$. Each block is defined as:\r\n\r\n$$\r\nZ=\\sigma(X U), \\quad \\tilde{Z}=s(Z), \\quad Y=\\tilde{Z} V\r\n$$\r\n\r\nwhere $\\sigma$ is an activation function such as [GeLU](https://paperswithcode.com/method/gelu). $U$ and $V$ define linear projections along the channel dimension - the same as those in the FFNs of Transformers (e.g., their shapes are $768 \\times 3072$ and $3072 \\times 768$ for $\\text{BERT}_{\\text {base }}$).\r\n\r\nA key ingredient is $s(\\cdot)$, a layer which captures spatial interactions. When $s$ is an identity mapping, the above transformation degenerates to a regular FFN, where individual tokens are processed independently without any cross-token communication. One of the major focuses is therefore to design a good $s$ capable of capturing complex spatial interactions across tokens. This leads to the use of a [Spatial Gating Unit](https://www.paperswithcode.com/method/spatial-gating-unit) which involves a modified linear gating.\r\n\r\nThe overall block layout is inspired by [inverted bottlenecks](https://paperswithcode.com/method/inverted-residual-block), which define $s(\\cdot)$ as a [spatial depthwise convolution](https://paperswithcode.com/method/depthwise-separable-convolution). Note, unlike Transformers, gMLP does not require position embeddings because such information will be captured in $s(\\cdot)$.",
"full_name": "gMLP",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "gMLP",
"source_title": "Pay Attention to MLPs",
"source_url": "https://arxiv.org/abs/2105.08050v2"
},
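A compact NumPy sketch of one gMLP block wired together with the Spatial Gating Unit from the previous record: $Z = \sigma(XU)$, a channel split, a spatial (token-mixing) projection with the near-zero/ones initialisation, and the output projection $Y = \tilde{Z}V$. The concrete sizes are illustrative assumptions, and the input normalisation mentioned above is omitted for brevity.

```python
# Sketch of a gMLP block with a Spatial Gating Unit (token mixing via W_s).
import numpy as np

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def gmlp_block(X, U, V, W_s, b_s):
    """X: (n, d) token representations -> (n, d)."""
    Z = gelu(X @ U)                      # channel projection, (n, d_ff)
    Z1, Z2 = np.split(Z, 2, axis=-1)     # split along the channel dimension
    gate = W_s @ Z2 + b_s[:, None]       # spatial projection f_{W,b}(Z2) across tokens
    Z_tilde = Z1 * gate                  # Spatial Gating Unit: element-wise gating
    return Z_tilde @ V                   # project back to the model dimension

n, d, d_ff = 8, 16, 64
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
U = 0.02 * rng.normal(size=(d, d_ff))
V = 0.02 * rng.normal(size=(d_ff // 2, d))
W_s = np.zeros((n, n))                   # near-zero init, so the gate starts close to b_s = 1
b_s = np.ones(n)
Y = gmlp_block(X, U, V, W_s, b_s)        # (n, d)
```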
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
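A minimal NumPy sketch of the statistics defined above, normalising over the hidden dimension of each example; the learnable gain and bias commonly used in practice are included as an assumption.

```python
# Sketch of layer normalization over the last (hidden-unit) dimension.
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    mu = a.mean(axis=-1, keepdims=True)                       # per-example mean over H units
    sigma = np.sqrt(((a - mu) ** 2).mean(axis=-1, keepdims=True))
    return gain * (a - mu) / (sigma + eps) + bias

rng = np.random.default_rng(0)
a = rng.normal(loc=3.0, scale=2.0, size=(4, 10))              # 4 examples, H = 10
h = layer_norm(a, gain=np.ones(10), bias=np.zeros(10))
# each row of h now has roughly zero mean and unit variance
```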
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
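The schedule described above fits in a few lines; the peak learning rate and step counts below are illustrative assumptions.

```python
# Sketch of a linear-warmup / linear-decay learning rate schedule.
def linear_warmup_linear_decay(step, warmup_steps, total_steps, peak_lr):
    if step < warmup_steps:                                  # warmup phase
        return peak_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)                   # decay phase
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

lrs = [linear_warmup_linear_decay(s, warmup_steps=10, total_steps=100, peak_lr=1e-3)
       for s in range(101)]
# rises linearly to 1e-3 over the first 10 steps, then falls linearly to 0 by step 100
```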
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
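A compact NumPy sketch of the head-wise projections, per-head scaled dot-product attention, concatenation and output projection described above; the head count and dimensions are illustrative assumptions.

```python
# Sketch of multi-head (self-)attention with h heads of size d_model / h.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo, h):
    d_k = Q.shape[-1] // h
    heads = []
    for i in range(h):
        sl = slice(i * d_k, (i + 1) * d_k)
        q, k, v = Q @ Wq[:, sl], K @ Wk[:, sl], V @ Wv[:, sl]   # per-head projections
        heads.append(softmax(q @ k.T / np.sqrt(d_k)) @ v)       # scaled dot-product attention
    return np.concatenate(heads, axis=-1) @ Wo                  # concatenate, then project

rng = np.random.default_rng(0)
n, d_model, h = 6, 32, 4
X = rng.normal(size=(n, d_model))
Wq, Wk, Wv, Wo = (0.1 * rng.normal(size=(d_model, d_model)) for _ in range(4))
Y = multi_head_attention(X, X, X, Wq, Wk, Wv, Wo, h)            # (n, d_model)
```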
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
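A few lines of NumPy contrasting the two formulations mentioned above, the $L_2$ penalty in the objective versus decay written directly into the update rule; the toy gradient and coefficients are illustrative assumptions.

```python
# Sketch: L2 penalty in the loss vs. weight decay in the SGD update rule.
import numpy as np

def sgd_step_l2(w, grad_loss, lr=0.1, lam=0.01):
    return w - lr * (grad_loss + 2 * lam * w)     # gradient of lambda * w^T w is 2*lambda*w

def sgd_step_weight_decay(w, grad_loss, lr=0.1, decay=0.02):
    return (1 - lr * decay) * w - lr * grad_loss  # shrink the weights, then apply the gradient

w = np.array([1.0, -2.0])
g = np.array([0.3, 0.1])                          # pretend gradient of the primary loss
print(sgd_step_l2(w, g))                          # equal to the line below when decay = 2 * lam
print(sgd_step_weight_decay(w, g))
```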
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
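A minimal NumPy sketch of the train/test behaviour described above: units are dropped with probability $p$ during training and activations are rescaled at test time. (The widely used "inverted" variant rescales by $1/(1-p)$ during training instead; it is not shown here.)

```python
# Sketch of dropout: random masking at training time, rescaling at test time.
import numpy as np

def dropout_train(h, p, rng):
    mask = rng.random(h.shape) >= p    # keep each unit with probability 1 - p
    return h * mask

def dropout_test(h, p):
    return h * (1.0 - p)               # all units present, scaled by the keep probability

rng = np.random.default_rng(0)
h = rng.normal(size=8)
print(dropout_train(h, p=0.5, rng=rng))   # roughly half of the units zeroed
print(dropout_test(h, p=0.5))             # every unit kept, scaled by 0.5
```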
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
}
] | 69,185 |
248,425 | https://paperswithcode.com/paper/exact-and-bounded-collision-probability-for | 2110.06348 | Exact and Bounded Collision Probability for Motion Planning under Gaussian Uncertainty | Computing collision-free trajectories is of prime importance for safe navigation. We present an approach for computing the collision probability under Gaussian distributed motion and sensing uncertainty with the robot and static obstacle shapes approximated as ellipsoids. The collision condition is formulated as the distance between ellipsoids and unlike previous approaches we provide a method for computing the exact collision probability. Furthermore, we provide a tight upper bound that can be computed much faster during online planning. Comparison to other state-of-the-art methods is also provided. The proposed method is evaluated in simulation under varying configuration and number of obstacles. | https://arxiv.org/abs/2110.06348v1 | https://arxiv.org/pdf/2110.06348v1.pdf | null | [
"Antony Thomas",
"Fulvio Mastrogiovanni",
"Marco Baglietto"
] | [
"Motion Planning"
] | 1,633,996,800,000 | [] | 84,651 |
267,257 | https://paperswithcode.com/paper/kge-cl-contrastive-learning-of-knowledge | 2112.04871 | KGE-CL: Contrastive Learning of Knowledge Graph Embeddings | Learning the embeddings of knowledge graphs is vital in artificial intelligence, and can benefit various downstream applications, such as recommendation and question answering. In recent years, many research efforts have been proposed for knowledge graph embedding. However, most previous knowledge graph embedding methods ignore the semantic similarity between the related entities and entity-relation couples in different triples since they separately optimize each triple with the scoring function. To address this problem, we propose a simple yet efficient contrastive learning framework for knowledge graph embeddings, which can shorten the semantic distance of the related entities and entity-relation couples in different triples and thus improve the expressiveness of knowledge graph embeddings. We evaluate our proposed method on three standard knowledge graph benchmarks. It is noteworthy that our method can yield some new state-of-the-art results, achieving 51.2% MRR, 46.8% Hits@1 on the WN18RR dataset, and 59.1% MRR, 51.8% Hits@1 on the YAGO3-10 dataset. | https://arxiv.org/abs/2112.04871v1 | https://arxiv.org/pdf/2112.04871v1.pdf | null | [
"Wentao Xu",
"Zhiping Luo",
"Weiqing Liu",
"Jiang Bian",
"Jian Yin",
"Tie-Yan Liu"
] | [
"Contrastive Learning",
"Graph Embedding",
"Knowledge Graph Embedding",
"Knowledge Graph Embeddings",
"Knowledge Graphs",
"Question Answering",
"Semantic Similarity",
"Semantic Textual Similarity"
] | 1,639,008,000,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] | 100,793 |
223,005 | https://paperswithcode.com/paper/oriented-object-detection-with-transformer | 2106.03146 | Oriented Object Detection with Transformer | Object detection with Transformers (DETR) has achieved a competitive performance over traditional detectors, such as Faster R-CNN. However, the potential of DETR remains largely unexplored for the more challenging task of arbitrary-oriented object detection problem. We provide the first attempt and implement Oriented Object DEtection with TRansformer ($\bf O^2DETR$) based on an end-to-end network. The contributions of $\rm O^2DETR$ include: 1) we provide a new insight into oriented object detection, by applying Transformer to directly and efficiently localize objects without a tedious process of rotated anchors as in conventional detectors; 2) we design a simple but highly efficient encoder for Transformer by replacing the attention mechanism with depthwise separable convolution, which can significantly reduce the memory and computational cost of using multi-scale features in the original Transformer; 3) our $\rm O^2DETR$ can be another new benchmark in the field of oriented object detection, which achieves up to 3.85 mAP improvement over Faster R-CNN and RetinaNet. We simply fine-tune the head mounted on $\rm O^2DETR$ in a cascaded architecture and achieve a competitive performance over SOTA in the DOTA dataset. | https://arxiv.org/abs/2106.03146v1 | https://arxiv.org/pdf/2106.03146v1.pdf | null | [
"Teli Ma",
"Mingyuan Mao",
"Honghui Zheng",
"Peng Gao",
"Xiaodi Wang",
"Shumin Han",
"Errui Ding",
"Baochang Zhang",
"David Doermann"
] | [
"Object Detection",
"Object Detection"
] | 1,622,937,600,000 | [
{
"code_snippet_url": "https://github.com/facebookresearch/Detectron/blob/8170b25b425967f8f1c7d715bea3c5b8d9536cd8/detectron/modeling/FPN.py#L117",
"description": "A **Feature Pyramid Network**, or **FPN**, is a feature extractor that takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps at multiple levels, in a fully convolutional fashion. This process is independent of the backbone convolutional architectures. It therefore acts as a generic solution for building feature pyramids inside deep convolutional networks to be used in tasks like object detection.\r\n\r\nThe construction of the pyramid involves a bottom-up pathway and a top-down pathway.\r\n\r\nThe bottom-up pathway is the feedforward computation of the backbone ConvNet, which computes a feature hierarchy consisting of feature maps at several scales with a scaling step of 2. For the feature\r\npyramid, one pyramid level is defined for each stage. The output of the last layer of each stage is used as a reference set of feature maps. For [ResNets](https://paperswithcode.com/method/resnet) we use the feature activations output by each stage’s last [residual block](https://paperswithcode.com/method/residual-block). \r\n\r\nThe top-down pathway hallucinates higher resolution features by upsampling spatially coarser, but semantically stronger, feature maps from higher pyramid levels. These features are then enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges feature maps of the same spatial size from the bottom-up pathway and the top-down pathway. The bottom-up feature map is of lower-level semantics, but its activations are more accurately localized as it was subsampled fewer times.",
"full_name": "Feature Pyramid Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Feature Extractors** for object detection are modules used to construct features that can be used for detecting objects. They address issues such as the need to detect multiple-sized objects in an image (and the need to have representations that are suitable for the different scales).",
"name": "Feature Extractors",
"parent": null
},
"name": "FPN",
"source_title": "Feature Pyramid Networks for Object Detection",
"source_url": "http://arxiv.org/abs/1612.03144v2"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
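The sine/cosine table defined above can be built directly; the sequence length and model dimension below are illustrative assumptions.

```python
# Sketch of sinusoidal absolute position encodings, shape (n_positions, d_model).
import numpy as np

def positional_encoding(n_positions, d_model):
    pos = np.arange(n_positions)[:, None]             # positions 0 .. n-1
    i = np.arange(d_model // 2)[None, :]               # dimension index i
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                       # odd dimensions: cosine
    return pe

pe = positional_encoding(n_positions=50, d_model=16)   # added to the input embeddings
```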
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://www.healthnutra.org/es/maxup/",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
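The target replacement described above is a one-liner over one-hot labels; the class count and $\epsilon$ are illustrative assumptions.

```python
# Sketch of label smoothing: 1 - eps on the true class, eps / (k - 1) elsewhere.
import numpy as np

def smooth_labels(y, k, eps=0.1):
    """y: (N,) integer class indices, k: number of classes -> (N, k) soft targets."""
    targets = np.full((len(y), k), eps / (k - 1))
    targets[np.arange(len(y)), y] = 1.0 - eps
    return targets

print(smooth_labels(np.array([2, 0]), k=4, eps=0.1))
# rows sum to 1, with 0.9 on the labelled class and 0.1 / 3 on each of the others
```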
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/clcarwin/focal_loss_pytorch/blob/e11e75bad957aecf641db6998a1016204722c1bb/focalloss.py#L6",
"description": "A **Focal Loss** function addresses class imbalance during training in tasks like object detection. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard misclassified examples. It is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. \r\n\r\nFormally, the Focal Loss adds a factor $(1 - p\\_{t})^\\gamma$ to the standard cross entropy criterion. Setting $\\gamma>0$ reduces the relative loss for well-classified examples ($p\\_{t}>.5$), putting more focus on hard, misclassified examples. Here there is tunable *focusing* parameter $\\gamma \\ge 0$. \r\n\r\n$$ {\\text{FL}(p\\_{t}) = - (1 - p\\_{t})^\\gamma \\log\\left(p\\_{t}\\right)} $$",
"full_name": "Focal Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Focal Loss",
"source_title": "Focal Loss for Dense Object Detection",
"source_url": "http://arxiv.org/abs/1708.02002v2"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22",
"description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.",
"full_name": "Faster R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Faster R-CNN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": "https://github.com/facebookresearch/Detectron/blob/8170b25b425967f8f1c7d715bea3c5b8d9536cd8/detectron/modeling/retinanet_heads.py",
"description": "**RetinaNet** is a one-stage object detection model that utilizes a [focal loss](https://paperswithcode.com/method/focal-loss) function to address class imbalance during training. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. RetinaNet is a single, unified network composed of a *backbone* network and two task-specific *subnetworks*. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-self convolutional network. The first subnet performs convolutional object classification on the backbone's output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that the authors propose specifically for one-stage, dense detection. \r\n\r\nWe can see the motivation for focal loss by comparing with two-stage object detectors. Here class imbalance is addressed by a two-stage cascade and sampling heuristics. The proposal stage (e.g., [Selective Search](https://paperswithcode.com/method/selective-search), [EdgeBoxes](https://paperswithcode.com/method/edgeboxes), [DeepMask](https://paperswithcode.com/method/deepmask), [RPN](https://paperswithcode.com/method/rpn)) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio, or online hard example mining ([OHEM](https://paperswithcode.com/method/ohem)), are performed to maintain a\r\nmanageable balance between foreground and background.\r\n\r\nIn contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. To tackle this, RetinaNet uses a focal loss function, a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. \r\n\r\nFormally, the Focal Loss adds a factor $(1 - p\\_{t})^\\gamma$ to the standard cross entropy criterion. Setting $\\gamma>0$ reduces the relative loss for well-classified examples ($p\\_{t}>.5$), putting more focus on hard, misclassified examples. Here there is tunable *focusing* parameter $\\gamma \\ge 0$. \r\n\r\n$$ {\\text{FL}(p\\_{t}) = - (1 - p\\_{t})^\\gamma \\log\\left(p\\_{t}\\right)} $$",
"full_name": "RetinaNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "RetinaNet",
"source_title": "Focal Loss for Dense Object Detection",
"source_url": "http://arxiv.org/abs/1708.02002v2"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "A **Feedforward Network**, or a **Multilayer Perceptron (MLP)**, is a neural network with solely densely connected layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through units $h$ (of which there can be many layers) to predict a target $y$. Activation functions are generally chosen to be non-linear to allow for flexible functional approximation.\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Feedforward Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Feedforward Network",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Detr**, or **Detection Transformer**, is a set-based object detector using a [Transformer](https://paperswithcode.com/method/transformer) on top of a convolutional backbone. It uses a conventional CNN backbone to learn a 2D representation of an input image. The model flattens it and supplements it with a positional encoding before passing it into a transformer encoder. A transformer decoder then takes as input a small fixed number of learned positional embeddings, which we call object queries, and additionally attends to the encoder output. We pass each output embedding of the decoder to a shared feed forward network (FFN) that predicts either a detection (class\r\nand bounding box) or a “no object” class.",
"full_name": "Detection Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Detr",
"source_title": "End-to-End Object Detection with Transformers",
"source_url": "https://arxiv.org/abs/2005.12872v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
}
] | 19,912 |
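A minimal sketch of the label smoothing targets and focal loss formulas quoted in the method entries above. The function names, the NumPy implementation, and the default values of `epsilon` and `gamma` are illustrative assumptions, not any library's API.

```python
# Hedged sketch: label-smoothed targets and focal loss, following the formulas
# quoted in the Label Smoothing and Focal Loss entries above. Names are illustrative.
import numpy as np

def smooth_targets(labels, num_classes, epsilon=0.1):
    """Replace hard 0/1 targets with epsilon/(k-1) off-target and 1-epsilon on-target."""
    targets = np.full((len(labels), num_classes), epsilon / (num_classes - 1))
    targets[np.arange(len(labels)), labels] = 1.0 - epsilon
    return targets

def focal_loss(p_t, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t), with p_t the probability of the true class."""
    p_t = np.clip(p_t, 1e-12, 1.0)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

if __name__ == "__main__":
    print(smooth_targets(np.array([2, 0]), num_classes=4, epsilon=0.1))
    print(focal_loss(np.array([0.9, 0.3])))  # the easy (high-confidence) example is down-weighted
```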
274,499 | https://paperswithcode.com/paper/tensor-recovery-based-on-tensor-equivalent | 2201.12709 | Low-Rank Tensor Completion Based on Bivariate Equivalent Minimax-Concave Penalty | Low-rank tensor completion (LRTC) is an important problem in computer vision and machine learning. The minimax-concave penalty (MCP) function as a non-convex relaxation has achieved good results in the LRTC problem. To make all the constant parameters of the MCP function variables, thereby further improving the adaptability to changes in singular values in the LRTC problem, we propose the bivariate equivalent minimax-concave penalty (BEMCP) theorem. Applying the BEMCP theorem to tensor singular values leads to the bivariate equivalent weighted tensor $\Gamma$-norm (BEWTGN) theorem, and we analyze and discuss its corresponding properties. Besides, to facilitate the solution of the LRTC problem, we give the proximal operators of the BEMCP theorem and BEWTGN. Meanwhile, we propose a BEMCP model for the LRTC problem, which is solved using the alternating direction method of multipliers (ADMM). Finally, the proposed method is applied to the restoration of real-world multispectral image (MSI), magnetic resonance imaging (MRI) and color video (CV) data, and the experimental results demonstrate that it outperforms the state-of-the-art methods. | https://arxiv.org/abs/2201.12709v3 | https://arxiv.org/pdf/2201.12709v3.pdf | null | [
"HongBing Zhang",
"Xinyi Liu",
"HongTao Fan",
"YaJing Li",
"Yinlin Ye"
] | [
"Denoising"
] | 1,643,500,800,000 | [] | 88,475 |
120,295 | https://paperswithcode.com/paper/direct-estimation-of-differential-functional | 1910.09701 | Direct Estimation of Differential Functional Graphical Models | We consider the problem of estimating the difference between two functional undirected graphical models with shared structures. In many applications, data are naturally regarded as high-dimensional random function vectors rather than multivariate scalars. For example, electroencephalography (EEG) data are more appropriately treated as functions of time. In these problems, not only can the number of functions measured per sample be large, but each function is itself an infinite dimensional object, making estimation of model parameters challenging. We develop a method that directly estimates the difference of graphs, avoiding separate estimation of each graph, and show it is consistent in certain high-dimensional settings. We illustrate finite sample properties of our method through simulation studies. Finally, we apply our method to EEG data to uncover differences in functional brain connectivity between alcoholics and control subjects. | https://arxiv.org/abs/1910.09701v2 | https://arxiv.org/pdf/1910.09701v2.pdf | NeurIPS 2019 12 | [
"Boxin Zhao",
"Y. Samuel Wang",
"Mladen Kolar"
] | [
"EEG"
] | 1,571,702,400,000 | [] | 151,189 |
151,751 | https://paperswithcode.com/paper/the-unimelb-submission-to-the-sigmorphon-2020 | null | The UniMelb Submission to the SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection | The paper describes the University of Melbourne's submission to the SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection. Our team submitted three systems in total, two neural and one non-neural. Our analysis of the systems' performance shows positive effects of the newly introduced data hallucination technique that we employed in one of the neural systems, especially in low-resource scenarios. A non-neural system based on observed inflection patterns shows optimistic results even in its simple implementation (>75% accuracy for 50% of languages). With possible improvement within the same modeling principle, accuracy might grow to values above 90%. | https://aclanthology.org/2020.sigmorphon-1.20 | https://aclanthology.org/2020.sigmorphon-1.20.pdf | WS 2020 7 | [
"Andreas Scherbakov"
] | [
"Morphological Inflection"
] | 1,593,561,600,000 | [] | 83,497 |
125,390 | https://paperswithcode.com/paper/controllable-list-wise-ranking-for-universal | 1911.10566 | Controllable List-wise Ranking for Universal No-reference Image Quality Assessment | No-reference image quality assessment (NR-IQA) has received increasing attention in the IQA community since reference image is not always available. Real-world images generally suffer from various types of distortion. Unfortunately, existing NR-IQA methods do not work with all types of distortion. It is a challenging task to develop universal NR-IQA that has the ability of evaluating all types of distorted images. In this paper, we propose a universal NR-IQA method based on controllable list-wise ranking (CLRIQA). First, to extend the authentically distorted image dataset, we present an imaging-heuristic approach, in which the over-underexposure is formulated as an inverse of Weber-Fechner law, and fusion strategy and probabilistic compression are adopted, to generate the degraded real-world images. These degraded images are label-free yet associated with quality ranking information. We then design a controllable list-wise ranking function by limiting rank range and introducing an adaptive margin to tune rank interval. Finally, the extended dataset and controllable list-wise ranking function are used to pre-train a CNN. Moreover, in order to obtain an accurate prediction model, we take advantage of the original dataset to further fine-tune the pre-trained network. Experiments evaluated on four benchmark datasets (i.e. LIVE, CSIQ, TID2013, and LIVE-C) show that the proposed CLRIQA improves the state of the art by over 9% in terms of overall performance. The code and model are publicly available at https://github.com/GZHU-Image-Lab/CLRIQA. | https://arxiv.org/abs/1911.10566v2 | https://arxiv.org/pdf/1911.10566v2.pdf | null | [
"Fu-Zhao Ou",
"Yuan-Gen Wang",
"Jin Li",
"Guopu Zhu",
"Sam Kwong"
] | [
"Image Quality Assessment",
"No-Reference Image Quality Assessment"
] | 1,574,553,600,000 | [] | 45,377 |
51,226 | https://paperswithcode.com/paper/cail2018-a-large-scale-legal-dataset-for | 1807.02478 | CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction | In this paper, we introduce the \textbf{C}hinese \textbf{AI} and \textbf{L}aw
challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for
judgment prediction. CAIL2018 contains more than $2.6$ million criminal cases
published by the Supreme People's Court of China, which are several times
larger than other datasets in existing works on judgment prediction. Moreover,
the annotations of judgment results are more detailed and rich. It consists of
applicable law articles, charges, and prison terms, which are expected to be
inferred according to the fact descriptions of cases. For comparison, we
implement several conventional text classification baselines for judgment
prediction and experimental results show that it is still a challenge for
current models to predict the judgment results of legal cases, especially on
prison terms. To help the researchers make improvements on legal judgment
prediction, both CAIL2018 and baselines will be released after the CAIL
competition (http://cail.cipsc.org.cn/). | http://arxiv.org/abs/1807.02478v1 | http://arxiv.org/pdf/1807.02478v1.pdf | null | [
"Chaojun Xiao",
"Haoxi Zhong",
"Zhipeng Guo",
"Cunchao Tu",
"Zhiyuan Liu",
"Maosong Sun",
"Yansong Feng",
"Xianpei Han",
"Zhen Hu",
"Heng Wang",
"Jianfeng Xu"
] | [
"Text Classification",
"Text Classification"
] | 1,530,662,400,000 | [] | 131,560 |
133,791 | https://paperswithcode.com/paper/analytic-marching-an-analytic-meshing | 2002.06597 | Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks | This paper studies a problem of learning surface mesh via implicit functions in an emerging field of deep learning surface reconstruction, where implicit functions are popularly implemented as multi-layer perceptrons (MLPs) with rectified linear units (ReLU). To achieve meshing from learned implicit functions, existing methods adopt the de-facto standard algorithm of marching cubes; while promising, they suffer from loss of precision learned in the MLPs, due to the discretization nature of marching cubes. Motivated by the knowledge that a ReLU based MLP partitions its input space into a number of linear regions, we identify from these regions analytic cells and analytic faces that are associated with zero-level isosurface of the implicit function, and characterize the theoretical conditions under which the identified analytic faces are guaranteed to connect and form a closed, piecewise planar surface. Based on our theorem, we propose a naturally parallelizable algorithm of analytic marching, which marches among analytic cells to exactly recover the mesh captured by a learned MLP. Experiments on deep learning mesh reconstruction verify the advantages of our algorithm over existing ones. | https://arxiv.org/abs/2002.06597v1 | https://arxiv.org/pdf/2002.06597v1.pdf | ICML 2020 1 | [
"Jiabao Lei",
"Kui Jia"
] | [
"Surface Reconstruction"
] | 1,581,811,200,000 | [
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] | 168,938 |
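A small illustrative sketch of the ReLU MLP behaviour referenced in the method entry above and in the Analytic Marching abstract: each input falls into a linear region identified by which ReLUs are active. The random weights and the tiny network sizes are placeholder assumptions, not the paper's trained model.

```python
# Hedged sketch: forward pass of a tiny ReLU MLP that also records the activation
# pattern, i.e. the sign pattern identifying the input's linear region.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # 3-D input -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)   # 4 hidden units -> scalar output

def relu(x):
    return np.maximum(0.0, x)

def forward(x):
    h = relu(W1 @ x + b1)
    pattern = tuple((h > 0).astype(int))  # which ReLUs are active for this input
    return (W2 @ h + b2)[0], pattern

value, region = forward(np.array([0.2, -0.5, 1.0]))
print(value, region)
```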
706 | https://paperswithcode.com/paper/embedding-text-in-hyperbolic-spaces | 1806.04313 | Embedding Text in Hyperbolic Spaces | Natural language text exhibits hierarchical structure in a variety of
respects. Ideally, we could incorporate our prior knowledge of this
hierarchical structure into unsupervised learning algorithms that work on text
data. Recent work by Nickel & Kiela (2017) proposed using hyperbolic instead of
Euclidean embedding spaces to represent hierarchical data and demonstrated
encouraging results when embedding graphs. In this work, we extend their method
with a re-parameterization technique that allows us to learn hyperbolic
embeddings of arbitrarily parameterized objects. We apply this framework to
learn word and sentence embeddings in hyperbolic space in an unsupervised
manner from text corpora. The resulting embeddings seem to encode certain
intuitive notions of hierarchy, such as word-context frequency and phrase
constituency. However, the implicit continuous hierarchy in the learned
hyperbolic space makes interrogating the model's learned hierarchies more
difficult than for models that learn explicit edges between items. The learned
hyperbolic embeddings show improvements over Euclidean embeddings in some --
but not all -- downstream tasks, suggesting that hierarchical organization is
more useful for some tasks than others. | http://arxiv.org/abs/1806.04313v1 | http://arxiv.org/pdf/1806.04313v1.pdf | WS 2018 6 | [
"Bhuwan Dhingra",
"Christopher J. Shallue",
"Mohammad Norouzi",
"Andrew M. Dai",
"George E. Dahl"
] | [
"Sentence Embedding"
] | 1,528,761,600,000 | [] | 15,895 |
282,961 | https://paperswithcode.com/paper/coda-a-real-world-road-corner-case-dataset | 2203.07724 | CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving | Contemporary deep-learning object detection methods for autonomous driving usually assume prefixed categories of common traffic participants, such as pedestrians and cars. Most existing detectors are unable to detect uncommon objects and corner cases (e.g., a dog crossing a street), which may lead to severe accidents in some situations, making the timeline for the real-world application of reliable autonomous driving uncertain. One main reason that impedes the development of truly reliably self-driving systems is the lack of public datasets for evaluating the performance of object detectors on corner cases. Hence, we introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors. The dataset consists of 1500 carefully selected real-world driving scenes, each containing four object-level corner cases (on average), spanning more than 30 object categories. On CODA, the performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR. Moreover, we experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA, suggesting that a robust perception system for autonomous driving is probably still far from reach. We expect our CODA dataset to facilitate further research in reliable detection for real-world autonomous driving. Our dataset will be released at https://coda-dataset.github.io. | https://arxiv.org/abs/2203.07724v3 | https://arxiv.org/pdf/2203.07724v3.pdf | null | [
"Kaican Li",
"Kai Chen",
"Haoyu Wang",
"Lanqing Hong",
"Chaoqiang Ye",
"Jianhua Han",
"Yukuai Chen",
"Wei zhang",
"Chunjing Xu",
"Dit-yan Yeung",
"Xiaodan Liang",
"Zhenguo Li",
"Hang Xu"
] | [
"Autonomous Driving",
"Object Detection",
"Object Detection"
] | 1,647,302,400,000 | [] | 139,101 |
125,295 | https://paperswithcode.com/paper/architectural-configurations-atlas | 1911.11024 | Architectural configurations, atlas granularity and functional connectivity with diagnostic value in Autism Spectrum Disorder | Currently, the diagnosis of Autism Spectrum Disorder (ASD) is dependent upon a subjective, time-consuming evaluation of behavioral tests by an expert clinician. Non-invasive functional MRI (fMRI) characterizes brain connectivity and may be used to inform diagnoses and democratize medicine. However, successful construction of deep learning models from fMRI requires addressing key choices about the model's architecture, including the number of layers and number of neurons per layer. Meanwhile, deriving functional connectivity (FC) features from fMRI requires choosing an atlas with an appropriate level of granularity. Once a model has been built, it is vital to determine which features are predictive of ASD and if similar features are learned across atlas granularity levels. To identify aptly suited architectural configurations, probability distributions of the configurations of high versus low performing models are compared. To determine the effect of atlas granularity, connectivity features are derived from atlases with 3 levels of granularity and important features are ranked with permutation feature importance. Results show the highest performing models use between 2-4 hidden layers and 16-64 neurons per layer, granularity dependent. Connectivity features identified as important across all 3 atlas granularity levels include FC to the supplementary motor gyrus and language association cortex, regions associated with deficits in social and sensory processing in ASD. Importantly, the cerebellum, often not included in functional analyses, is also identified as a region whose abnormal connectivity is highly predictive of ASD. Results of this study identify important regions to include in future studies of ASD, help assist in the selection of network architectures, and help identify appropriate levels of granularity to facilitate the development of accurate diagnostic models of ASD. | https://arxiv.org/abs/1911.11024v2 | https://arxiv.org/pdf/1911.11024v2.pdf | null | [
"Cooper J. Mellema",
"Alex Treacher",
"Kevin P. Nguyen",
"Albert Montillo"
] | [
"Feature Importance"
] | 1,574,640,000,000 | [] | 9,620 |
80,759 | https://paperswithcode.com/paper/privacy-preserving-off-policy-evaluation | 1902.00174 | Privacy Preserving Off-Policy Evaluation | Many reinforcement learning applications involve the use of data that is
sensitive, such as medical records of patients or financial information.
However, most current reinforcement learning methods can leak information
contained within the (possibly sensitive) data on which they are trained. To
address this problem, we present the first differentially private approach for
off-policy evaluation. We provide a theoretical analysis of the
privacy-preserving properties of our algorithm and analyze its utility (speed
of convergence). After describing some results of this theoretical analysis, we
show empirically that our method outperforms previous methods (which are
restricted to the on-policy setting). | http://arxiv.org/abs/1902.00174v1 | http://arxiv.org/pdf/1902.00174v1.pdf | null | [
"Tengyang Xie",
"Philip S. Thomas",
"Gerome Miklau"
] | [
"Privacy Preserving",
"reinforcement-learning"
] | 1,548,979,200,000 | [] | 181,357 |
14,000 | https://paperswithcode.com/paper/time-contrastive-learning-based-dnn | 1704.02373 | Time-Contrastive Learning Based DNN Bottleneck Features for Text-Dependent Speaker Verification | In this paper, we present a time-contrastive learning (TCL) based bottleneck (BN) feature extraction method for speech signals with an application to text-dependent (TD) speaker verification (SV). It is well-known that speech signals exhibit quasi-stationary behavior in and only in a short interval, and the TCL method aims to exploit this temporal structure. More specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting speech signals, in contrast to existing DNN based BN feature extraction methods that train DNNs using labeled data to discriminate speakers or pass-phrases or phones or a combination of them. In the context of speaker verification, speech data of fixed pass-phrases are used for TCL-BN training, while the pass-phrases used for TCL-BN training are excluded from being used for SV, so that the learned features can be considered generic. The method is evaluated on the RedDots Challenge 2016 database. Experimental results show that TCL-BN is superior to the existing speaker and pass-phrase discriminant BN features and the Mel-frequency cepstral coefficient feature for text-dependent speaker verification. | https://arxiv.org/abs/1704.02373v3 | https://arxiv.org/pdf/1704.02373v3.pdf | null | [
"Achintya Kr. Sarkar",
"Zheng-Hua Tan"
] | [
"Contrastive Learning",
"Speaker Verification",
"Text-Dependent Speaker Verification"
] | 1,491,436,800,000 | [] | 158,838 |
139,747 | https://paperswithcode.com/paper/speaker-change-aware-crf-for-dialogue-act | 2004.02913 | Speaker-change Aware CRF for Dialogue Act Classification | Recent work in Dialogue Act (DA) classification approaches the task as a sequence labeling problem, using neural network models coupled with a Conditional Random Field (CRF) as the last layer. CRF models the conditional probability of the target DA label sequence given the input utterance sequence. However, the task involves another important input sequence, that of speakers, which is ignored by previous work. To address this limitation, this paper proposes a simple modification of the CRF layer that takes speaker-change into account. Experiments on the SwDA corpus show that our modified CRF layer outperforms the original one, with very wide margins for some DA labels. Further, visualizations demonstrate that our CRF layer can learn meaningful, sophisticated transition patterns between DA label pairs conditioned on speaker-change in an end-to-end way. Code is publicly available. | https://arxiv.org/abs/2004.02913v2 | https://arxiv.org/pdf/2004.02913v2.pdf | COLING 2020 8 | [
"Guokan Shang",
"Antoine Jean-Pierre Tixier",
"Michalis Vazirgiannis",
"Jean-Pierre Lorré"
] | [
"Classification",
"Dialogue Act Classification",
"Classification"
] | 1,586,131,200,000 | [
{
"code_snippet_url": null,
"description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linear chain CRFs are popular in natural language processing, whereas in image-based tasks, the graph would connect to neighboring locations in an image to enforce that they have similar predictions.\r\n\r\nImage Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)",
"full_name": "Conditional Random Field",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Structured Prediction** methods deal with structured outputs with multiple interdependent outputs. Below you can find a continuously updating list of structured prediction methods.",
"name": "Structured Prediction",
"parent": null
},
"name": "CRF",
"source_title": null,
"source_url": null
}
] | 131,539 |
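A generic, simplified sketch of Viterbi decoding for a linear-chain CRF of the kind used as the last layer in the dialogue-act paper above. This is not the paper's speaker-change-aware layer; the emission and transition scores below are arbitrary toy values.

```python
# Hedged sketch: Viterbi decoding for a plain linear-chain CRF with fixed scores.
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, K) per-step label scores; transitions: (K, K) score of label k -> k'."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # candidate score of the best path ending in each (previous, current) label pair
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

emissions = np.array([[2.0, 0.1], [0.3, 1.5], [1.2, 0.8]])
transitions = np.array([[0.5, -0.5], [-0.3, 0.4]])
print(viterbi(emissions, transitions))  # highest-scoring label sequence under these toy scores
```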
241,314 | https://paperswithcode.com/paper/parallel-constraint-driven-inductive-logic | 2109.07132 | Parallel Constraint-Driven Inductive Logic Programming | Multi-core machines are ubiquitous. However, most inductive logic programming (ILP) approaches use only a single core, which severely limits their scalability. To address this limitation, we introduce parallel techniques based on constraint-driven ILP where the goal is to accumulate constraints to restrict the hypothesis space. Our experiments on two domains (program synthesis and inductive general game playing) show that (i) parallelisation can substantially reduce learning times, and (ii) worker communication (i.e. sharing constraints) is important for good performance. | https://arxiv.org/abs/2109.07132v1 | https://arxiv.org/pdf/2109.07132v1.pdf | null | [
"Andrew Cropper",
"Oghenejokpeme Orhobor",
"Cristian Dinu",
"Rolf Morel"
] | [
"Inductive logic programming",
"Program Synthesis"
] | 1,631,664,000,000 | [] | 32,199 |
751 | https://paperswithcode.com/paper/swarming-for-faster-convergence-in-stochastic | 1806.04207 | Swarming for Faster Convergence in Stochastic Optimization | We study a distributed framework for stochastic optimization which is
inspired by models of collective motion found in nature (e.g., swarming) with
mild communication requirements. Specifically, we analyze a scheme in which
each one of $N > 1$ independent threads, implements in a distributed and
unsynchronized fashion, a stochastic gradient-descent algorithm which is
perturbed by a swarming potential. Assuming the overhead caused by
synchronization is not negligible, we show the swarming-based approach exhibits
better performance than a centralized algorithm (based upon the average of $N$
observations) in terms of (real-time) convergence speed. We also derive an
error bound that is monotone decreasing in network size and connectivity. We
characterize the scheme's finite-time performances for both convex and
non-convex objective functions. | http://arxiv.org/abs/1806.04207v2 | http://arxiv.org/pdf/1806.04207v2.pdf | null | [
"Shi Pu",
"Alfredo Garcia"
] | [
"Stochastic Optimization"
] | 1,528,675,200,000 | [] | 190,127 |
211,367 | https://paperswithcode.com/paper/pareto-efficient-fairness-in-supervised | 2104.01634 | Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing | As algorithmic decision-making systems are becoming more pervasive, it is crucial to ensure such systems do not become mechanisms of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. Moreover, due to the inherent trade-off between fairness measures and accuracy, it is desirable to learn fairness-enhanced models without significantly compromising the accuracy. In this paper, we propose Pareto efficient Fairness (PEF) as a suitable fairness notion for supervised learning, that can ensure the optimal trade-off between overall loss and other fairness criteria. The proposed PEF notion is definition-agnostic, meaning that any well-defined notion of fairness can be reduced to the PEF notion. To efficiently find a PEF classifier, we cast the fairness-enhanced classification as a bilevel optimization problem and propose a gradient-based method that can guarantee the solution belongs to the Pareto frontier with provable guarantees for convex and non-convex objectives. We also generalize the proposed algorithmic solution to extract and trace arbitrary solutions from the Pareto frontier for a given preference over accuracy and fairness measures. This approach is generic and can be generalized to any multicriteria optimization problem to trace points on the Pareto frontier curve, which is interesting by its own right. We empirically demonstrate the effectiveness of the PEF solution and the extracted Pareto frontier on real-world datasets compared to state-of-the-art methods. | https://arxiv.org/abs/2104.01634v1 | https://arxiv.org/pdf/2104.01634v1.pdf | null | [
"Mohammad Mahdi Kamani",
"Rana Forsati",
"James Z. Wang",
"Mehrdad Mahdavi"
] | [
"Bilevel Optimization",
"Fairness"
] | 1,617,494,400,000 | [] | 123,351 |
243,015 | https://paperswithcode.com/paper/deep-structured-instance-graph-for-distilling | 2109.12862 | Deep Structured Instance Graph for Distilling Object Detectors | Effectively structuring deep knowledge plays a pivotal role in transfer from teacher to student, especially in semantic vision tasks. In this paper, we present a simple knowledge structure to exploit and encode information inside the detection system to facilitate detector knowledge distillation. Specifically, aiming at solving the feature imbalance problem while further excavating the missing relation inside semantic instances, we design a graph whose nodes correspond to instance proposal-level features and edges represent the relation between nodes. To further refine this graph, we design an adaptive background loss weight to reduce node noise and background samples mining to prune trivial edges. We transfer the entire graph as encoded knowledge representation from teacher to student, capturing local and global information simultaneously. We achieve new state-of-the-art results on the challenging COCO object detection task with diverse student-teacher pairs on both one- and two-stage detectors. We also experiment with instance segmentation to demonstrate robustness of our method. It is notable that distilled Faster R-CNN with ResNet18-FPN and ResNet50-FPN yields 38.68 and 41.82 Box AP respectively on the COCO benchmark, Faster R-CNN with ResNet101-FPN significantly achieves 43.38 AP, which outperforms ResNet152-FPN teacher about 0.7 AP. Code: https://github.com/dvlab-research/Dsig. | https://arxiv.org/abs/2109.12862v1 | https://arxiv.org/pdf/2109.12862v1.pdf | ICCV 2021 10 | [
"Yixin Chen",
"Pengguang Chen",
"Shu Liu",
"LiWei Wang",
"Jiaya Jia"
] | [
"Instance Segmentation",
"Knowledge Distillation",
"Object Detection",
"Object Detection",
"Semantic Segmentation"
] | 1,632,700,800,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22",
"description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.",
"full_name": "Faster R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Faster R-CNN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
}
] | 99,950 |
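A rough sketch of the RoIPool operation described in the method entry above, assuming a single-channel feature map, a single region, and simple integer bin boundaries; real detectors handle channels, batching, and rounding differently.

```python
# Hedged sketch: divide a region of a feature map into an output_size x output_size
# grid and take the max of each cell, as in the RoIPool description above.
import numpy as np

def roi_pool(feature_map, roi, output_size=2):
    """feature_map: (H, W); roi: (x0, y0, x1, y1) in feature-map coordinates."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    h, w = region.shape
    ys = np.linspace(0, h, output_size + 1).round().astype(int)
    xs = np.linspace(0, w, output_size + 1).round().astype(int)
    out = np.zeros((output_size, output_size))
    for i in range(output_size):
        for j in range(output_size):
            cell = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[i, j] = cell.max() if cell.size else 0.0
    return out

fm = np.arange(36, dtype=float).reshape(6, 6)
print(roi_pool(fm, roi=(1, 1, 5, 5), output_size=2))
```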
171,552 | https://paperswithcode.com/paper/cog-connecting-new-skills-to-past-experience | 2010.14500 | COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning | Reinforcement learning has been applied to a wide variety of robotics problems, but most of such applications involve collecting data from scratch for each new task. Since the amount of robot data we can collect for any single task is limited by time and cost considerations, the learned behavior is typically narrow: the policy can only execute the task in a handful of scenarios that it was trained on. What if there was a way to incorporate a large amount of prior data, either from previously solved tasks or from unsupervised or undirected environment interaction, to extend and generalize learned behaviors? While most prior work on extending robotic skills using pre-collected data focuses on building explicit hierarchies or skill decompositions, we show in this paper that we can reuse prior data to extend new skills simply through dynamic programming. We show that even when the prior data does not actually succeed at solving the new task, it can still be utilized for learning a better policy, by providing the agent with a broader understanding of the mechanics of its environment. We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task, with our hardest experimental setting involving composing four robotic skills in a row: picking, placing, drawer opening, and grasping, where a +1/0 sparse reward is provided only on task completion. We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands, and present results in both simulated and real world domains. Additional materials and source code can be found on our project website: https://sites.google.com/view/cog-rl | https://arxiv.org/abs/2010.14500v1 | https://arxiv.org/pdf/2010.14500v1.pdf | null | [
"Avi Singh",
"Albert Yu",
"Jonathan Yang",
"Jesse Zhang",
"Aviral Kumar",
"Sergey Levine"
] | [
"reinforcement-learning"
] | 1,603,756,800,000 | [] | 71,736 |
206,694 | https://paperswithcode.com/paper/conformalized-survival-analysis | 2103.09763 | Conformalized Survival Analysis | Existing survival analysis techniques heavily rely on strong modelling assumptions and are, therefore, prone to model misspecification errors. In this paper, we develop an inferential method based on ideas from conformal prediction, which can wrap around any survival prediction algorithm to produce calibrated, covariate-dependent lower predictive bounds on survival times. In the Type I right-censoring setting, when the censoring times are completely exogenous, the lower predictive bounds have guaranteed coverage in finite samples without any assumptions other than that of operating on independent and identically distributed data points. Under a more general conditionally independent censoring assumption, the bounds satisfy a doubly robust property which states the following: marginal coverage is approximately guaranteed if either the censoring mechanism or the conditional survival function is estimated well. Further, we demonstrate that the lower predictive bounds remain valid and informative for other types of censoring. The validity and efficiency of our procedure are demonstrated on synthetic data and real COVID-19 data from the UK Biobank. | https://arxiv.org/abs/2103.09763v2 | https://arxiv.org/pdf/2103.09763v2.pdf | null | [
"Emmanuel J. Candès",
"Lihua Lei",
"Zhimei Ren"
] | [
"Survival Analysis",
"Survival Prediction"
] | 1,615,939,200,000 | [] | 147,259 |
142,725 | https://paperswithcode.com/paper/will-they-won-t-they-a-very-large-dataset-for | 2005.00388 | Will-They-Won't-They: A Very Large Dataset for Stance Detection on Twitter | We present a new challenging stance detection dataset, called Will-They-Won't-They (WT-WT), which contains 51,284 tweets in English, making it by far the largest available dataset of the type. All the annotations are carried out by experts; therefore, the dataset constitutes a high-quality and reliable benchmark for future research in stance detection. Our experiments with a wide range of recent state-of-the-art stance detection systems show that the dataset poses a strong challenge to existing models in this domain. | https://arxiv.org/abs/2005.00388v1 | https://arxiv.org/pdf/2005.00388v1.pdf | ACL 2020 6 | [
"Costanza Conforti",
"Jakob Berndt",
"Mohammad Taher Pilehvar",
"Chryssi Giannitsarou",
"Flavio Toxvaerd",
"Nigel Collier"
] | [
"Stance Detection"
] | 1,588,291,200,000 | [] | 41,408 |
262,857 | https://paperswithcode.com/paper/tracking-momentary-attention-fluctuations | null | Tracking momentary attention fluctuations with an EEG-based cognitive brain-machine interface | Momentary fluctuations in attention (perceptual accuracy) correlate with neural activity fluctuations in primate visual areas. Yet, the link between such momentary neural fluctuations and attention state remains to be shown in the human brain. We investigate this link using a real-time cognitive brain machine interface (cBMI) based on steady state visually evoked potentials (SSVEPs): occipital EEG potentials evoked by rhythmically flashing stimuli. Tracking momentary fluctuations in SSVEP power, in real-time, we presented stimuli time-locked to when this power reached (predetermined) high or low thresholds. We observed a significant increase in discrimination accuracy (d') when stimuli were triggered during high (versus low) SSVEP power epochs, at the location cued for attention. Our results indicate a direct link between attention’s effects on perceptual accuracy and neural gain in EEG-SSVEP power, in the human brain. | https://openreview.net/forum?id=ryeT47FIIS | https://openreview.net/pdf?id=ryeT47FIIS | null | [
"Anonymous"
] | [
"EEG"
] | 1,568,160,000,000 | [] | 55,501 |
21,974 | https://paperswithcode.com/paper/retrosynthetic-reaction-prediction-using | 1706.01643 | Retrosynthetic reaction prediction using neural sequence-to-sequence models | We describe a fully data driven model that learns to perform a retrosynthetic
reaction prediction task, which is treated as a sequence-to-sequence mapping
problem. The end-to-end trained model has an encoder-decoder architecture that
consists of two recurrent neural networks, which has previously shown great
success in solving other sequence-to-sequence prediction tasks such as machine
translation. The model is trained on 50,000 experimental reaction examples from
the United States patent literature, which span 10 broad reaction types that
are commonly used by medicinal chemists. We find that our model performs
comparably with a rule-based expert system baseline model, and also overcomes
certain limitations associated with rule-based expert systems and with any
machine learning approach that contains a rule-based expert system component.
Our model provides an important first step towards solving the challenging
problem of computational retrosynthetic analysis. | http://arxiv.org/abs/1706.01643v1 | http://arxiv.org/pdf/1706.01643v1.pdf | null | [
"Bowen Liu",
"Bharath Ramsundar",
"Prasad Kawthekar",
"Jade Shi",
"Joseph Gomes",
"Quang Luu Nguyen",
"Stephen Ho",
"Jack Sloane",
"Paul Wender",
"Vijay Pande"
] | [
"Machine Translation"
] | 1,496,707,200,000 | [] | 192,631 |
4,923 | https://paperswithcode.com/paper/beyond-word-importance-contextual | 1801.05453 | Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs | The driving force behind the recent success of LSTMs has been their ability
to learn complex and non-linear relationships. Consequently, our inability to
describe these relationships has led to LSTMs being characterized as black
boxes. To this end, we introduce contextual decomposition (CD), an
interpretation algorithm for analysing individual predictions made by standard
LSTMs, without any changes to the underlying model. By decomposing the output
of a LSTM, CD captures the contributions of combinations of words or variables
to the final prediction of an LSTM. On the task of sentiment analysis with the
Yelp and SST data sets, we show that CD is able to reliably identify words and
phrases of contrasting sentiment, and how they are combined to yield the LSTM's
final prediction. Using the phrase-level labels in SST, we also demonstrate
that CD is able to successfully extract positive and negative negations from an
LSTM, something which has not previously been done. | http://arxiv.org/abs/1801.05453v2 | http://arxiv.org/pdf/1801.05453v2.pdf | ICLR 2018 1 | [
"W. James Murdoch",
"Peter J. Liu",
"Bin Yu"
] | [
"Sentiment Analysis"
] | 1,516,060,800,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] | 83,597 |
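The sigmoid and tanh formulas listed in the method entries above are the gate nonlinearities of the LSTM cell that contextual decomposition operates on. Below is a minimal NumPy sketch of one standard LSTM step, not the CD interpretation algorithm itself; the stacked weight layout is an assumption.

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)), as in the Sigmoid Activation entry above.
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One vanilla LSTM step; W, U, b stack the input/forget/output/candidate blocks."""
    z = W @ x_t + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # additive cell update
    h_t = sigmoid(o) * np.tanh(c_t)                        # gated output
    return h_t, c_t
```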
132,010 | https://paperswithcode.com/paper/an-autonomous-intrusion-detection-system | 2001.11936 | An Autonomous Intrusion Detection System Using an Ensemble of Advanced Learners | An intrusion detection system (IDS) is a vital security component of modern computer networks. With the increasing amount of sensitive services that use computer network-based infrastructures, IDSs need to be more intelligent and autonomous. Aside from autonomy, another important feature for an IDS is its ability to detect zero-day attacks. To address these issues, in this paper, we propose an IDS which reduces the amount of manual interaction and needed expert knowledge and is able to yield acceptable performance under zero-day attacks. Our approach is to use three learning techniques in parallel: gated recurrent unit (GRU), convolutional neural network as deep techniques and random forest as an ensemble technique. These systems are trained in parallel and the results are combined under two logics: majority vote and "OR" logic. We use the NSL-KDD dataset to verify the proficiency of our proposed system. Simulation results show that the system has the potential to operate with a very low technician interaction under the zero-day attacks. We achieved 87.28% accuracy on the NSL-KDD's "KDDTest+" dataset and 76.61% accuracy on the challenging "KDDTest-21" with lower training time and lower needed computational resources. | https://arxiv.org/abs/2001.11936v2 | https://arxiv.org/pdf/2001.11936v2.pdf | null | [
"Amir Andalib",
"Vahid Tabataba Vakili"
] | [
"Intrusion Detection"
] | 1,580,428,800,000 | [] | 69,801 |
40,755 | https://paperswithcode.com/paper/an-iterative-step-function-estimator-for | 1412.2129 | An iterative step-function estimator for graphons | Exchangeable graphs arise via a sampling procedure from measurable functions
known as graphons. A natural estimation problem is how well we can recover a
graphon given a single graph sampled from it. One general framework for
estimating a graphon uses step-functions obtained by partitioning the nodes of
the graph according to some clustering algorithm. We propose an iterative
step-function estimator (ISFE) that, given an initial partition, iteratively
clusters nodes based on their edge densities with respect to the previous
iteration's partition. We analyze ISFE and demonstrate its performance in
comparison with other graphon estimation techniques. | http://arxiv.org/abs/1412.2129v2 | http://arxiv.org/pdf/1412.2129v2.pdf | null | [
"Diana Cai",
"Nathanael Ackerman",
"Cameron Freer"
] | [
"Graphon Estimation"
] | 1,417,737,600,000 | [] | 176,724 |
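A step-function (stochastic block) estimate of a graphon, as used by the ISFE record above, is just the matrix of edge densities between blocks of a node partition. The sketch below implements only that single building block, not the full iterative ISFE; self-loops are assumed absent and all names are illustrative.

```python
import numpy as np

def block_densities(adj, labels, k):
    """Edge densities between blocks of a node partition (the step-function estimate).

    adj: symmetric 0/1 adjacency matrix with zero diagonal.
    labels: block index per node, each in {0, ..., k-1}.
    """
    dens = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            edges = adj[np.ix_(ia, ib)].sum()
            if a == b:
                pairs = len(ia) * (len(ia) - 1)   # ordered pairs, excluding self-loops
            else:
                pairs = len(ia) * len(ib)
            dens[a, b] = edges / pairs if pairs > 0 else 0.0
    return dens
```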
106,279 | https://paperswithcode.com/paper/joint-reasoning-for-temporal-and-causal-1 | 1906.04941 | Joint Reasoning for Temporal and Causal Relations | Understanding temporal and causal relations between events is a fundamental natural language understanding task. Because a cause must be before its effect in time, temporal and causal relations are closely related and one relation even dictates the other one in many cases. However, limited attention has been paid to studying these two relations jointly. This paper presents a joint inference framework for them using constrained conditional models (CCMs). Specifically, we formulate the joint problem as an integer linear programming (ILP) problem, enforcing constraints inherently in the nature of time and causality. We show that the joint inference framework results in statistically significant improvement in the extraction of both temporal and causal relations from text. | https://arxiv.org/abs/1906.04941v1 | https://arxiv.org/pdf/1906.04941v1.pdf | ACL 2018 7 | [
"Qiang Ning",
"Zhili Feng",
"Hao Wu",
"Dan Roth"
] | [
"Natural Language Understanding"
] | 1,560,297,600,000 | [] | 133,095 |
214,527 | https://paperswithcode.com/paper/a-survey-of-active-learning-algorithms-for | 2104.07784 | A survey of active learning algorithms for supervised remote sensing image classification | Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership and then the user is asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a good architecture are provided for new and/or inexperienced users. | https://arxiv.org/abs/2104.07784v1 | https://arxiv.org/pdf/2104.07784v1.pdf | null | [
"Devis Tuia",
"Michele Volpi",
"Loris Copa",
"Mikhail Kanevski",
"Jordi Munoz-Mari"
] | [
"Active Learning",
"Classification",
"Hyperspectral Image Classification",
"Image Classification",
"Remote Sensing Image Classification"
] | 1,618,444,800,000 | [] | 9,895 |
199,404 | https://paperswithcode.com/paper/imagechd-a-3d-computed-tomography-image | 2101.10799 | ImageCHD: A 3D Computed Tomography Image Dataset for Classification of Congenital Heart Disease | Congenital heart disease (CHD) is the most common type of birth defect, which occurs 1 in every 110 births in the United States. CHD usually comes with severe variations in heart structure and great artery connections that can be classified into many types. Thus, highly specialized domain knowledge and a time-consuming human process are needed to analyze the associated medical images. On the other hand, due to the complexity of CHD and the lack of datasets, little work has explored the automatic diagnosis (classification) of CHDs. In this paper, we present ImageCHD, the first medical image dataset for CHD classification. ImageCHD contains 110 3D Computed Tomography (CT) images covering most types of CHD, which is of decent size compared with existing medical imaging datasets. Classification of CHDs requires the identification of large structural changes without any local tissue changes, with limited data. It is an example of a larger class of problems that are quite difficult for current machine-learning-based vision methods to solve. To demonstrate this, we further present a baseline framework for the automatic classification of CHD, based on a state-of-the-art CHD segmentation method. Experimental results show that the baseline framework can only achieve a classification accuracy of 82.0\% under a selective prediction scheme with 88.4\% coverage, leaving substantial room for further improvement. We hope that ImageCHD can stimulate further research and lead to innovative and generic solutions that would have an impact in multiple domains. Our dataset is released to the public. | https://arxiv.org/abs/2101.10799v2 | https://arxiv.org/pdf/2101.10799v2.pdf | null | [
"Xiaowei Xu",
"Tianchen Wang",
"Jian Zhuang",
"Haiyun Yuan",
"Meiping Huang",
"Jianzheng Cen",
"Qianjun Jia",
"Yuhao Dong",
"Yiyu Shi"
] | [
"Classification",
"Computed Tomography (CT)",
"Classification"
] | 1,611,619,200,000 | [] | 119,238 |
76,886 | https://paperswithcode.com/paper/safe-scale-aware-feature-encoder-for-scene | 1901.05770 | SAFE: Scale Aware Feature Encoder for Scene Text Recognition | In this paper, we address the problem of having characters with different
scales in scene text recognition. We propose a novel scale aware feature
encoder (SAFE) that is designed specifically for encoding characters with
different scales. SAFE is composed of a multi-scale convolutional encoder and a
scale attention network. The multi-scale convolutional encoder targets at
extracting character features under multiple scales, and the scale attention
network is responsible for selecting features from the most relevant scale(s).
SAFE has two main advantages over the traditional single-CNN encoder used in
current state-of-the-art text recognizers. First, it explicitly tackles the
scale problem by extracting scale-invariant features from the characters. This
allows the recognizer to put more effort in handling other challenges in scene
text recognition, like those caused by view distortion and poor image quality.
Second, it can transfer the learning of feature encoding across different
character scales. This is particularly important when the training set has a
very unbalanced distribution of character scales, as training with such a
dataset will make the encoder biased towards extracting features from the
predominant scale. To evaluate the effectiveness of SAFE, we design a simple
text recognizer named scale-spatial attention network (S-SAN) that employs SAFE
as its feature encoder, and carry out experiments on six public benchmarks.
Experimental results demonstrate that S-SAN can achieve state-of-the-art (or,
in some cases, extremely competitive) performance without any post-processing. | http://arxiv.org/abs/1901.05770v1 | http://arxiv.org/pdf/1901.05770v1.pdf | null | [
"Wei Liu",
"Chaofeng Chen",
"Kwan-Yee K. Wong"
] | [
"Scene Text Recognition"
] | 1,547,683,200,000 | [] | 161,607 |
59,261 | https://paperswithcode.com/paper/feature-selection-via-sparse-approximation | 1102.02748 | Feature Selection via Sparse Approximation for Face Recognition | Inspired by biological vision systems, the over-complete local features with
huge cardinality are increasingly used for face recognition during the last
decades. Accordingly, feature selection has become more and more important and
plays a critical role for face data description and recognition. In this paper,
we propose a trainable feature selection algorithm based on the regularized
frame for face recognition. By enforcing a sparsity penalty term on the minimum
squared error (MSE) criterion, we cast the feature selection problem into a
combinatorial sparse approximation problem, which can be solved by greedy
methods or convex relaxation methods. Moreover, based on the same frame, we
propose a sparse Ho-Kashyap (HK) procedure to obtain simultaneously the optimal
sparse solution and the corresponding margin vector of the MSE criterion. The
proposed methods are used for selecting the most informative Gabor features of
face images for recognition and the experimental results on benchmark face
databases demonstrate the effectiveness of the proposed methods. | http://arxiv.org/abs/1102.2748v1 | http://arxiv.org/pdf/1102.2748v1.pdf | null | [
"Yixiong Liang",
"Lei Wang",
"Yao Xiang",
"Beiji Zou"
] | [
"Face Recognition"
] | 1,297,641,600,000 | [] | 16,373 |
60,448 | https://paperswithcode.com/paper/polarity-loss-for-zero-shot-object-detection | 1811.08982 | Polarity Loss for Zero-shot Object Detection | Conventional object detection models require large amounts of training data. In comparison, humans can recognize previously unseen objects by merely knowing their semantic description. To mimic similar behaviour, zero-shot object detection aims to recognize and localize 'unseen' object instances by using only their semantic information. The model is first trained to learn the relationships between visual and semantic domains for seen objects, later transferring the acquired knowledge to totally unseen objects. This setting gives rise to the need for correct alignment between visual and semantic concepts, so that the unseen objects can be identified using only their semantic attributes. In this paper, we propose a novel loss function called 'Polarity loss', that promotes correct visual-semantic alignment for an improved zero-shot object detection. On one hand, it refines the noisy semantic embeddings via metric learning on a 'Semantic vocabulary' of related concepts to establish a better synergy between visual and semantic domains. On the other hand, it explicitly maximizes the gap between positive and negative predictions to achieve better discrimination between seen, unseen and background objects. Our approach is inspired by embodiment theories in cognitive science, that claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word vocabulary) and visual perception (seen/unseen object images). We conduct extensive evaluations on MS-COCO and Pascal VOC datasets, showing significant improvements over state of the art. | https://arxiv.org/abs/1811.08982v3 | https://arxiv.org/pdf/1811.08982v3.pdf | null | [
"Shafin Rahman",
"Salman Khan",
"Nick Barnes"
] | [
"Metric Learning",
"Object Detection",
"Object Detection",
"Zero-Shot Learning",
"Zero-Shot Object Detection"
] | 1,542,844,800,000 | [] | 150,971 |
313,911 | https://paperswithcode.com/paper/rethinking-cost-sensitive-classification-in | 2208.11739 | Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation | Cost-sensitive classification is critical in applications where misclassification errors widely vary in cost. However, over-parameterization poses fundamental challenges to the cost-sensitive modeling of deep neural networks (DNNs). The ability of a DNN to fully interpolate a training dataset can render a DNN, evaluated purely on the training set, ineffective in distinguishing a cost-sensitive solution from its overall accuracy maximization counterpart. This necessitates rethinking cost-sensitive classification in DNNs. To address this challenge, this paper proposes a cost-sensitive adversarial data augmentation (CSADA) framework to make over-parameterized models cost-sensitive. The overarching idea is to generate targeted adversarial examples that push the decision boundary in cost-aware directions. These targeted adversarial samples are generated by maximizing the probability of critical misclassifications and used to train a model with more conservative decisions on costly pairs. Experiments on well-known datasets and a pharmacy medication image (PMI) dataset made publicly available show that our method can effectively minimize the overall cost and reduce critical errors, while achieving comparable performance in terms of overall accuracy. | https://arxiv.org/abs/2208.11739v1 | https://arxiv.org/pdf/2208.11739v1.pdf | null | [
"Qiyuan Chen",
"Raed Al Kontar",
"Maher Nouiehed",
"Jessie Yang",
"Corey Lester"
] | [
"Data Augmentation"
] | 1,661,299,200,000 | [] | 52,548 |
152,344 | https://paperswithcode.com/paper/inductive-unsupervised-domain-adaptation-for | 2006.12816 | Inductive Unsupervised Domain Adaptation for Few-Shot Classification via Clustering | Few-shot classification tends to struggle when it needs to adapt to diverse domains. Due to the non-overlapping label space between domains, the performance of conventional domain adaptation is limited. Previous work tackles the problem in a transductive manner, by assuming access to the full set of test data, which is too restrictive for many real-world applications. In this paper, we set out to tackle this issue by introducing an inductive framework, DaFeC, to improve Domain adaptation performance for Few-shot classification via Clustering. We first build a representation extractor to derive features for unlabeled data from the target domain (no test data is necessary) and then group them with a cluster miner. The generated pseudo-labeled data and the labeled source-domain data are used as supervision to update the parameters of the few-shot classifier. In order to derive high-quality pseudo labels, we propose a Clustering Promotion Mechanism to learn better features for the target domain via Similarity Entropy Minimization and Adversarial Distribution Alignment, which are combined with a Cosine Annealing Strategy. Experiments are performed on the FewRel 2.0 dataset. Our approach outperforms previous work with absolute gains (in classification accuracy) of 4.95%, 9.55%, 3.99% and 11.62%, respectively, under four few-shot settings. | https://arxiv.org/abs/2006.12816v1 | https://arxiv.org/pdf/2006.12816v1.pdf | null | [
"Xin Cong",
"Bowen Yu",
"Tingwen Liu",
"Shiyao Cui",
"Hengzhu Tang",
"Bin Wang"
] | [
"Classification",
"Domain Adaptation",
"Classification",
"Unsupervised Domain Adaptation"
] | 1,592,870,400,000 | [
{
"code_snippet_url": null,
"description": "**Cosine Annealing** is a type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again. The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of good weights as the starting point of the restart is referred to as a \"warm restart\" in contrast to a \"cold restart\" where a new set of small random numbers may be used as a starting point.\r\n\r\n$$\\eta\\_{t} = \\eta\\_{min}^{i} + \\frac{1}{2}\\left(\\eta\\_{max}^{i}-\\eta\\_{min}^{i}\\right)\\left(1+\\cos\\left(\\frac{T\\_{cur}}{T\\_{i}}\\pi\\right)\\right)\r\n$$\r\n\r\nWhere where $\\eta\\_{min}^{i}$ and $ \\eta\\_{max}^{i}$ are ranges for the learning rate, and $T\\_{cur}$ account for how many epochs have been performed since the last restart.\r\n\r\nText Source: [Jason Brownlee](https://machinelearningmastery.com/snapshot-ensemble-deep-learning-neural-network/)\r\n\r\nImage Source: [Gao Huang](https://www.researchgate.net/figure/Training-loss-of-100-layer-DenseNet-on-CIFAR10-using-standard-learning-rate-blue-and-M_fig2_315765130)",
"full_name": "Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Cosine Annealing",
"source_title": "SGDR: Stochastic Gradient Descent with Warm Restarts",
"source_url": "http://arxiv.org/abs/1608.03983v5"
}
] | 84,534 |
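The Cosine Annealing schedule quoted in the method entry above translates directly into code; a minimal sketch follows (function and variable names are assumptions).

```python
import math

def cosine_annealing_lr(t_cur, t_i, eta_min=0.0, eta_max=0.1):
    """eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * t_cur / t_i));
    starts at eta_max when t_cur = 0 and decays to eta_min when t_cur = t_i."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t_cur / t_i))

print(cosine_annealing_lr(0, 50), cosine_annealing_lr(50, 50))  # 0.1, then ~0.0
```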
199,793 | https://paperswithcode.com/paper/neural-particle-image-velocimetry | 2101.11950 | Neural Particle Image Velocimetry | In the past decades, great progress has been made in the field of optical and particle-based measurement techniques for experimental analysis of fluid flows. The Particle Image Velocimetry (PIV) technique is widely used to identify flow parameters from time-consecutive snapshots of particles injected into the fluid. The computation is performed as post-processing of the experimental data via proximity measure between particles in frames of reference. However, the post-processing step becomes problematic as the motility and density of the particles increase, since the data emerge at extreme rates and volumes. Moreover, existing algorithms for PIV either provide sparse estimations of the flow or require a large computational time frame, preventing on-line use. The goal of this manuscript is therefore to develop an accurate on-line algorithm for estimation of the fine-grained velocity field from PIV data. As the data constitute a pair of images, we employ computer vision methods to solve the problem. In this work, we introduce a convolutional neural network adapted to the problem, namely the Volumetric Correspondence Network (VCN), which was recently proposed for end-to-end optical flow estimation in computer vision. The network is thoroughly trained and tested on a dataset containing both synthetic and real flow data. Experimental results are analyzed and compared to those of conventional methods as well as other recently introduced methods based on neural networks. Our analysis indicates that the proposed approach provides improved efficiency while keeping accuracy on par with other state-of-the-art methods in the field. We also verify through a-posteriori tests that our newly constructed VCN schemes reproduce well the physically relevant statistics of velocity and velocity gradients. | https://arxiv.org/abs/2101.11950v1 | https://arxiv.org/pdf/2101.11950v1.pdf | null | [
"Nikolay Stulov",
"Michael Chertkov"
] | [
"Optical Flow Estimation"
] | 1,611,792,000,000 | [] | 136,461 |
162,032 | https://paperswithcode.com/paper/deep-generative-model-for-image-inpainting | 2009.01031 | Deep Generative Model for Image Inpainting with Local Binary Pattern Learning and Spatial Attention | Deep learning (DL) has demonstrated its powerful capabilities in the field of image inpainting. The DL-based image inpainting approaches can produce visually plausible results, but often generate various unpleasant artifacts, especially in the boundary and highly textured regions. To tackle this challenge, in this work, we propose a new end-to-end, two-stage (coarse-to-fine) generative model through combining a local binary pattern (LBP) learning network with an actual inpainting network. Specifically, the first LBP learning network using U-Net architecture is designed to accurately predict the structural information of the missing region, which subsequently guides the second image inpainting network for better filling the missing pixels. Furthermore, an improved spatial attention mechanism is integrated in the image inpainting network, by considering the consistency not only between the known region with the generated one, but also within the generated region itself. Extensive experiments on public datasets including CelebA-HQ, Places and Paris StreetView demonstrate that our model generates better inpainting results than the state-of-the-art competing algorithms, both quantitatively and qualitatively. The source code and trained models will be made available at https://github.com/HighwayWu/ImageInpainting. | https://arxiv.org/abs/2009.01031v1 | https://arxiv.org/pdf/2009.01031v1.pdf | null | [
"Haiwei Wu",
"Jiantao Zhou",
"Yuanman Li"
] | [
"Image Inpainting"
] | 1,599,004,800,000 | [
{
"code_snippet_url": "https://github.com/DimTrigkakis/Python-Net/blob/efb81b2f828da5a81b77a141245efdb0d5bcfbf8/incredibleMathFunctions.py#L12-L13",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation)), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] | 112,962 |
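The ReLU and 2x2 max-pooling operations listed in the method entries above can be reproduced in a few lines of NumPy; the following is a minimal sketch on a toy 4x4 feature map (the example values are hypothetical).

```python
import numpy as np

def relu(x):
    # max(0, x), as in the ReLU entry above.
    return np.maximum(0, x)

def max_pool_2x2(feature_map):
    """Non-overlapping 2x2 max pooling over an (H, W) array with even H and W."""
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

fm = relu(np.array([[ 1., -2.,  3.,  0.],
                    [-1.,  5., -3.,  2.],
                    [ 0.,  1., -1., -4.],
                    [ 2., -1.,  0.,  6.]]))
print(max_pool_2x2(fm))  # [[5. 3.] [2. 6.]]
```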
130,040 | https://paperswithcode.com/paper/privacy-preserving-deep-learning-computation | 2001.02932 | Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms | This paper proposes a distributed deep learning framework for privacy-preserving medical data training. In order to avoid patients' data leakage in medical platforms, the hidden layers in the deep learning framework are separated so that the first layer is kept in the platform and the other layers are kept in a centralized server. Whereas keeping the original patients' data in local platforms maintains their privacy, utilizing the server for subsequent layers improves learning performance by using all data from each platform during training. | https://arxiv.org/abs/2001.02932v1 | https://arxiv.org/pdf/2001.02932v1.pdf | null | [
"Joohyung Jeon",
"Junhui Kim",
"Joongheon Kim",
"Kwangsoo Kim",
"Aziz Mohaisen",
"Jong-Kook Kim"
] | [
"Privacy Preserving",
"Privacy Preserving Deep Learning"
] | 1,578,528,000,000 | [] | 17,842 |
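The split architecture described in the record above — first hidden layer on the local platform, remaining layers on a central server — can be sketched as a forward pass. This is only a schematic illustration with random weights (training, communication, and any additional privacy mechanism are omitted); all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def platform_forward(x, W1, b1):
    """First hidden layer runs locally, so raw patient data never leaves the platform."""
    return np.maximum(0, W1 @ x + b1)          # ReLU activation sent to the server

def server_forward(h, W2, b2, W3, b3):
    """Remaining layers run centrally on the shared activations."""
    h2 = np.maximum(0, W2 @ h + b2)
    return W3 @ h2 + b3                        # class logits

x = rng.normal(size=16)                        # one synthetic patient record
W1, b1 = rng.normal(size=(32, 16)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(3, 32)), np.zeros(3)
print(server_forward(platform_forward(x, W1, b1), W2, b2, W3, b3))
```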
129,777 | https://paperswithcode.com/paper/general-partial-label-learning-via-dual | 2001.01290 | General Partial Label Learning via Dual Bipartite Graph Autoencoder | We formulate a practical yet challenging problem: General Partial Label Learning (GPLL). Compared to the traditional Partial Label Learning (PLL) problem, GPLL relaxes the supervision assumption from instance-level -- a label set partially labels an instance -- to group-level: 1) a label set partially labels a group of instances, where the within-group instance-label link annotations are missing, and 2) cross-group links are allowed -- instances in a group may be partially linked to the label set from another group. Such ambiguous group-level supervision is more practical in real-world scenarios as additional annotation on the instance-level is no longer required, e.g., face-naming in videos where the group consists of faces in a frame, labeled by a name set in the corresponding caption. In this paper, we propose a novel graph convolutional network (GCN) called Dual Bipartite Graph Autoencoder (DB-GAE) to tackle the label ambiguity challenge of GPLL. First, we exploit the cross-group correlations to represent the instance groups as dual bipartite graphs: within-group and cross-group, which reciprocally complements each other to resolve the linking ambiguities. Second, we design a GCN autoencoder to encode and decode them, where the decodings are considered as the refined results. It is worth noting that DB-GAE is self-supervised and transductive, as it only uses the group-level supervision without a separate offline training stage. Extensive experiments on two real-world datasets demonstrate that DB-GAE significantly outperforms the best baseline over absolute 0.159 F1-score and 24.8% accuracy. We further offer analysis on various levels of label ambiguities. | https://arxiv.org/abs/2001.01290v2 | https://arxiv.org/pdf/2001.01290v2.pdf | null | [
"Brian Chen",
"Bo Wu",
"Alireza Zareian",
"Hanwang Zhang",
"Shih-Fu Chang"
] | [
"Partial Label Learning"
] | 1,578,182,400,000 | [
{
"code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38",
"description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder).\r\n\r\nImage: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)",
"full_name": "AutoEncoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "AutoEncoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] | 3,686 |
288,463 | https://paperswithcode.com/paper/regression-or-classification-reflection-on-bp | 2204.05605 | Regression or Classification? Reflection on BP prediction from PPG data using Deep Neural Networks in the scope of practical applications | Photoplethysmographic (PPG) signals offer diagnostic potential beyond heart rate analysis or blood oxygen level monitoring. In the recent past, research focused extensively on non-invasive PPG-based approaches to blood pressure (BP) estimation. These approaches can be subdivided into regression and classification methods. The latter assign PPG signals to predefined BP intervals that represent clinically relevant ranges. The former predict systolic (SBP) and diastolic (DBP) BP as continuous variables and are of particular interest to the research community. However, the reported accuracies of BP regression methods vary widely among publications with some authors even questioning the feasibility of PPG-based BP regression altogether. In our work, we compare BP regression and classification approaches. We argue that BP classification might provide diagnostic value that is equivalent to regression in many clinically relevant scenarios while being similar or even superior in terms of performance. We compare several established neural architectures using publicly available PPG data for SBP regression and classification with and without personalization using subject-specific data. We found that classification and regression models perform similar before personalization. However, after personalization, the accuracy of classification based methods outperformed regression approaches. We conclude that BP classification might be preferable over BP regression in certain scenarios where a coarser segmentation of the BP range is sufficient. | https://arxiv.org/abs/2204.05605v1 | https://arxiv.org/pdf/2204.05605v1.pdf | null | [
"Fabian Schrumpf",
"Paul Rudi Serdack",
"Mirco Fuchs"
] | [
"Classification"
] | 1,649,721,600,000 | [] | 50,890 |
260,009 | https://paperswithcode.com/paper/generalization-guarantee-of-sgd-for-pairwise | null | Generalization Guarantee of SGD for Pairwise Learning | Recently, there is a growing interest in studying pairwise learning since it includes many important machine learning tasks as specific examples, e.g., metric learning, AUC maximization and ranking. While stochastic gradient descent (SGD) is an efficient method, there is a lacking study on its generalization behavior for pairwise learning. In this paper, we present a systematic study on the generalization analysis of SGD for pairwise learning to understand the balance between generalization and optimization. We develop a novel high-probability generalization bound for uniformly-stable algorithms to incorporate the variance information for better generalization, based on which we establish the first nonsmooth learning algorithm to achieve almost optimal high-probability and dimension-independent generalization bounds in linear time. We consider both convex and nonconvex pairwise learning problems. Our stability analysis for convex problems shows how the interpolation can help generalization. We establish a uniform convergence of gradients, and apply it to derive the first generalization bounds on population gradients for nonconvex problems. Finally, we develop better generalization bounds for gradient-dominated problems. | http://proceedings.neurips.cc/paper/2021/hash/b1301141feffabac455e1f90a7de2054-Abstract.html | http://proceedings.neurips.cc/paper/2021/file/b1301141feffabac455e1f90a7de2054-Paper.pdf | NeurIPS 2021 12 | [
"Yunwen Lei",
"Mingrui Liu",
"Yiming Ying"
] | [
"Generalization Bounds",
"Metric Learning"
] | 1,638,316,800,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] | 51,822 |
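Pairwise learning with SGD, as analyzed in the record above, can be illustrated by applying the plain update rule from the SGD method entry to a pairwise hinge loss on a linear scorer. This is a generic sketch, not the specific algorithm whose stability the paper studies; names and hyperparameters are assumptions.

```python
import numpy as np

def sgd_pairwise_hinge(pairs, dim, lr=0.01, epochs=10, seed=0):
    """SGD on a linear scorer with pairwise hinge loss max(0, 1 - w·(x_pos - x_neg)).

    `pairs` is a list of (x_pos, x_neg) arrays; each update follows
    w <- w - lr * grad, the plain SGD rule quoted above.
    """
    pairs = list(pairs)
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=dim)
    for _ in range(epochs):
        for x_pos, x_neg in pairs:
            d = x_pos - x_neg
            if 1.0 - w @ d > 0.0:      # margin violated -> nonzero (sub)gradient -d
                w -= lr * (-d)
    return w
```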
183,037 | https://paperswithcode.com/paper/uav-enabled-mobile-edge-computing-offloading | 1802.03906 | UAV-Enabled Mobile Edge Computing: Offloading Optimization and Trajectory Design | With the emergence of diverse mobile applications (such as augmented
reality), the quality of experience of mobile users is greatly limited by their
computation capacity and finite battery lifetime. Mobile edge computing (MEC)
and wireless power transfer are promising to address this issue. However, these
two techniques are susceptible to propagation delay and loss. Motivated by the
chance of short-distance line-of-sight achieved by leveraging unmanned aerial
vehicle (UAV) communications, a UAV-enabled wireless powered MEC system is
studied. A power minimization problem is formulated subject to the constraints
on the number of the computation bits and energy harvesting causality. The
problem is non-convex and challenging to tackle. An alternative optimization
algorithm is proposed based on sequential convex optimization. Simulation
results show that our proposed design is superior to other benchmark schemes
and the proposed algorithm is efficient in terms of the convergence. | http://arxiv.org/abs/1802.03906v1 | http://arxiv.org/pdf/1802.03906v1.pdf | null | [] | [
"Edge-computing"
] | 1,518,393,600,000 | [] | 2,316 |
79,644 | https://paperswithcode.com/paper/sports-field-localization-via-deep-structured | null | Sports Field Localization via Deep Structured Models | In this work, we propose a novel way of efficiently localizing a sports field from a single broadcast image of the game. Related work in this area relies on manually annotating a few key frames and extending the localization to similar images, or installing fixed specialized cameras in the stadium from which the layout of the field can be obtained. In contrast, we formulate this problem as a branch and bound inference in a Markov random field where an energy function is defined in terms of semantic cues such as the field surface, lines and circles obtained from a deep semantic segmentation network. Moreover, our approach is fully automatic and depends only on a single image from the broadcast video of the game. We demonstrate the effectiveness of our method by applying it to soccer and hockey.
| http://openaccess.thecvf.com/content_cvpr_2017/html/Homayounfar_Sports_Field_Localization_CVPR_2017_paper.html | http://openaccess.thecvf.com/content_cvpr_2017/papers/Homayounfar_Sports_Field_Localization_CVPR_2017_paper.pdf | CVPR 2017 7 | [
"Namdar Homayounfar",
"Sanja Fidler",
"Raquel Urtasun"
] | [
"Semantic Segmentation"
] | 1,498,867,200,000 | [] | 179,728 |
27,743 | https://paperswithcode.com/paper/neural-emoji-recommendation-in-dialogue | 1612.04609 | Neural Emoji Recommendation in Dialogue Systems | Emoji is an essential component in dialogues which has been broadly utilized
on almost all social platforms. It could express more delicate feelings beyond
plain texts and thus smooth the communications between users, making dialogue
systems more anthropomorphic and vivid. In this paper, we focus on
automatically recommending appropriate emojis given the contextual information
in multi-turn dialogue systems, where the challenges locate in understanding
the whole conversations. More specifically, we propose the hierarchical long
short-term memory model (H-LSTM) to construct dialogue representations,
followed by a softmax classifier for emoji classification. We evaluate our
models on the task of emoji classification in a real-world dataset, with some
further explorations on parameter sensitivity and case study. Experimental
results demonstrate that our method achieves the best performances on all
evaluation metrics. It indicates that our method could well capture the
contextual information and emotion flow in dialogues, which is significant for
emoji recommendation. | http://arxiv.org/abs/1612.04609v1 | http://arxiv.org/pdf/1612.04609v1.pdf | null | [
"Ruobing Xie",
"Zhiyuan Liu",
"Rui Yan",
"Maosong Sun"
] | [
"Classification"
] | 1,481,673,600,000 | [
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] | 134,485 |
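The softmax classifier used on top of the H-LSTM in the record above follows the formula in the Softmax method entry; a minimal, numerically stable sketch is given below (the example logits are hypothetical).

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: subtracting the max leaves the probabilities unchanged."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 4-class emoji logits produced by a dialogue encoder.
probs = softmax(np.array([2.0, 0.5, -1.0, 0.1]))
print(probs, probs.argmax())  # highest-probability emoji class: index 0
```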
144,043 | https://paperswithcode.com/paper/roteqnet-rotation-equivariant-network-for | 2005.04286 | RotEqNet: Rotation-Equivariant Network for Fluid Systems with Symmetric High-Order Tensors | In the recent application of scientific modeling, machine learning models are largely applied to facilitate computational simulations of fluid systems. Rotation symmetry is a general property for most symmetric fluid systems. However, in general, current machine learning methods have no theoretical way to guarantee rotational symmetry. By observing an important property of contraction and rotation operation on high-order symmetric tensors, we prove that the rotation operation is preserved via tensor contraction. Based on this theoretical justification, in this paper, we introduce Rotation-Equivariant Network (RotEqNet) to guarantee the property of rotation-equivariance for high-order tensors in fluid systems. We implement RotEqNet and evaluate our claims through four case studies on various fluid systems. The property of error reduction and rotation-equivariance is verified in these case studies. Results from the comparative study show that our method outperforms conventional methods, which rely on data augmentation. | https://arxiv.org/abs/2005.04286v1 | https://arxiv.org/pdf/2005.04286v1.pdf | null | [
"Liyao Gao",
"Yifan Du",
"Hongshan Li",
"Guang Lin"
] | [
"Data Augmentation"
] | 1,588,032,000,000 | [] | 42,288 |
62,049 | https://paperswithcode.com/paper/seq2graph-discovering-dynamic-dependencies | 1812.04448 | seq2graph: Discovering Dynamic Dependencies from Multivariate Time Series with Multi-level Attention | Discovering temporal lagged and inter-dependencies in multivariate time
series data is an important task. However, in many real-world applications,
such as commercial cloud management, manufacturing predictive maintenance, and
portfolio performance analysis, such dependencies can be non-linear and
time-variant, which makes it more challenging to extract such dependencies
through traditional methods such as Granger causality or clustering. In this
work, we present a novel deep learning model that uses multiple layers of
customized gated recurrent units (GRUs) for discovering both time lagged
behaviors as well as inter-timeseries dependencies in the form of directed
weighted graphs. We introduce a Dual-purpose recurrent neural network as a key
component, which decodes information in the temporal domain to discover lagged
dependencies within each time series, and encodes them into a set of vectors
which, collected from all component time series, form the informative inputs to
discover inter-dependencies. Though the two types of dependency discovery
are separated at different hierarchical levels, they are tightly connected and
jointly trained in an end-to-end manner. With this joint training, learning of
one type of dependency immediately impacts the learning of the other one,
leading to overall accurate dependency discovery. We empirically test our
model on synthetic time series data in which the exact form of (non-linear)
dependencies is known. We also evaluate its performance on two real-world
applications, (i) performance monitoring data from a commercial cloud provider,
which exhibit highly dynamic, non-linear, and volatile behavior and, (ii)
sensor data from a manufacturing plant. We further show how our approach is
able to capture these dependency behaviors via intuitive and interpretable
dependency graphs and use them to generate highly accurate forecasts. | http://arxiv.org/abs/1812.04448v1 | http://arxiv.org/pdf/1812.04448v1.pdf | null | [
"Xuan-Hong Dang",
"Syed Yousaf Shah",
"Petros Zerfos"
] | [
"Time Series"
] | 1,544,140,800,000 | [] | 192,139 |
226,088 | https://paperswithcode.com/paper/c-3-compositional-counterfactual-constrastive | 2106.08914 | $C^3$: Compositional Counterfactual Contrastive Learning for Video-grounded Dialogues | Video-grounded dialogue systems aim to integrate video understanding and dialogue understanding to generate responses that are relevant to both the dialogue and video context. Most existing approaches employ deep learning models and have achieved remarkable performance, given the relatively small datasets available. However, the results are partly accomplished by exploiting biases in the datasets rather than developing multimodal reasoning, resulting in limited generalization. In this paper, we propose a novel approach of Compositional Counterfactual Contrastive Learning ($C^3$) to develop contrastive training between factual and counterfactual samples in video-grounded dialogues. Specifically, we design factual/counterfactual sampling based on the temporal steps in videos and tokens in dialogues and propose contrastive loss functions that exploit object-level or action-level variance. Different from prior approaches, we focus on contrastive hidden state representations among compositional output tokens to optimize the representation space in a generation setting. We achieved promising performance gains on the Audio-Visual Scene-Aware Dialogues (AVSD) benchmark and showed the benefits of our approach in grounding video and dialogue context. | https://arxiv.org/abs/2106.08914v1 | https://arxiv.org/pdf/2106.08914v1.pdf | null | [
"Hung Le",
"Nancy F. Chen",
"Steven C. H. Hoi"
] | [
"Contrastive Learning",
"Dialogue Understanding",
"Video Understanding"
] | 1,623,801,600,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] | 141,886 |
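A common concrete form of the contrastive training referenced in the record above is an InfoNCE-style loss over paired embeddings. The sketch below is a generic batch version, not the paper's object- or action-level loss; shapes and names are assumptions.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss for a batch of embedding pairs.

    Each anchor's positive is the same-index row of `positives`; every other
    row in the batch acts as a negative.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```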
201,055 | https://paperswithcode.com/paper/switching-variational-auto-encoders-for-noise | 2102.04144 | Switching Variational Auto-Encoders for Noise-Agnostic Audio-visual Speech Enhancement | Recently, audio-visual speech enhancement has been tackled in the unsupervised settings based on variational auto-encoders (VAEs), where during training only clean data is used to train a generative model for speech, which at test time is combined with a noise model, e.g. nonnegative matrix factorization (NMF), whose parameters are learned without supervision. Consequently, the proposed model is agnostic to the noise type. When visual data are clean, audio-visual VAE-based architectures usually outperform the audio-only counterpart. The opposite happens when the visual data are corrupted by clutter, e.g. the speaker not facing the camera. In this paper, we propose to find the optimal combination of these two architectures through time. More precisely, we introduce the use of a latent sequential variable with Markovian dependencies to switch between different VAE architectures through time in an unsupervised manner: leading to switching variational auto-encoder (SwVAE). We propose a variational factorization to approximate the computationally intractable posterior distribution. We also derive the corresponding variational expectation-maximization algorithm to estimate the parameters of the model and enhance the speech signal. Our experiments demonstrate the promising performance of SwVAE. | https://arxiv.org/abs/2102.04144v1 | https://arxiv.org/pdf/2102.04144v1.pdf | null | [
"Mostafa Sadeghi",
"Xavier Alameda-Pineda"
] | [
"Speech Enhancement"
] | 1,612,742,400,000 | [
{
"code_snippet_url": "https://github.com/AntixK/PyTorch-VAE/blob/8700d245a9735640dda458db4cf40708caf2e77f/models/vanilla_vae.py#L8",
"description": "A **Variational Autoencoder** is a type of likelihood-based generative model. It consists of an encoder, that takes in data $x$ as input and transforms this into a latent representation $z$, and a decoder, that takes a latent representation $z$ and returns a reconstruction $\\hat{x}$. Inference is performed via variational inference to approximate the posterior of the model.",
"full_name": "Variational Autoencoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "VAE",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] | 74,743 |
306,801 | https://paperswithcode.com/paper/poetictts-controllable-poetry-reading-for | 2207.05549 | PoeticTTS -- Controllable Poetry Reading for Literary Studies | Speech synthesis for poetry is challenging due to specific intonation patterns inherent to poetic speech. In this work, we propose an approach to synthesise poems with almost human like naturalness in order to enable literary scholars to systematically examine hypotheses on the interplay between text, spoken realisation, and the listener's perception of poems. To meet these special requirements for literary studies, we resynthesise poems by cloning prosodic values from a human reference recitation, and afterwards make use of fine-grained prosody control to manipulate the synthetic speech in a human-in-the-loop setting to alter the recitation w.r.t. specific phenomena. We find that finetuning our TTS model on poetry captures poetic intonation patterns to a large extent which is beneficial for prosody cloning and manipulation and verify the success of our approach both in an objective evaluation as well as in human studies. | https://arxiv.org/abs/2207.05549v1 | https://arxiv.org/pdf/2207.05549v1.pdf | null | [
"Julia Koch",
"Florian Lux",
"Nadja Schauffler",
"Toni Bernhart",
"Felix Dieterle",
"Jonas Kuhn",
"Sandra Richter",
"Gabriel Viehhauser",
"Ngoc Thang Vu"
] | [
"Speech Synthesis"
] | 1,657,497,600,000 | [] | 63,743 |
75,398 | https://paperswithcode.com/paper/random-projection-in-deep-neural-networks | 1812.09489 | Random Projection in Deep Neural Networks | This work investigates the ways in which deep learning methods can benefit
from random projection (RP), a classic linear dimensionality reduction method.
We focus on two areas where, as we have found, employing RP techniques can
improve deep models: training neural networks on high-dimensional data and
initialization of network parameters. Training deep neural networks (DNNs) on
sparse, high-dimensional data with no exploitable structure implies a network
architecture with an input layer that has a huge number of weights, which often
makes training infeasible. We show that this problem can be solved by
prepending the network with an input layer whose weights are initialized with
an RP matrix. We propose several modifications to the network architecture and
training regime that makes it possible to efficiently train DNNs with a learnable
RP layer on data with as many as tens of millions of input features and
training examples. In comparison to the state-of-the-art methods, neural
networks with RP layer achieve competitive performance or improve the results
on several extremely high-dimensional real-world datasets. The second area
where the application of RP techniques can be beneficial for training deep
models is weight initialization. Setting the initial weights in DNNs to
elements of various RP matrices enabled us to train residual deep networks to
higher levels of performance. | http://arxiv.org/abs/1812.09489v1 | http://arxiv.org/pdf/1812.09489v1.pdf | null | [
"Piotr Iwo Wójcik"
] | [
"Dimensionality Reduction"
] | 1,545,436,800,000 | [] | 160,899 |
200,685 | https://paperswithcode.com/paper/dual-embedding-based-neural-collaborative | 2102.02549 | Dual-embedding based Neural Collaborative Filtering for Recommender Systems | Among various recommender techniques, collaborative filtering (CF) is the most successful one. And a key problem in CF is how to represent users and items. Previous works usually represent a user (an item) as a vector of latent factors (aka. \textit{embedding}) and then model the interactions between users and items based on the representations. Despite its effectiveness, we argue that it's insufficient to yield satisfactory embeddings for collaborative filtering. Inspired by the idea of SVD++ that represents users based on themselves and their interacted items, we propose a general collaborative filtering framework named DNCF, short for Dual-embedding based Neural Collaborative Filtering, to utilize historical interactions to enhance the representation. In addition to learning the primitive embedding for a user (an item), we introduce an additional embedding from the perspective of the interacted items (users) to augment the user (item) representation. Extensive experiments on four publicly datasets demonstrated the effectiveness of our proposed DNCF framework by comparing its performance with several traditional matrix factorization models and other state-of-the-art deep learning based recommender models. | https://arxiv.org/abs/2102.02549v2 | https://arxiv.org/pdf/2102.02549v2.pdf | null | [
"Gongshan He",
"Dongxing Zhao",
"Lixin Ding"
] | [
"Collaborative Filtering",
"Recommendation Systems"
] | 1,612,396,800,000 | [] | 121,820 |
72,786 | https://paperswithcode.com/paper/new-adaptive-algorithms-for-online | null | New Adaptive Algorithms for Online Classification | We propose a general framework to online learning for classification problems with time-varying potential functions in the adversarial setting. This framework allows to design and prove relative mistake bounds for any generic loss function. The mistake bounds can be specialized for the hinge loss, allowing to recover and improve the bounds of known online classification algorithms. By optimizing the general bound we derive a new online classification algorithm, called NAROW, that hybridly uses adaptive- and fixed- second order information. We analyze the properties of the algorithm and illustrate its performance using synthetic dataset. | http://papers.nips.cc/paper/4017-new-adaptive-algorithms-for-online-classification | http://papers.nips.cc/paper/4017-new-adaptive-algorithms-for-online-classification.pdf | NeurIPS 2010 12 | [
"Francesco Orabona",
"Koby Crammer"
] | [
"Classification",
"Classification",
"online learning"
] | 1,291,161,600,000 | [] | 168,550 |
317,863 | https://paperswithcode.com/paper/automated-ischemic-stroke-lesion-segmentation | 2209.09546 | Automated ischemic stroke lesion segmentation from 3D MRI | Ischemic Stroke Lesion Segmentation challenge (ISLES 2022) offers a platform for researchers to compare their solutions to 3D segmentation of ischemic stroke regions from 3D MRIs. In this work, we describe our solution to ISLES 2022 segmentation task. We re-sample all images to a common resolution, use two input MRI modalities (DWI and ADC) and train SegResNet semantic segmentation network from MONAI. The final submission is an ensemble of 15 models (from 3 runs of 5-fold cross validation). Our solution (team name NVAUTO) achieves the top place in terms of Dice metric (0.824), and overall rank 2 (based on the combined metric ranking). | https://arxiv.org/abs/2209.09546v2 | https://arxiv.org/pdf/2209.09546v2.pdf | null | [
"Md Mahfuzur Rahman Siddique",
"Dong Yang",
"Yufan He",
"Daguang Xu",
"Andriy Myronenko"
] | [
"Ischemic Stroke Lesion Segmentation",
"Lesion Segmentation",
"Semantic Segmentation"
] | 1,663,632,000,000 | [] | 95,079 |
231,468 | https://paperswithcode.com/paper/continuous-variable-neural-network-quantum | 2107.07105 | Continuous-variable neural-network quantum states and the quantum rotor model | We initiate the study of neural-network quantum state algorithms for analyzing continuous-variable lattice quantum systems in first quantization. A simple family of continuous-variable trial wavefunctions is introduced which naturally generalizes the restricted Boltzmann machine (RBM) wavefunction introduced for analyzing quantum spin systems. By virtue of its simplicity, the same variational Monte Carlo training algorithms that have been developed for ground state determination and time evolution of spin systems have natural analogues in the continuum. We offer a proof of principle demonstration in the context of ground state determination of a stoquastic quantum rotor Hamiltonian. Results are compared against those obtained from partial differential equation (PDE) based scalable eigensolvers. This study serves as a benchmark against which future investigation of continuous-variable neural quantum states can be compared, and points to the need to consider deep network architectures and more sophisticated training algorithms. | https://arxiv.org/abs/2107.07105v1 | https://arxiv.org/pdf/2107.07105v1.pdf | null | [
"James Stokes",
"Saibal De",
"Shravan Veerapaneni",
"Giuseppe Carleo"
] | [
"Quantization",
"Variational Monte Carlo"
] | 1,626,307,200,000 | [
{
"code_snippet_url": null,
"description": "**Restricted Boltzmann Machines**, or **RBMs**, are two-layer generative neural networks that learn a probability distribution over the inputs. They are a special class of Boltzmann Machine in that they have a restricted number of connections between visible and hidden units. Every node in the visible layer is connected to every node in the hidden layer, but no nodes in the same group are connected. RBMs are usually trained using the contrastive divergence learning procedure.\r\n\r\nImage Source: [here](https://medium.com/datatype/restricted-boltzmann-machine-a-complete-analysis-part-1-introduction-model-formulation-1a4404873b3)",
"full_name": "Restricted Boltzmann Machine",
"introduced_year": 1986,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Restricted Boltzmann Machine",
"source_title": null,
"source_url": null
}
] | 141,496 |
90,646 | https://paperswithcode.com/paper/coalt-a-software-for-comparing-automatic | null | CoALT: A Software for Comparing Automatic Labelling Tools | Speech-text alignment tools are frequently used in speech technology and research. In this paper, we propose a GPL software CoALT (Comparing Automatic Labelling Tools) for comparing two automatic labellers or two speech-text alignment tools, ranking them and displaying statistics about their differences. The main feature of CoALT is that a user can define its own criteria for evaluating and comparing the speech-text alignment tools since the required quality for labelling depends on the targeted application. Beyond ranking, our tool provides useful statistics for each labeller and above all about their differences and can emphasize the drawbacks and advantages of each labeller. We have applied our software for the French and English languages but it can be used for another language by simply defining the list of the phonetic symbols and optionally a set of phonetic rules. In this paper we present the usage of the software for comparing two automatic labellers on the corpus TIMIT. Moreover, as automatic labelling tools are configurable (number of GMMs, phonetic lexicon, acoustic parameterisation), we then present how CoALT allows to determine the best parameters for our automatic labelling tool. | https://aclanthology.org/L12-1042 | https://aclanthology.org/L12-1042.pdf | LREC 2012 5 | [
"Dominique Fohr",
"Odile Mella"
] | [
"Speech Recognition",
"Speech Synthesis"
] | 1,335,830,400,000 | [] | 61,523 |
99,072 | https://paperswithcode.com/paper/twittermancer-predicting-interactions-on | 1904.11119 | TwitterMancer: Predicting Interactions on Twitter Accurately | This paper investigates the interplay between different types of user
interactions on Twitter, with respect to predicting missing or unseen
interactions. For example, given a set of retweet interactions between Twitter
users, how accurately can we predict reply interactions? Is it more difficult
to predict retweet or quote interactions between a pair of accounts? Also, how
important is time locality, and which features of interaction patterns are most
important to enable accurate prediction of specific Twitter interactions? Our
empirical study of Twitter interactions contributes initial answers to these
questions.
We have crawled an extensive dataset of Greek-speaking Twitter accounts and
their follow, quote, retweet, reply interactions over a period of a month.
We find we can accurately predict many interactions of Twitter users.
Interestingly, the most predictive features vary with the user profiles, and
are not the same across all users.
For example, for a pair of users that interact with a large number of other
Twitter users, we find that certain "higher-dimensional" triads, i.e., triads
that involve multiple types of interactions, are very informative, whereas for
less active Twitter users, certain in-degrees and out-degrees play a major
role. Finally, we provide various other insights on Twitter user behavior.
Our code and data are available at https://github.com/twittermancer/.
Keywords: Graph mining, machine learning, social media, social networks | http://arxiv.org/abs/1904.11119v1 | http://arxiv.org/pdf/1904.11119v1.pdf | null | [
"Konstantinos Sotiropoulos",
"John W. Byers",
"Polyvios Pratikakis",
"Charalampos E. Tsourakakis"
] | [
"Graph Mining"
] | 1,556,150,400,000 | [] | 47,814 |
197,744 | https://paperswithcode.com/paper/challenges-and-approaches-to-time-series | 2101.04224 | Challenges and approaches to time-series forecasting in data center telemetry: A Survey | Time-series forecasting has been an important research domain for so many years. Its applications include ECG predictions, sales forecasting, weather conditions, even COVID-19 spread predictions. These applications have motivated many researchers to figure out an optimal forecasting approach, but the modeling approach also changes as the application domain changes. This work has focused on reviewing different forecasting approaches for telemetry data predictions collected at data centers. Forecasting of telemetry data is a critical feature of network and data center management products. However, there are multiple options of forecasting approaches that range from a simple linear statistical model to high capacity deep learning architectures. In this paper, we attempted to summarize and evaluate the performance of well known time series forecasting techniques. We hope that this evaluation provides a comprehensive summary to innovate in forecasting approaches for telemetry data. | https://arxiv.org/abs/2101.04224v2 | https://arxiv.org/pdf/2101.04224v2.pdf | null | [
"Shruti Jadon",
"Jan Kanty Milczek",
"Ajit Patankar"
] | [
"Time Series",
"Time Series Forecasting"
] | 1,610,323,200,000 | [] | 130,720 |
244,872 | https://paperswithcode.com/paper/meta-attack-class-agnostic-and-model-agnostic | null | Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack | Modern deep neural networks are often vulnerable to adversarial examples. Most existing attack methods focus on crafting adversarial examples in the digital domain, while only limited works study physical adversarial attack. However, it is more challenging to generate effective adversarial examples in the physical world due to many uncontrollable physical dynamics. Most current physical attack methods aim to generate robust physical adversarial examples by simulating all possible physical dynamics. When attacking new images or new DNN models, they require expensive manual efforts for simulating physical dynamics and considerable time for iteratively optimizing for each image. To tackle these issues, we propose a class-agnostic and model-agnostic physical adversarial attack model (Meta-Attack), which is able to not only generate robust physical adversarial examples by simulating color and shape distortions, but also generalize to attacking novel images and novel DNN models by accessing a few digital and physical images. To the best of our knowledge, this is the first work to formulate the physical attack as a few-shot learning problem. Here, the training task is redefined as the composition of a support set, a query set, and a target DNN model. Under the few-shot setting, we design a novel class-agnostic and model-agnostic meta-learning algorithm to enhance the generalization ability of our method. Extensive experimental results on two benchmark datasets with four challenging experimental settings verify the superior robustness and generalization of our method by comparing to state-of-the-art physical attack methods. | http://openaccess.thecvf.com//content/ICCV2021/html/Feng_Meta-Attack_Class-Agnostic_and_Model-Agnostic_Physical_Adversarial_Attack_ICCV_2021_paper.html | http://openaccess.thecvf.com//content/ICCV2021/papers/Feng_Meta-Attack_Class-Agnostic_and_Model-Agnostic_Physical_Adversarial_Attack_ICCV_2021_paper.pdf | ICCV 2021 10 | [
"Weiwei Feng",
"Baoyuan Wu",
"Tianzhu Zhang",
"Yong Zhang",
"Yongdong Zhang"
] | [
"Adversarial Attack",
"Few-Shot Learning",
"Meta-Learning"
] | 1,609,459,200,000 | [] | 78,015 |
292,705 | https://paperswithcode.com/paper/attribution-based-task-specific-pruning-for | 2205.04157 | Attribution-based Task-specific Pruning for Multi-task Language Models | Multi-task language models show outstanding performance for various natural language understanding tasks with only a single model. However, these language models inevitably utilize unnecessary large-scale model parameters, even when they are used for only a specific task. In this paper, we propose a novel training-free task-specific pruning method for multi-task language models. Specifically, we utilize an attribution method to compute the importance of each neuron for performing a specific task. Then, we prune task-specifically unimportant neurons using this computed importance. Experimental results on the six widely-used datasets show that our proposed pruning method significantly outperforms baseline compression methods. Also, we extend our method to be applicable in a low-resource setting, where the number of labeled datasets is insufficient. | https://arxiv.org/abs/2205.04157v1 | https://arxiv.org/pdf/2205.04157v1.pdf | null | [
"Nakyeong Yang",
"Yunah Jang",
"Hwanhee Lee",
"Seohyeong Jung",
"Kyomin Jung"
] | [
"Natural Language Understanding"
] | 1,652,054,400,000 | [] | 126,228 |
237,956 | https://paperswithcode.com/paper/deep-learning-of-transferable-mimo-channel | 2108.13831 | Deep Learning of Transferable MIMO Channel Modes for 6G V2X Communications | In the emerging high mobility Vehicle-to-Everything (V2X) communications using millimeter Wave (mmWave) and sub-THz, Multiple-Input Multiple-Output (MIMO) channel estimation is an extremely challenging task. At mmWaves/sub-THz frequencies, MIMO channels exhibit few leading paths in the space-time domain (i.e., directions or arrival/departure and delays). Algebraic Low-rank (LR) channel estimation exploits space-time channel sparsity through the computation of position-dependent MIMO channel eigenmodes leveraging recurrent training vehicle passages in the coverage cell. LR requires vehicles' geographical positions and tens to hundreds of training vehicles' passages for each position, leading to significant complexity and control signalling overhead. Here we design a DL-based LR channel estimation method to infer MIMO channel eigenmodes in V2X urban settings, starting from a single LS channel estimate and without needing vehicle's position information. Numerical results show that the proposed method attains comparable Mean Squared Error (MSE) performance as the position-based LR. Moreover, we show that the proposed model can be trained on a reference scenario and be effectively transferred to urban contexts with different space-time channel features, providing comparable MSE performance without an explicit transfer learning procedure. This result eases the deployment in arbitrary dense urban scenarios. | https://arxiv.org/abs/2108.13831v1 | https://arxiv.org/pdf/2108.13831v1.pdf | null | [
"Lorenzo Cazzella",
"Dario Tagliaferri",
"Marouan Mizmizi",
"Damiano Badini",
"Christian Mazzucco",
"Matteo Matteucci",
"Umberto Spagnolini"
] | [
"Transfer Learning"
] | 1,630,368,000,000 | [] | 158,899 |
301,905 | https://paperswithcode.com/paper/the-open-catalyst-2022-oc22-dataset-and | 2206.08917 | The Open Catalyst 2022 (OC22) Dataset and Challenges for Oxide Electrocatalysis | Computational catalysis and machine learning communities have made considerable progress in developing machine learning models for catalyst discovery and design. Yet, a general machine learning potential that spans the chemical space of catalysis is still out of reach. A significant hurdle is obtaining access to training data across a wide range of materials. One important class of materials where data is lacking are oxides, which inhibits models from studying the Oxygen Evolution Reaction and oxide electrocatalysis more generally. To address this we developed the Open Catalyst 2022(OC22) dataset, consisting of 62,521 Density Functional Theory (DFT) relaxations (~9,884,504 single point calculations) across a range of oxide materials, coverages, and adsorbates (*H, *O, *N, *C, *OOH, *OH, *OH2, *O2, *CO). We define generalized tasks to predict the total system energy that are applicable across catalysis, develop baseline performance of several graph neural networks (SchNet, DimeNet++, ForceNet, SpinConv, PaiNN, GemNet-dT, GemNet-OC), and provide pre-defined dataset splits to establish clear benchmarks for future efforts. For all tasks, we study whether combining datasets leads to better results, even if they contain different materials or adsorbates. Specifically, we jointly train models on Open Catalyst 2020 (OC20) Dataset and OC22, or fine-tune pretrained OC20 models on OC22. In the most general task, GemNet-OC sees a ~32% improvement in energy predictions through fine-tuning and a ~9% improvement in force predictions via joint training. Surprisingly, joint training on both the OC20 and much smaller OC22 datasets also improves total energy predictions on OC20 by ~19%. The dataset and baseline models are open sourced, and a public leaderboard will follow to encourage continued community developments on the total energy tasks and data. | https://arxiv.org/abs/2206.08917v1 | https://arxiv.org/pdf/2206.08917v1.pdf | null | [
"Richard Tran",
"Janice Lan",
"Muhammed Shuaibi",
"Siddharth Goyal",
"Brandon M. Wood",
"Abhishek Das",
"Javier Heras-Domingo",
"Adeesh Kolluru",
"Ammar Rizvi",
"Nima Shoghi",
"Anuroop Sriram",
"Zachary Ulissi",
"C. Lawrence Zitnick"
] | [
"Total Energy"
] | 1,655,424,000,000 | [] | 75,420 |
247,054 | https://paperswithcode.com/paper/learning-invariant-representations-on | null | Learning Invariant Representations on Multilingual Language Models for Unsupervised Cross-Lingual Transfer | Recent advances in neural modeling have produced deep multilingual language models capable of extracting cross-lingual knowledge from unparallel texts, as evidenced by their decent zero-shot transfer performance. While analyses have attributed this success to having cross-lingually shared representations, its contribution to transfer performance remains unquantified. Towards a better understanding, in this work, we first make the following observations through empirical analysis: (1) invariance of the feature representations strongly correlates with transfer performance, and (2) distributional shift in class priors between data in the source and target languages negatively affects performance---an issue that is largely overlooked in prior work. Based on our findings, we propose an unsupervised cross-lingual learning method, called importance-weighted domain adaptation (IWDA), that performs feature alignment, prior shift estimation, and correction. Experiment results demonstrate its superiority under large prior shifts. In addition, our method delivers further performance gains when combined with existing semi-supervised learning techniques. | https://openreview.net/forum?id=k7-s5HSSPE5 | https://openreview.net/pdf?id=k7-s5HSSPE5 | ICLR 2022 4 | [
"Ruicheng Xian",
"Heng Ji",
"Han Zhao"
] | [
"Cross-Lingual Transfer",
"Domain Adaptation"
] | 1,632,873,600,000 | [] | 117,013 |
293,181 | https://paperswithcode.com/paper/from-distillation-to-hard-negative-sampling | 2205.04733 | From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective | Neural retrievers based on dense representations combined with Approximate Nearest Neighbors search have recently received a lot of attention, owing their success to distillation and/or better sampling of examples for training -- while still relying on the same backbone architecture. In the meantime, sparse representation learning fueled by traditional inverted indexing techniques has seen a growing interest, inheriting from desirable IR priors such as explicit lexical matching. While some architectural variants have been proposed, a lesser effort has been put in the training of such models. In this work, we build on SPLADE -- a sparse expansion-based retriever -- and show to which extent it is able to benefit from the same training improvements as dense models, by studying the effect of distillation, hard-negative mining as well as the Pre-trained Language Model initialization. We furthermore study the link between effectiveness and efficiency, on in-domain and zero-shot settings, leading to state-of-the-art results in both scenarios for sufficiently expressive models. | https://arxiv.org/abs/2205.04733v2 | https://arxiv.org/pdf/2205.04733v2.pdf | null | [
"Thibault Formal",
"Carlos Lassance",
"Benjamin Piwowarski",
"Stéphane Clinchant"
] | [
"Language Modelling",
"Representation Learning"
] | 1,652,140,800,000 | [] | 99,811 |
103,510 | https://paperswithcode.com/paper/sequence-tagging-with-contextual-and-non | 1906.01569 | Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation | Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works best across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting. | https://arxiv.org/abs/1906.01569v1 | https://arxiv.org/pdf/1906.01569v1.pdf | ACL 2019 7 | [
"Benjamin Heinzerling",
"Michael Strube"
] | [
"Multilingual Named Entity Recognition",
"Multilingual NLP",
"Named Entity Recognition",
"Named Entity Recognition",
"Part-Of-Speech Tagging"
] | 1,559,606,400,000 | [
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L584",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\\Phi(x)$, where $\\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswithcode.com/method/relu) ($x\\mathbf{1}_{x>0}$). Consequently the GELU can be thought of as a smoother ReLU.\r\n\r\n$$\\text{GELU}\\left(x\\right) = x{P}\\left(X\\leq{x}\\right) = x\\Phi\\left(x\\right) = x \\cdot \\frac{1}{2}\\left[1 + \\text{erf}(x/\\sqrt{2})\\right],$$\r\nif $X\\sim \\mathcal{N}(0,1)$.\r\n\r\nOne can approximate the GELU with\r\n$0.5x\\left(1+\\tanh\\left[\\sqrt{2/\\pi}\\left(x + 0.044715x^{3}\\right)\\right]\\right)$ or $x\\sigma\\left(1.702x\\right),$\r\nbut PyTorch's exact implementation is sufficiently fast such that these approximations may be unnecessary. (See also the [SiLU](https://paperswithcode.com/method/silu) $x\\sigma(x)$ which was also coined in the paper that introduced the GELU.)\r\n\r\nGELUs are used in [GPT-3](https://paperswithcode.com/method/gpt-3), [BERT](https://paperswithcode.com/method/bert), and most other Transformers.",
"full_name": "Gaussian Error Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. The rectified linear unit, or ReLU, has been the most popular in the past decade, although the choice is architecture dependent and many alternatives have emerged in recent years. In this section, you will find a constantly updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v4"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "",
"description": "**WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is:\r\n\r\n1. Initialize the word unit inventory with all the characters in the text.\r\n2. Build a language model on the training data using the inventory from 1.\r\n3. Generate a new word unit by combining two units out of the current word inventory to increment the word unit inventory by one. Choose the new word unit out of all the possible ones that increases the likelihood on the training data the most when added to the model.\r\n4. Goto 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.\r\n\r\nText: [Source](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)\r\n\r\nImage: WordPiece as used in [BERT](https://paperswithcode.com/method/bert)",
"full_name": "WordPiece",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "WordPiece",
"source_title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"source_url": "http://arxiv.org/abs/1609.08144v2"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**fastText** embeddings exploit subword information to construct word embeddings. Representations are learnt of character $n$-grams, and words represented as the sum of the $n$-gram vectors. This extends the word2vec type models with subword information. This helps the embeddings understand suffixes and prefixes. Once a word is represented using character $n$-grams, a skipgram model is trained to learn the embeddings.",
"full_name": "fastText",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Word Embeddings",
"parent": null
},
"name": "fastText",
"source_title": "Enriching Word Vectors with Subword Information",
"source_url": "http://arxiv.org/abs/1607.04606v2"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
}
] | 30,210 |
293,482 | https://paperswithcode.com/paper/group-r-cnn-for-weakly-semi-supervised-object | 2205.05920 | Group R-CNN for Weakly Semi-supervised Object Detection with Points | We study the problem of weakly semi-supervised object detection with points (WSSOD-P), where the training data is combined by a small set of fully annotated images with bounding boxes and a large set of weakly-labeled images with only a single point annotated for each instance. The core of this task is to train a point-to-box regressor on well-labeled images that can be used to predict credible bounding boxes for each point annotation. We challenge the prior belief that existing CNN-based detectors are not compatible with this task. Based on the classic R-CNN architecture, we propose an effective point-to-box regressor: Group R-CNN. Group R-CNN first uses instance-level proposal grouping to generate a group of proposals for each point annotation and thus can obtain a high recall rate. To better distinguish different instances and improve precision, we propose instance-level proposal assignment to replace the vanilla assignment strategy adopted in the original R-CNN methods. As naive instance-level assignment brings converging difficulty, we propose instance-aware representation learning which consists of instance-aware feature enhancement and instance-aware parameter generation to overcome this issue. Comprehensive experiments on the MS-COCO benchmark demonstrate the effectiveness of our method. Specifically, Group R-CNN significantly outperforms the prior method Point DETR by 3.9 mAP with 5% well-labeled images, which is the most challenging scenario. The source code can be found at https://github.com/jshilong/GroupRCNN | https://arxiv.org/abs/2205.05920v1 | https://arxiv.org/pdf/2205.05920v1.pdf | CVPR 2022 1 | [
"Shilong Zhang",
"Zhuoran Yu",
"Liyang Liu",
"Xinjiang Wang",
"Aojun Zhou",
"Kai Chen"
] | [
"Object Detection",
"Object Detection",
"Representation Learning",
"Semi-Supervised Object Detection"
] | 1,652,313,600,000 | [
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k-1}$ and $1-\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/5c0264915ab43485adc576f88971fc3d42b10445/transformer/Modules.py#L7",
"description": "**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$\r\n\r\nIf we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \\cdot k = \\sum_{i=1}^{d_k} u_iv_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\\sqrt{d_k}$.",
"full_name": "Scaled Dot-Product Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are a component used in neural networks to model long-range interaction, for example across a text in NLP. The key idea is to build shortcuts between a context vector and the input, to allow a model to attend to different parts. Below you can find a continuously updating list of attention mechanisms.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Scaled Dot-Product Attention",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "http://arxiv.org/abs/1706.03762v5"
},
{
"code_snippet_url": null,
"description": "A **Feedforward Network**, or a **Multilayer Perceptron (MLP)**, is a neural network with solely densely connected layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through units $h$ (of which there can be many layers) to predict a target $y$. Activation functions are generally chosen to be non-linear to allow for flexible functional approximation.\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Feedforward Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Feedforward Network",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Detr**, or **Detection Transformer**, is a set-based object detector using a [Transformer](https://paperswithcode.com/method/transformer) on top of a convolutional backbone. It uses a conventional CNN backbone to learn a 2D representation of an input image. The model flattens it and supplements it with a positional encoding before passing it into a transformer encoder. A transformer decoder then takes as input a small fixed number of learned positional embeddings, which we call object queries, and additionally attends to the encoder output. We pass each output embedding of the decoder to a shared feed forward network (FFN) that predicts either a detection (class\r\nand bounding box) or a “no object” class.",
"full_name": "Detection Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Detr",
"source_title": "End-to-End Object Detection with Transformers",
"source_url": "https://arxiv.org/abs/2005.12872v3"
}
] | 6,235 |
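The method entries in the record above quote the scaled dot-product and multi-head attention formulas. As a companion, here is a minimal NumPy sketch of just those two formulas (not a full Transformer); the batch size, sequence length, model width, number of heads, and the random weight matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)      # (batch, len_q, len_k)
    return softmax(scores) @ V                            # (batch, len_q, d_v)

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, num_heads):
    # Project, split into heads, attend per head, concatenate, project back.
    batch, seq, d_model = Q.shape
    d_head = d_model // num_heads

    def split(x):  # (batch, seq, d_model) -> (batch * heads, seq, d_head)
        return (x.reshape(batch, seq, num_heads, d_head)
                 .transpose(0, 2, 1, 3)
                 .reshape(batch * num_heads, seq, d_head))

    heads = scaled_dot_product_attention(split(Q @ W_q), split(K @ W_k), split(V @ W_v))
    heads = (heads.reshape(batch, num_heads, seq, d_head)
                  .transpose(0, 2, 1, 3)
                  .reshape(batch, seq, d_model))
    return heads @ W_o

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 5, 8))                            # (batch, seq_len, d_model)
W_q, W_k, W_v, W_o = (rng.normal(size=(8, 8)) * 0.1 for _ in range(4))
out = multi_head_attention(x, x, x, W_q, W_k, W_v, W_o, num_heads=2)
print(out.shape)                                          # (2, 5, 8)
```

Self-attention, as used in the Transformer encoder described above, is simply the case where the same sequence supplies the queries, keys, and values.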
264,807 | https://paperswithcode.com/paper/contrastive-object-level-pre-training-with | 2111.13651 | Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning | The goal of contrastive learning based pre-training is to leverage large quantities of unlabeled data to produce a model that can be readily adapted downstream. Current approaches revolve around solving an image discrimination task: given an anchor image, an augmented counterpart of that image, and some other images, the model must produce representations such that the distance between the anchor and its counterpart is small, and the distances between the anchor and the other images are large. There are two significant problems with this approach: (i) by contrasting representations at the image-level, it is hard to generate detailed object-sensitive features that are beneficial to downstream object-level tasks such as instance segmentation; (ii) the augmentation strategy of producing an augmented counterpart is fixed, making learning less effective at the later stages of pre-training. In this work, we introduce Curricular Contrastive Object-level Pre-training (CCOP) to tackle these problems: (i) we use selective search to find rough object regions and use them to build an inter-image object-level contrastive loss and an intra-image object-level discrimination loss into our pre-training objective; (ii) we present a curriculum learning mechanism that adaptively augments the generated regions, which allows the model to consistently acquire a useful learning signal, even in the later stages of pre-training. Our experiments show that our approach improves on the MoCo v2 baseline by a large margin on multiple object-level tasks when pre-training on multi-object scene image datasets. Code is available at https://github.com/ChenhongyiYang/CCOP. | https://arxiv.org/abs/2111.13651v2 | https://arxiv.org/pdf/2111.13651v2.pdf | null | [
"Chenhongyi Yang",
"Lichao Huang",
"Elliot J. Crowley"
] | [
"Contrastive Learning",
"Instance Segmentation",
"Semantic Segmentation"
] | 1,637,884,800,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Random Gaussian Blur** is an image data augmentation technique where we randomly blur the image using a Gaussian distribution.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Gaussian_blur)",
"full_name": "Random Gaussian Blur",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.",
"name": "Image Data Augmentation",
"parent": null
},
"name": "Random Gaussian Blur",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Feedforward Network**, or a **Multilayer Perceptron (MLP)**, is a neural network with solely densely connected layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through units $h$ (of which there can be many layers) to predict a target $y$. Activation functions are generally chosen to be non-linear to allow for flexible functional approximation.\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Feedforward Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Feedforward Network",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/facebookresearch/moco",
"description": "**MoCo v2** is an improved version of the [Momentum Contrast](https://paperswithcode.com/method/moco) self-supervised learning algorithm. Motivated by the findings presented in the [SimCLR](https://paperswithcode.com/method/simclr) paper, authors:\r\n\r\n- Replace the 1-layer fully connected layer with a 2-layer MLP head with [ReLU](https://paperswithcode.com/method/relu) for the unsupervised training stage.\r\n- Include blur augmentation.\r\n- Use cosine learning rate schedule.\r\n\r\nThese modifications enable MoCo to outperform the state-of-the-art SimCLR with a smaller batch size and fewer epochs.",
"full_name": "MoCo v2",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "MoCo v2",
"source_title": "Improved Baselines with Momentum Contrastive Learning",
"source_url": "https://arxiv.org/abs/2003.04297v1"
},
{
"code_snippet_url": "https://github.com/jefflai108/Contrastive-Predictive-Coding-PyTorch/blob/dfe687cf463668b16b0c2e205a166dbfbc9db227/src/model/model.py#L98",
"description": "**InfoNCE**, where NCE stands for Noise-Contrastive Estimation, is a type of contrastive loss function used for [self-supervised learning](https://paperswithcode.com/methods/category/self-supervised-learning).\r\n\r\nGiven a set $X = ${$x\\_{1}, \\dots, x\\_{N}$} of $N$ random samples containing one positive sample from $p\\left(x\\_{t+k}|c\\_{t}\\right)$ and $N − 1$ negative samples from the 'proposal' distribution $p\\left(x\\_{t+k}\\right)$, we optimize:\r\n\r\n$$ \\mathcal{L}\\_{N} = - \\mathbb{E}\\_{X}\\left[\\log\\frac{f\\_{k}\\left(x\\_{t+k}, c\\_{t}\\right)}{\\sum\\_{x\\_{j}\\in{X}}f\\_{k}\\left(x\\_{j}, c\\_{t}\\right)}\\right] $$\r\n\r\nOptimizing this loss will result in $f\\_{k}\\left(x\\_{t+k}, c\\_{t}\\right)$ estimating the density ratio, which is:\r\n\r\n$$ f\\_{k}\\left(x\\_{t+k}, c\\_{t}\\right) \\propto \\frac{p\\left(x\\_{t+k}|c\\_{t}\\right)}{p\\left(x\\_{t+k}\\right)} $$",
"full_name": "InfoNCE",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "InfoNCE",
"source_title": "Representation Learning with Contrastive Predictive Coding",
"source_url": "http://arxiv.org/abs/1807.03748v2"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": null,
"description": "**Selective Search** is a region proposal algorithm for object detection tasks. It starts by over-segmenting the image based on intensity of the pixels using a graph-based segmentation method by Felzenszwalb and Huttenlocher. Selective Search then takes these oversegments as initial input and performs the following steps\r\n\r\n1. Add all bounding boxes corresponding to segmented parts to the list of regional proposals\r\n2. Group adjacent segments based on similarity\r\n3. Go to step 1\r\n\r\nAt each iteration, larger segments are formed and added to the list of region proposals. Hence we create region proposals from smaller segments to larger segments in a bottom-up approach. This is what we mean by computing “hierarchical” segmentations using Felzenszwalb and Huttenlocher’s oversegments.",
"full_name": "Selective Search",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "Selective Search",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/facebookresearch/moco/blob/3631be074a0a14ab85c206631729fe035e54b525/moco/builder.py#L6",
"description": "**MoCo**, or **Momentum Contrast**, is a self-supervised learning algorithm with a contrastive loss. \r\n\r\nContrastive loss methods can be thought of as building dynamic dictionaries. The \"keys\" (tokens) in the dictionary are sampled from data (e.g., images or patches) and are represented by an encoder network. Unsupervised learning trains encoders to perform dictionary look-up: an encoded “query” should be similar to its matching key and dissimilar to others. Learning is formulated as minimizing a contrastive loss. \r\n\r\nMoCo can be viewed as a way to build large and consistent dictionaries for unsupervised learning with a contrastive loss. In MoCo, we maintain the dictionary as a queue of data samples: the encoded representations of the current mini-batch are enqueued, and the oldest are dequeued. The queue decouples the dictionary size from the mini-batch size, allowing it to be large. Moreover, as the dictionary keys come from the preceding several mini-batches, a slowly progressing key encoder, implemented as a momentum-based moving average of the query encoder, is proposed to maintain consistency.",
"full_name": "Momentum Contrast",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "MoCo",
"source_title": "Momentum Contrast for Unsupervised Visual Representation Learning",
"source_url": "https://arxiv.org/abs/1911.05722v3"
}
] | 98,570 |
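The methods attached to this record (MoCo, MoCo v2, InfoNCE) revolve around the InfoNCE contrastive objective quoted above. Below is a minimal PyTorch sketch of that loss for a batch of queries, their positive keys, and a queue of negatives; the embedding width, queue length, and the 0.07 temperature are assumptions, and this is not the reference MoCo code.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    # q: (N, D) queries, k_pos: (N, D) positive keys, queue: (K, D) negatives.
    # All embeddings are assumed to be L2-normalised.
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(1)   # (N, 1)
    l_neg = torch.einsum("nd,kd->nk", q, queue)               # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature   # (N, 1 + K)
    labels = torch.zeros(q.size(0), dtype=torch.long)         # the positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random, normalised embeddings standing in for encoder outputs.
q = F.normalize(torch.randn(8, 128), dim=1)
k = F.normalize(torch.randn(8, 128), dim=1)
queue = F.normalize(torch.randn(4096, 128), dim=1)
print(info_nce_loss(q, k, queue).item())
```

In MoCo-style training the queue entries come from a momentum-updated key encoder, which is not modelled in this sketch.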
292,881 | https://paperswithcode.com/paper/on-generalisability-of-machine-learning-based | 2205.04112 | On Generalisability of Machine Learning-based Network Intrusion Detection Systems | Many of the proposed machine learning (ML) based network intrusion detection systems (NIDSs) achieve near perfect detection performance when evaluated on synthetic benchmark datasets. Though, there is no record of if and how these results generalise to other network scenarios, in particular to real-world networks. In this paper, we investigate the generalisability property of ML-based NIDSs by extensively evaluating seven supervised and unsupervised learning models on four recently published benchmark NIDS datasets. Our investigation indicates that none of the considered models is able to generalise over all studied datasets. Interestingly, our results also indicate that the generalisability has a high degree of asymmetry, i.e., swapping the source and target domains can significantly change the classification performance. Our investigation also indicates that overall, unsupervised learning methods generalise better than supervised learning models in our considered scenarios. Using SHAP values to explain these results indicates that the lack of generalisability is mainly due to the presence of strong correspondence between the values of one or more features and Attack/Benign classes in one dataset-model combination and its absence in other datasets that have different feature distributions. | https://arxiv.org/abs/2205.04112v1 | https://arxiv.org/pdf/2205.04112v1.pdf | null | [
"Siamak Layeghy",
"Marius Portmann"
] | [
"Intrusion Detection",
"Network Intrusion Detection"
] | 1,652,054,400,000 | [
{
"code_snippet_url": "https://github.com/slundberg/shap",
"description": "**SHAP**, or **SHapley Additive exPlanations**, is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. Shapley values are approximating using Kernel SHAP, which uses a weighting kernel for the approximation, and DeepSHAP, which uses DeepLift to approximate them.",
"full_name": "Shapley Additive Explanations",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Interpretability Methods** seek to explain the predictions made by neural networks by introducing mechanisms to enduce or enforce interpretability. For example, LIME approximates the neural network with a locally interpretable model. Below you can find a continuously updating list of interpretability methods.",
"name": "Interpretability",
"parent": null
},
"name": "SHAP",
"source_title": "A Unified Approach to Interpreting Model Predictions",
"source_url": "http://arxiv.org/abs/1705.07874v2"
}
] | 139,621 |
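The record above attributes its analysis of (non-)generalisability to SHAP values. The sketch below is a hypothetical, minimal usage example on a synthetic tabular dataset with a gradient-boosting classifier; the study's actual NIDS features, models, and datasets are not reproduced, and the exact return shape of `shap_values` can vary across shap versions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a tabular NIDS feature matrix with Attack/Benign labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer produces per-sample, per-feature Shapley value estimates.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X))

# Mean |SHAP| per feature gives a simple global importance ranking.
importance = np.abs(shap_values).reshape(len(X), -1).mean(axis=0)
print(np.argsort(importance)[::-1])  # feature indices, most influential first
```

A feature that dominates this ranking on one dataset but not on another is the kind of dataset-specific correspondence the abstract points to.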
7,950 | https://paperswithcode.com/paper/a-temporally-aware-interpolation-network-for | 1803.07218 | A Temporally-Aware Interpolation Network for Video Frame Inpainting | We propose the first deep learning solution to video frame inpainting, a
challenging instance of the general video inpainting problem with applications
in video editing, manipulation, and forensics. Our task is less ambiguous than
frame interpolation and video prediction because we have access to both the
temporal context and a partial glimpse of the future, allowing us to better
evaluate the quality of a model's predictions objectively. We devise a pipeline
composed of two modules: a bidirectional video prediction module, and a
temporally-aware frame interpolation module. The prediction module makes two
intermediate predictions of the missing frames, one conditioned on the
preceding frames and the other conditioned on the following frames, using a
shared convolutional LSTM-based encoder-decoder. The interpolation module
blends the intermediate predictions to form the final result. Specifically, it
utilizes time information and hidden activations from the video prediction
module to resolve disagreements between the predictions. Our experiments
demonstrate that our approach produces more accurate and qualitatively
satisfying results than a state-of-the-art video prediction method and many
strong frame inpainting baselines. | http://arxiv.org/abs/1803.07218v2 | http://arxiv.org/pdf/1803.07218v2.pdf | null | [
"Ximeng Sun",
"Ryan Szeto",
"Jason J. Corso"
] | [
"Video Inpainting",
"Video Prediction"
] | 1,521,504,000,000 | [] | 77,074 |
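The abstract above describes predicting the missing frames twice, once forward from the preceding frames and once backward from the following frames, and then blending the two intermediate predictions. The sketch below shows only a hypothetical linear, time-weighted blend; the paper's interpolation module is a learned network that also consumes hidden activations, which is not modelled here.

```python
import numpy as np

def blend_predictions(forward_preds, backward_preds):
    """Blend two stacks of predicted frames of shape (T, H, W, C), trusting the
    forward prediction near the preceding context and the backward prediction
    near the following context. The linear schedule is an assumption."""
    T = forward_preds.shape[0]
    alpha = np.linspace(1.0, 0.0, num=T).reshape(T, 1, 1, 1)
    return alpha * forward_preds + (1.0 - alpha) * backward_preds

forward_preds = np.random.rand(5, 64, 64, 3)   # 5 missing frames predicted forward in time
backward_preds = np.random.rand(5, 64, 64, 3)  # the same 5 frames predicted backward in time
print(blend_predictions(forward_preds, backward_preds).shape)  # (5, 64, 64, 3)
```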
293,315 | https://paperswithcode.com/paper/generalized-fast-multichannel-nonnegative | 2205.05330 | Generalized Fast Multichannel Nonnegative Matrix Factorization Based on Gaussian Scale Mixtures for Blind Source Separation | This paper describes heavy-tailed extensions of a state-of-the-art versatile blind source separation method called fast multichannel nonnegative matrix factorization (FastMNMF) from a unified point of view. The common way of deriving such an extension is to replace the multivariate complex Gaussian distribution in the likelihood function with its heavy-tailed generalization, e.g., the multivariate complex Student's t and leptokurtic generalized Gaussian distributions, and tailor-make the corresponding parameter optimization algorithm. Using a wider class of heavy-tailed distributions called a Gaussian scale mixture (GSM), i.e., a mixture of Gaussian distributions whose variances are perturbed by positive random scalars called impulse variables, we propose GSM-FastMNMF and develop an expectation-maximization algorithm that works even when the probability density function of the impulse variables has no analytical expression. We show that existing heavy-tailed FastMNMF extensions are instances of GSM-FastMNMF and derive a new instance based on the generalized hyperbolic distribution that includes the normal-inverse Gaussian, Student's t, and Gaussian distributions as the special cases. Our experiments show that the normal-inverse Gaussian FastMNMF outperforms the state-of-the-art FastMNMF extensions and ILRMA model in speech enhancement and separation in terms of the signal-to-distortion ratio. | https://arxiv.org/abs/2205.05330v1 | https://arxiv.org/pdf/2205.05330v1.pdf | null | [
"Mathieu Fontaine",
"Kouhei Sekiguchi",
"Aditya Nugraha",
"Yoshiaki Bando",
"Kazuyoshi Yoshii"
] | [
"Speech Enhancement"
] | 1,652,227,200,000 | [] | 62,217 |
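The abstract above defines a Gaussian scale mixture as a Gaussian whose variance is perturbed by a positive random impulse variable. The NumPy sketch below draws from one such mixture: with an inverse-gamma impulse variable the samples follow Student's t, one of the special cases the abstract names. The degrees-of-freedom value is an arbitrary choice, and none of the FastMNMF machinery is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def gsm_samples(n, nu=6.0):
    """Draw n samples x = sqrt(phi) * z with z ~ N(0, 1) and an inverse-gamma
    impulse variable phi ~ IG(nu/2, nu/2); this mixing density yields Student's t
    with nu degrees of freedom."""
    phi = 1.0 / rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # impulse variables
    z = rng.normal(size=n)
    return np.sqrt(phi) * z

x = gsm_samples(200_000, nu=6.0)
# Heavier tails than a Gaussian: the sample excess kurtosis is clearly positive
# (the theoretical value for nu = 6 is 3; a plain Gaussian would give ~0).
print(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)
```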
2,583 | https://paperswithcode.com/paper/a-double-deep-spatio-angular-learning | 1805.10078 | A Double-Deep Spatio-Angular Learning Framework for Light Field based Face Recognition | Face recognition has attracted increasing attention due to its wide range of
applications, but it is still challenging when facing large variations in the
biometric data characteristics. Lenslet light field cameras have recently come
into prominence to capture rich spatio-angular information, thus offering new
possibilities for advanced biometric recognition systems. This paper proposes a
double-deep spatio-angular learning framework for light field based face
recognition, which is able to learn both texture and angular dynamics in
sequence using convolutional representations; this is a novel recognition
framework that has never been proposed before for either face recognition or
any other visual recognition task. The proposed double-deep learning framework
includes a long short-term memory (LSTM) recurrent network whose inputs are
VGG-Face descriptions that are computed using a VGG-Very-Deep-16 convolutional
neural network (CNN). The VGG-16 network uses different face viewpoints
rendered from a full light field image, which are organised as a pseudo-video
sequence. A comprehensive set of experiments has been conducted with the
IST-EURECOM light field face database, for varied and challenging recognition
tasks. Results show that the proposed framework achieves superior face
recognition performance when compared to the state-of-the-art. | http://arxiv.org/abs/1805.10078v3 | http://arxiv.org/pdf/1805.10078v3.pdf | null | [
"Alireza Sepas-Moghaddam",
"Mohammad A. Haque",
"Paulo Lobato Correia",
"Kamal Nasrollahi",
"Thomas B. Moeslund",
"Fernando Pereira"
] | [
"Face Recognition"
] | 1,527,206,400,000 | [
{
"code_snippet_url": null,
"description": "",
"full_name": "VGG-16",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "VGG-16",
"source_title": "Very Deep Convolutional Networks for Large-Scale Image Recognition",
"source_url": "http://arxiv.org/abs/1409.1556v6"
}
] | 75,209 |
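The record above pairs per-viewpoint VGG-16 descriptors with an LSTM over the pseudo-video sequence of rendered viewpoints. Below is a minimal PyTorch sketch of that "double-deep" wiring; it uses a randomly initialised torchvision VGG-16 rather than the VGG-Face weights used in the paper, and the number of identities, hidden size, and sequence length are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGLSTMClassifier(nn.Module):
    """Per-frame VGG-16 descriptors followed by an LSTM over the viewpoint sequence."""
    def __init__(self, num_classes=50, hidden_size=256):
        super().__init__()
        backbone = vgg16()                     # VGG-Face weights would be loaded here instead
        self.features = backbone.features
        self.avgpool = backbone.avgpool
        # Drop the final 1000-way layer to obtain 4096-d descriptors per frame.
        self.descriptor = nn.Sequential(*list(backbone.classifier.children())[:-1])
        self.lstm = nn.LSTM(input_size=4096, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):                 # frames: (B, T, 3, 224, 224)
        b, t = frames.shape[:2]
        x = frames.reshape(b * t, *frames.shape[2:])
        x = self.avgpool(self.features(x)).flatten(1)
        x = self.descriptor(x).reshape(b, t, -1)      # (B, T, 4096)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])                         # one logit vector per light field

model = VGGLSTMClassifier()
logits = model(torch.randn(2, 7, 3, 224, 224))        # 7 rendered viewpoints per sample
print(logits.shape)                                   # torch.Size([2, 50])
```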
181,256 | https://paperswithcode.com/paper/multiple-time-series-ising-model-for | 1611.08088 | Multiple Time Series Ising Model for Financial Market Simulations | In this paper we propose an Ising model which simulates multiple financial
time series. Our model introduces the interaction which couples to spins of
other systems. Simulations from our model show that time series exhibit the
volatility clustering that is often observed in the real financial markets.
Furthermore we also find non-zero cross correlations between the volatilities
from our model. Thus our model can simulate stock markets where volatilities of
stocks are mutually correlated. | http://arxiv.org/abs/1611.08088v1 | http://arxiv.org/pdf/1611.08088v1.pdf | null | [] | [
"Time Series"
] | 1,479,945,600,000 | [] | 171,106 |
230,567 | https://paperswithcode.com/paper/a-survey-on-low-resource-neural-machine | 2107.04239 | A Survey on Low-Resource Neural Machine Translation | Neural approaches have achieved state-of-the-art accuracy on machine translation but suffer from the high cost of collecting large scale parallel data. Thus, a lot of research has been conducted for neural machine translation (NMT) with very limited parallel data, i.e., the low-resource setting. In this paper, we provide a survey for low-resource NMT and classify related works into three categories according to the auxiliary data they used: (1) exploiting monolingual data of source and/or target languages, (2) exploiting data from auxiliary languages, and (3) exploiting multi-modal data. We hope that our survey can help researchers to better understand this field and inspire them to design better algorithms, and help industry practitioners to choose appropriate algorithms for their applications. | https://arxiv.org/abs/2107.04239v1 | https://arxiv.org/pdf/2107.04239v1.pdf | null | [
"Rui Wang",
"Xu Tan",
"Renqian Luo",
"Tao Qin",
"Tie-Yan Liu"
] | [
"Low-Resource Neural Machine Translation",
"Machine Translation"
] | 1,625,788,800,000 | [] | 157,275 |
118,278 | https://paperswithcode.com/paper/unsupervised-representation-for-ehr-signals | 1910.01803 | Unsupervised Representation for EHR Signals and Codes as Patient Status Vector | Effective modeling of electronic health records presents many challenges as they contain large amounts of irregularity, most of which is due to the varying procedures and diagnoses a patient may have. Despite the recent progress in machine learning, unsupervised learning remains largely an open problem, especially in the healthcare domain. In this work, we present a two-step unsupervised representation learning scheme to summarize the multi-modal clinical time series consisting of signals and medical codes into a patient status vector. First, an auto-encoder step is used to reduce sparse medical codes and clinical time series into a distributed representation. Subsequently, the concatenation of the distributed representations is further fine-tuned using a forecasting task. We evaluate the usefulness of the representation on two downstream tasks: mortality and readmission. Our proposed method shows improved generalization performance for both short duration ICU visits and long duration ICU visits. | https://arxiv.org/abs/1910.01803v1 | https://arxiv.org/pdf/1910.01803v1.pdf | null | [
"Sajad Darabi",
"Mohammad Kachuee",
"Majid Sarrafzadeh"
] | [
"Representation Learning",
"Time Series"
] | 1,570,147,200,000 | [] | 140,956 |
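The abstract above first compresses sparse medical codes with an auto-encoder before fine-tuning the concatenated representation with a forecasting task. A minimal PyTorch sketch of that first step is shown below; the code-vocabulary size, layer widths, and the multi-hot input encoding are assumptions, and the signal branch and forecasting step are omitted.

```python
import torch
import torch.nn as nn

class CodeAutoencoder(nn.Module):
    """Compress a sparse multi-hot vector of medical codes into a dense representation."""
    def __init__(self, n_codes=2000, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_codes, 512), nn.ReLU(), nn.Linear(512, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, n_codes))

    def forward(self, x):
        z = self.encoder(x)                    # distributed representation of the codes
        return self.decoder(z), z

model = CodeAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()               # reconstruct the multi-hot code vector

codes = (torch.rand(32, 2000) < 0.01).float()  # toy batch of sparse code vectors
recon, z = model(codes)
opt.zero_grad()
loss_fn(recon, codes).backward()
opt.step()
print(z.shape)  # torch.Size([32, 128]); to be concatenated with the signal embedding
```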
75,823 | https://paperswithcode.com/paper/no-reference-color-image-quality-assessment | 1812.10695 | No-Reference Color Image Quality Assessment: From Entropy to Perceptual Quality | This paper presents a high-performance general-purpose no-reference (NR)
image quality assessment (IQA) method based on image entropy. The image
features are extracted from two domains. In the spatial domain, the mutual
information between the color channels and the two-dimensional entropy are
calculated. In the frequency domain, the two-dimensional entropy and the mutual
information of the filtered sub-band images are computed as the feature set of
the input color image. Then, with all the extracted features, the support
vector classifier (SVC) for distortion classification and support vector
regression (SVR) are utilized for the quality prediction, to obtain the final
quality assessment score. The proposed method, which we call entropy-based
no-reference image quality assessment (ENIQA), can assess the quality of
different categories of distorted images, and has a low complexity. The
proposed ENIQA method was assessed on the LIVE and TID2013 databases and showed
a superior performance. The experimental results confirmed that the proposed
ENIQA method has a high consistency of objective and subjective assessment on
color images, which indicates the good overall performance and generalization
ability of ENIQA. The source code is available on github
https://github.com/jacob6/ENIQA. | http://arxiv.org/abs/1812.10695v1 | http://arxiv.org/pdf/1812.10695v1.pdf | null | [
"Xiaoqiao Chen",
"Qingyi Zhang",
"Manhui Lin",
"Guangyi Yang",
"Chu He"
] | [
"Image Quality Assessment",
"No-Reference Image Quality Assessment"
] | 1,545,868,800,000 | [] | 27,746 |
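ENIQA, as summarised above, extracts entropy-based features and regresses a quality score with support vector machinery. The sketch below is a deliberately tiny stand-in: per-channel Shannon entropy on synthetic noisy images replaces the paper's full spatial/frequency feature set (entropies plus mutual information), and the "quality" target is derived from the injected noise level rather than subjective scores.

```python
import numpy as np
from skimage.measure import shannon_entropy
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def entropy_features(img_uint8):
    # Per-channel Shannon entropy as a tiny stand-in for ENIQA's feature set.
    return np.array([shannon_entropy(img_uint8[..., c]) for c in range(img_uint8.shape[-1])])

X, y = [], []
for _ in range(200):
    sigma = rng.uniform(0.0, 0.3)                        # synthetic distortion strength
    img = np.clip(0.5 + rng.normal(0.0, sigma, (32, 32, 3)), 0.0, 1.0)
    X.append(entropy_features((img * 255).astype(np.uint8)))
    y.append(1.0 - sigma)                                # pretend noisier means lower quality
X, y = np.stack(X), np.array(y)

quality_model = SVR(kernel="rbf").fit(X, y)              # SVR step of the pipeline
print(quality_model.predict(X[:3]))
```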
3,503 | https://paperswithcode.com/paper/understanding-and-improving-deep-neural | 1805.07020 | Understanding and Improving Deep Neural Network for Activity Recognition | Activity recognition has become a popular research branch in the field of
pervasive computing in recent years. A large body of experiments shows that
sensor-based activity data are characterized by variety, volume, and
velocity. Deep learning technology,
together with its various models, is one of the most effective ways of working
on activity data. Nevertheless, there is no clear understanding of why it
performs so well or how to make it more effective. In order to solve this
problem, we first applied a convolutional neural network to the Human Activity
Recognition Using Smart phones Data Set. Second, we visualized the
sensor-based activity data features extracted from the neural network.
We then carried out an in-depth analysis of these features, explored the
relationship between activity and features, and analyzed how Neural Networks
identify activity based on these features. After that, we extracted the
significant features related to the activities and sent the features to the
DNN-based fusion model, which improved the classification rate to 96.1%. This
is the first work to our knowledge that visualizes abstract sensor-based
activity data features. Based on the results, the method proposed in the paper
promises to realize accurate sensor-based activity
recognition. | http://arxiv.org/abs/1805.07020v1 | http://arxiv.org/pdf/1805.07020v1.pdf | null | [
"Li Xue",
"Si Xiandong",
"Nie Lanshun",
"Li Jiazhen",
"Ding Renjie",
"Zhan Dechen",
"Chu Dianhui"
] | [
"Activity Recognition",
"Classification",
"Human Activity Recognition"
] | 1,526,601,600,000 | [
{
"code_snippet_url": null,
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] | 82,791 |
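The last record applies a convolutional neural network to smartphone activity-recognition data. Below is a minimal PyTorch 1-D CNN over fixed-length inertial windows; the 9 sensor channels, 128-sample window, 6 activity classes, and layer sizes mirror a typical UCI-HAR-style setup but are assumptions, not the paper's exact architecture or its DNN-based fusion model.

```python
import torch
import torch.nn as nn

class HARConvNet(nn.Module):
    """Minimal 1-D CNN over fixed-length inertial sensor windows."""
    def __init__(self, in_channels=9, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, channels, window_length)
        return self.fc(self.net(x).squeeze(-1))

model = HARConvNet()
windows = torch.randn(16, 9, 128)               # 16 windows of 128 samples, 9 sensor channels
print(model(windows).shape)                     # torch.Size([16, 6])
```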