Dataset fields (name, type, observed value lengths):

paper_id           string, 10 to 10 characters
paper_url          string, 37 to 80 characters
title              string, 4 to 518 characters
abstract           string, 3 to 7.27k characters
arxiv_id           string, 9 to 16 characters
url_abs            string, 18 to 601 characters
url_pdf            string, 21 to 601 characters
aspect_tasks       sequence
aspect_methods     sequence
aspect_datasets    sequence

paper_id:        21mBprZ3au
paper_url:       https://paperswithcode.com/paper/the-variational-fair-autoencoder
title:           The Variational Fair Autoencoder
abstract:        We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the "Maximum Mean Discrepancy" (MMD) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.
arxiv_id:        1511.00830
url_abs:         http://arxiv.org/abs/1511.00830v6
url_pdf:         http://arxiv.org/pdf/1511.00830v6.pdf
aspect_tasks:    [ "Sentiment Analysis" ]
aspect_methods:  []
aspect_datasets: [ "Multi-Domain Sentiment Dataset" ]
paper_id:        mzmZPxHbHZ
paper_url:       https://paperswithcode.com/paper/breaking-the-softmax-bottleneck-a-high-rank
title:           Breaking the Softmax Bottleneck: A High-Rank RNN Language Model
abstract:        We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points in perplexity.
arxiv_id:        1711.03953
url_abs:         http://arxiv.org/abs/1711.03953v4
url_pdf:         http://arxiv.org/pdf/1711.03953v4.pdf
aspect_tasks:    [ "Language Modelling", "Word Embeddings" ]
aspect_methods:  [ "Sigmoid Activation", "Tanh Activation", "Dropout", "Temporal Activation Regularization", "Activation Regularization", "Weight Tying", "Embedding Dropout", "Variational Dropout", "LSTM", "DropConnect", "AWD-LSTM", "Mixture of Softmaxes", "Softmax" ]
aspect_datasets: [ "Penn Treebank (Word Level)", "WikiText-2" ]
paper_id:        4sgwBMIVZJ
paper_url:       https://paperswithcode.com/paper/partially-shuffling-the-training-data-to-1
title:           Partially Shuffling the Training Data to Improve Language Models
abstract:        Although SGD requires shuffling the training data between epochs, currently none of the word-level language modeling systems do this. Naively shuffling all sentences in the training data would not permit the model to learn inter-sentence dependencies. Here we present a method that partially shuffles the training data between epochs. This method makes each batch random, while keeping most sentence ordering intact. It achieves new state of the art results on word-level language modeling on both the Penn Treebank and WikiText-2 datasets.
arxiv_id:        1903.04167
url_abs:         http://arxiv.org/abs/1903.04167v2
url_pdf:         http://arxiv.org/pdf/1903.04167v2.pdf
aspect_tasks:    [ "Language Modelling", "Sentence Ordering" ]
aspect_methods:  [ "SGD" ]
aspect_datasets: [ "Penn Treebank (Word Level)", "WikiText-2" ]
paper_id:        wjL-ZZVuIm
paper_url:       https://paperswithcode.com/paper/dynamic-evaluation-of-neural-sequence-models
title:           Dynamic Evaluation of Neural Sequence Models
abstract:        We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
arxiv_id:        1709.07432
url_abs:         http://arxiv.org/abs/1709.07432v2
url_pdf:         http://arxiv.org/pdf/1709.07432v2.pdf
aspect_tasks:    [ "Language Modelling" ]
aspect_methods:  []
aspect_datasets: [ "Text8", "Penn Treebank (Word Level)", "WikiText-2", "Hutter Prize" ]
paper_id:        Afw7UcYbWU
paper_url:       https://paperswithcode.com/paper/direct-output-connection-for-a-high-rank
title:           Direct Output Connection for a High-Rank Language Model
abstract:        This paper proposes a state-of-the-art recurrent neural network (RNN) language model that combines probability distributions computed not only from a final RNN layer but also from middle layers. Our proposed method raises the expressive power of a language model based on the matrix factorization interpretation of language modeling introduced by Yang et al. (2018). The proposed method improves the current state-of-the-art language model and achieves the best score on the Penn Treebank and WikiText-2, which are the standard benchmark datasets. Moreover, we indicate our proposed method contributes to two application tasks: machine translation and headline generation. Our code is publicly available at: https://github.com/nttcslab-nlp/doc_lm.
arxiv_id:        1808.10143
url_abs:         http://arxiv.org/abs/1808.10143v2
url_pdf:         http://arxiv.org/pdf/1808.10143v2.pdf
aspect_tasks:    [ "Constituency Parsing", "Language Modelling", "Machine Translation" ]
aspect_methods:  []
aspect_datasets: [ "Penn Treebank (Word Level)", "WikiText-2", "Penn Treebank" ]
paper_id:        nCrJQdu1BQ
paper_url:       https://paperswithcode.com/paper/on-the-state-of-the-art-of-evaluation-in
title:           On the State of the Art of Evaluation in Neural Language Models
abstract:        Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
arxiv_id:        1707.05589
url_abs:         http://arxiv.org/abs/1707.05589v2
url_pdf:         http://arxiv.org/pdf/1707.05589v2.pdf
aspect_tasks:    [ "Language Modelling" ]
aspect_methods:  []
aspect_datasets: [ "WikiText-2" ]