|
{ |
|
"paper_id": "D19-1048", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:59:36.800119Z" |
|
}, |
|
"title": "Latent-Variable Generative Models for Data-Efficient Text Classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoan", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Chicago", |
|
"location": { |
|
"postCode": "60637", |
|
"settlement": "Chicago", |
|
"region": "IL", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "xiaoanding@uchicago.edu" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Toyota Technological Institute at Chicago", |
|
"location": { |
|
"postCode": "60637", |
|
"settlement": "Chicago", |
|
"region": "IL", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "kgimpel@ttic.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Generative classifiers offer potential advantages over their discriminative counterparts, namely in the areas of data efficiency, robustness to data shift and adversarial examples, and zero-shot learning (Ng and Jordan, 2002; Yogatama et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text classifiers by introducing discrete latent variables into the generative story, and explore several graphical model configurations. We parameterize the distributions using standard neural architectures used in conditional language modeling and perform learning by directly maximizing the log marginal likelihood via gradient-based optimization, which avoids the need to do expectation-maximization. We empirically characterize the performance of our models on six text classification datasets. The choice of where to include the latent variable has a significant impact on performance, with the strongest results obtained when using the latent variable as an auxiliary conditioning variable in the generation of the textual input. This model consistently outperforms both the generative and discriminative classifiers in small-data settings. We analyze our model by using it for controlled generation, finding that the latent variable captures interpretable properties of the data, even with very small training sets.", |
|
"pdf_parse": { |
|
"paper_id": "D19-1048", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Generative classifiers offer potential advantages over their discriminative counterparts, namely in the areas of data efficiency, robustness to data shift and adversarial examples, and zero-shot learning (Ng and Jordan, 2002; Yogatama et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text classifiers by introducing discrete latent variables into the generative story, and explore several graphical model configurations. We parameterize the distributions using standard neural architectures used in conditional language modeling and perform learning by directly maximizing the log marginal likelihood via gradient-based optimization, which avoids the need to do expectation-maximization. We empirically characterize the performance of our models on six text classification datasets. The choice of where to include the latent variable has a significant impact on performance, with the strongest results obtained when using the latent variable as an auxiliary conditioning variable in the generation of the textual input. This model consistently outperforms both the generative and discriminative classifiers in small-data settings. We analyze our model by using it for controlled generation, finding that the latent variable captures interpretable properties of the data, even with very small training sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The most widely-used neural network classifiers are discriminative, that is, they are trained to explicitly favor the gold standard label over others. The alternative is to design classifiers that are generative; these follow a generative story that includes predicting the label and then the data conditioned on the label. Discriminative classifiers are preferred because they generally outperform their generative counterparts on standard benchmarks. These benchmarks typically assume large annotated training sets, little mismatch between training and test distributions, relatively clean data, and a lack of adversarial examples (Zue et al., 1990; Marcus et al., 1993; Deng et al., 2009; Lin et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 633, |
|
"end": 651, |
|
"text": "(Zue et al., 1990;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 672, |
|
"text": "Marcus et al., 1993;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 691, |
|
"text": "Deng et al., 2009;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 709, |
|
"text": "Lin et al., 2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, when conditions are not ideal for discriminative classifiers, generative classifiers can actually perform better. Ng and Jordan (2002) showed theoretically that linear generative classifiers approach their asymptotic error rates more rapidly than discriminative ones. Based on this finding, Yogatama et al. (2017) empirically characterized the performance of RNN-based generative classifiers, showing advantages in sample complexity, zero-shot learning, and continual learning. Recent work in generative question answering models (Lewis and Fan, 2019) demonstrates better robustness to biased training data and adversarial testing data than state-of-the-art discriminative models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 143, |
|
"text": "Ng and Jordan (2002)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 322, |
|
"text": "Yogatama et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 539, |
|
"end": 560, |
|
"text": "(Lewis and Fan, 2019)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we focus on settings with small amounts of annotated data and improve generative text classifiers by introducing discrete latent variables into the generative story. Accordingly, the training objective is changed to log marginal likelihood of the data as we marginalize out the latent variables during learning. We parameterize the distributions with standard neural architectures used in conditional language models and include the latent variable by concatenating its embedding to the RNN hidden state before computing the softmax over words. While traditional latent variable learning in NLP uses the expectationmaximization (EM) algorithm (Dempster et al., 1977) , we instead simply perform direct optimization of the log marginal likelihood using gradientbased methods. At inference time, we similarly marginalize out the latent variables while maximizing over the label.", |
|
"cite_spans": [ |
|
{ |
|
"start": 658, |
|
"end": 681, |
|
"text": "(Dempster et al., 1977)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We characterize the performance of our latentvariable generative classifiers on six text classification datasets introduced by Zhang et al. (2015) . We observe that introducing latent variables leads to large and consistent performance gains in the small-data regime, though the benefits of adding latent variables reduce as the training set becomes larger.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 146, |
|
"text": "Zhang et al. (2015)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To better understand the modeling space of latent variable classifiers, we explore several graphical model configurations. Our experimental results demonstrate the importance of including a direct dependency between the label and the input in the model. We study the relationship between the label, latent, and input variables in our strongest latent generative classifier, finding that the label and latent capture complementary information about the input. Some information about the textual input is encoded in the latent variable to help with generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We analyze our latent generative model by generating samples when controlling the label and latent variables. Even with small training data, the samples capture the salient characteristics of the label space while also conforming to the values of the latent variable, some of which we find to be interpretable. While discriminative classifiers excel at separating examples according to labels, generative classifiers offer certain advantages in practical settings that benefit from a richer understanding of the data-generating process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We begin by defining our baseline generative and discriminative text classifiers for document classification. Our models are essentially the same as those from Yogatama et al. 2017; we describe them in detail here because our latent-variable models will extend them. 1 Our classifiers are trained on datasets D of annotated documents. Each instance x, y \u2208 D consists of a textual input x = {x 1 , x 2 , ..., x T }, where T is the length of the document, and a label y \u2208 Y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1 The main difference between our baselines and the models in Yogatama et al. (2017) are: (1) their discriminative classifier uses an LSTM with \"peephole connections\"; (2) they evaluate a label-based generative classifier (\"Independent LSTMs\") that uses a separate LSTM for each label. They also evaluate the model we described here, which they call \"Shared LSTMs\". Their Independent and Shared LSTMs perform similarly across training set sizes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 84, |
|
"text": "Yogatama et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The discriminative classifier is trained to maximize the conditional probability of labels given documents:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "x,y \u2208D log p(y | x). For our discriminative model, we encode a document x using an LSTM (Hochreiter and Schmidhuber, 1997) , and use the average of the LSTM hidden states as the document representation. The classifier is built by adding a softmax layer on top of the LSTM state average to get a probability distribution over labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 122, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
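
{

"text": "A minimal PyTorch-style sketch of this baseline (illustrative only, not the implementation used for the reported results; the class name and default dimensions are placeholders) embeds the tokens, runs a unidirectional LSTM, averages the hidden states, and applies a softmax layer over labels:\n\nimport torch\nimport torch.nn as nn\n\nclass DiscriminativeClassifier(nn.Module):\n    # hypothetical sketch: LSTM encoder, mean-pooled hidden states, softmax over labels\n    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden_dim=100):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, emb_dim)\n        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)\n        self.out = nn.Linear(hidden_dim, num_labels)\n\n    def forward(self, x):  # x: (batch, T) token ids\n        h, _ = self.lstm(self.embed(x))  # (batch, T, hidden_dim)\n        doc = h.mean(dim=1)  # average of the LSTM hidden states\n        return torch.log_softmax(self.out(doc), dim=-1)  # log p(y | x)\n\nTraining minimizes the negative log-probability of the gold label, i.e., the usual cross-entropy objective.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discriminative and Generative Text Classifiers",

"sec_num": "2"

},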
|
{ |
|
"text": "The generative classifier is trained to maximize the joint probability of documents and labels:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "x,y \u2208D log p(x, y). The generative classifier uses the following factorization:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(x, y) = p(x | y)p(y)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We parameterize log p(x | y) as a conditional LSTM language model using the standard sequential factorization:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "log p(x | y) = T t=1 log p(x t | x <t , y)", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We define a label embedding matrix", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "V Y \u2208 R d 1 \u00d7|Y| .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To predict the next word x t+1 , we concatenate the LSTM hidden state h t with the label embedding v y (a column of V Y ), and feed it to a softmax layer to get the probability distribution over the vocabulary. More details about the factorization and parameterization are discussed in Section 3. The label prior p(y) is acquired via maximum likelihood estimation and fixed during training of the remaining parameters. At inference time, the prediction is made by maximizing p(y | x) with respect to y for the discriminative classifier and maximizing p(x | y)p(y) for the generative classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative and Generative Text Classifiers", |
|
"sec_num": "2" |
|
}, |
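
{

"text": "The scoring used by the generative classifier can be sketched as follows (a simplified illustration rather than the exact implementation; log_p_x_given_y and the tensor shapes are our own naming). The label embedding is concatenated to every LSTM hidden state before the softmax over the vocabulary, and the per-token log-probabilities are summed:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass GenerativeClassifier(nn.Module):\n    # hypothetical sketch of the label-conditioned LSTM language model\n    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden_dim=100, label_dim=100):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, emb_dim)\n        self.label_embed = nn.Embedding(num_labels, label_dim)  # columns of V_Y\n        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)\n        self.out = nn.Linear(hidden_dim + label_dim, vocab_size)\n\n    def log_p_x_given_y(self, x, y):  # x: (batch, T) with <s> and </s>, y: (batch,)\n        h, _ = self.lstm(self.embed(x[:, :-1]))  # hidden states used to predict the next token\n        v_y = self.label_embed(y).unsqueeze(1).expand(-1, h.size(1), -1)\n        logits = self.out(torch.cat([h, v_y], dim=-1))  # softmax over the vocabulary\n        logp = F.log_softmax(logits, dim=-1)\n        return logp.gather(-1, x[:, 1:].unsqueeze(-1)).squeeze(-1).sum(dim=1)\n\nAt test time the predicted label is argmax_y [log p(x | y) + log p(y)], with log p(y) taken from the fixed empirical label distribution.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discriminative and Generative Text Classifiers",

"sec_num": "2"

},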
|
{ |
|
"text": "We now introduce discrete latent variables into the standard generative classifier as shown in Figure 1 . We refer to the latent-variable model as an auxiliary latent generative model, as we expect the latent variable to contain auxiliary information that can help with the generation of the input. Following the graphical model structure in Figure 1 (b), we factorize the joint probability p(x, y, c) as follows: We parameterize p \u0398 (x | c, y) as a conditional LSTM language model using the same factorization as above:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 103, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 350, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(x, y, c) = p \u0398 (x | c, y)p \u03a6 (c)p \u03a8 (y)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "log p \u0398 (x | c, y) = T t=1 log p \u0398 (x t | x <t , c, y) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where \u0398 is the set of parameters of the language model. As in the generative classifier, we use a label embedding matrix V Y . In addition, we define a latent variable embedding matrix", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "V C \u2208 R d 2 \u00d7|C|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where C is the set of values for the discrete latent variable. Also like the generative classifier, we use an LSTM to predict each word with a softmax layer:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "p \u0398 (x t | x <t , c, y) \u221d exp{u xt ([h t ; v y ; v c ]) + b xt } (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where h t is the hidden representation of x <t from the LSTM, v y and v c are the embeddings for the label and the latent variable, respectively, [u; v] denotes vertical concatenation, u xt is the output word embedding, and b xt is a bias parameter. The prior distribution of the latent variable is parameterized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "p \u03a6 (c) \u221d exp{w c v c + b c } (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where \u03a6 is the set of parameters for this distribution which includes the vector w c and scalar b c for each c.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As in the standard generative model, the label prior p \u03a8 (y) is acquired from the empirical label distribution in the training data and remains fixed during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Training. As is standard in latent-variable modeling, we train our models by maximizing the log marginal likelihood:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "max \u0398,\u03a6,V C ,V Y x,y \u2208D log c\u2208C p \u0398 (x | c, y)p \u03a6 (c)p \u03a8 (y) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In NLP, these sorts of optimization problems are traditionally solved with the EM algorithm. However, we instead directly optimize the above quantity using automatic differentiation. This is natural because we use softmax-transformed parameterizations; a more traditional parameterization would assign parameters directly to individual probabilities, which then requires constrained optimization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Inference. The prediction is made by marginalizing out the latent variables as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "y = argmax y\u2208Y c\u2208C p \u0398 (x | c, y)p \u03a6 (c)p \u03a8 (y) (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We experimented with other inference objectives and found similar results. More details can be found in Appendix C.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
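
{

"text": "To make the training and prediction recipes concrete, the sketch below (illustrative only; it assumes a model object exposing log_p_x_given_cy, log_prior_c, and log_prior_y corresponding to Eqs. (3)-(6)) enumerates the latent values explicitly, marginalizes them out with a logsumexp, and backpropagates through the log marginal likelihood of Eq. (7); the same marginal score, maximized over labels, implements the prediction rule of Eq. (8):\n\nimport torch\n\ndef neg_log_marginal(model, x, y_gold, num_latent):\n    # Eq. (7): -log sum_c p_Theta(x | c, y) p_Phi(c) p_Psi(y) for the gold labels\n    terms = []\n    for c in range(num_latent):\n        cc = torch.full_like(y_gold, c)\n        terms.append(model.log_p_x_given_cy(x, cc, y_gold)\n                     + model.log_prior_c(cc) + model.log_prior_y(y_gold))\n    return -torch.logsumexp(torch.stack(terms, dim=-1), dim=-1).mean()\n\ndef predict(model, x, num_labels, num_latent):\n    # Eq. (8): argmax_y sum_c p_Theta(x | c, y) p_Phi(c) p_Psi(y)\n    scores = []\n    for y in range(num_labels):\n        yy = torch.full((x.size(0),), y, dtype=torch.long)\n        terms = [model.log_p_x_given_cy(x, torch.full_like(yy, c), yy)\n                 + model.log_prior_c(torch.full_like(yy, c)) + model.log_prior_y(yy)\n                 for c in range(num_latent)]\n        scores.append(torch.logsumexp(torch.stack(terms, dim=-1), dim=-1))\n    return torch.stack(scores, dim=-1).argmax(dim=-1)\n\nThe loss is minimized with a gradient-based optimizer such as Adam; no E-step or constrained optimization is needed because all distributions are softmax-parameterized.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Latent-Variable Generative Classifiers",

"sec_num": "3"

},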
|
{ |
|
"text": "Differences with ensembles. Our latentvariable model resembles an ensemble of multiple generative classifiers, but there are two main differences. First, all parameters in the latent generative classifier are trained jointly, while a standard ensemble combines predictions from multiple, independently-trained models. Joint training leads to complementary information being captured by latent variable values, as shown in our analysis. Moreover, a standard ensemble will lead to far more parameters (10, 30, or 50 times as many in our experimental setup) since each generative classifier is a completely separate model. Our approach simply conditions on the embedding of the latent variable value and therefore does not add many parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Latent-Variable Generative Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We present our results on six publicly available text classification datasets introduced by Zhang et al. 2015, which include news categorization, sentiment analysis, question/answer topic classification, and article ontology classification. 2 To compare classifiers across training set sizes, we follow the setup of Yogatama et al. (2017) and construct multiple training sets by randomly sampling 5, 20, 100, 1k, 2k, 5k, and 10k instances per label from each dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 338, |
|
"text": "Yogatama et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In all experiments, the word embedding dimension and the LSTM hidden state dimension are set to 510 100. All LSTMs use one layer and are unidirectional. The label dimensionality of all generative classifiers is set to 100. We adopt the same parameter settings as Yogatama et al. (2017) to ensure the results are comparable. For the latent-variable generative classifiers, we choose 10 or 30 latent variable values with embeddings of dimensionality 10, 50, or 100. For optimization, we use Adam (Kingma and Ba, 2015) with learning rate 0.001. We do early stopping by evaluating the classification accuracy on the development set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 285, |
|
"text": "Yogatama et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Due to memory limitations and computational costs, we truncate the length of the input sequences to 80 tokens before adding <s> and </s> to indicate the start and end of the document. Though truncation decreases the performance of the models, all models use the same truncated inputs, so the comparison is still fair. 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To confirm we have built strong baselines, we first compare our implementation of the generative and discriminative classifiers to prior work. Our results in Appendix A show that our baselines are comparable to those of Yogatama et al. (2017) . Figure 2 shows results for the discriminative, generative, and latent generative classifiers in terms of data efficiency. Data efficiency is measured by comparing the accuracies of the classifiers when trained across varying sizes of training sets. Numerical comparisons on two datasets are shown in Table 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 242, |
|
"text": "Yogatama et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 253, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 552, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "With small training sets, the latent generative classifier consistently outperforms both the generative and discriminative classifiers. When the generative classifier is better than the discriminative one, as in DBpedia, the latent classifier resembles the generative classifier. When the discriminative classifier is better, as in Yelp Polarity, the latent classifier patterns after the discriminative classifier. However, when the number of training examples is in the range of approximately 5,000 to 10,000 per class, the discriminative classifier tends to perform best.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "With small training sets, the generative classifier outperforms the discriminative one in most cases except the very smallest training sets. For example, in the Yelp Review Polarity dataset, the first two points are from classifiers trained with only 10 and 40 instances in total. The other case in which generative classifiers underperform is when training over large training sets, which agrees with the theoretical and empirical findings in prior work (Yogatama et al., 2017; Ng and Jordan, 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 455, |
|
"end": 478, |
|
"text": "(Yogatama et al., 2017;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 499, |
|
"text": "Ng and Jordan, 2002)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "There are multiple choices to factorize the joint probability of the variables x, y, and c, which correspond to different graphical models. Here we consider other graphical model structures, namely those shown in Figure 3 . We refer to the model in Figure 3 (b) as the \"joint\" latent generative classifier since it uses the latent variable to jointly generate x and y. We refer to the model in Figure 3 (c) as the \"middle\" latent generative classifier as the latent variable separates the textual input from the label. We use similar parameterizations for these models as for the auxiliary latent classifier, with conditional language models to generate x where the embedding of the latent variable is concatenated to the hidden state as in Section 3. Figure 4 shows the comparison of the standard and the three latent generative classifiers on Yelp Review Polarity, AGNews, and DBpedia. 4 We observe that the auxiliary model consistently performs best, while the other two latent generative classifiers do not consistently improve over the standard generative classifier. On DBpedia, we see surprisingly poor performance when adding latent variables suboptimally. This suggests that the choice of where to include latent variables has a significant impact on performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 888, |
|
"end": 889, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 221, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 257, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 402, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 760, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Graphical Model Structure", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Dependency between label and input variable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Graphical Model Structure", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We observe that the most prominent difference between the auxiliary and the other two latentvariable models is that the label variable y is directly linked to the input variable x in the auxiliary model, which is also the case in the standard generative model. In order to verify the importance of this direct dependency between the label and input, we create a new latent-variable model by adding a directed edge between y and x to the middle latent generative model. We refer to this model as the \"hierarchical\" latent generative classifier, which is shown in Figure 3(d) . The results in Table 2 show the performance gains after adding this edge, which are all positive and sometimes very large. The resulting hierarchical model is very close in performance to the auxiliary model, which is unsurprising because these two models differ only in the presence of the edge from y to c.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 562, |
|
"end": 573, |
|
"text": "Figure 3(d)", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 598, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Graphical Model Structure", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We conduct a comparison to demonstrate that the performance gains are due to the latent-variable structure instead of an increased number of parameters when adding the latent variables. 5 For the latent generative classifier, we choose 10 latent variable values with embeddings of di- Table 3 : Accuracy comparison of standard generative (Gen.) and latent (Lat.) classifiers under earlier experimental configurations and parameter-comparison configurations (PC). When controlling for the number of parameters, the latent classifier still outperforms the standard generative classifier, which indicates the performance gains are due to the latent variables instead of an increased number of parameters. mensionality 10, and a label dimensionality of 100 (Lat. PC in Table 3 ). For the standard generative classifier, we choose a label dimensionality of 110 (Gen. PC in Table 3 ). So, the numbers of parameters are comparable, since we ensure the same number of parameters in the \"output\" word embeddings in the softmax layer of the language model, which is the decision that most strongly affects the number of parameters. Table 3 shows the results with different configurations, including the choices mentioned above as well as the results from earlier configurations mentioned in the paper. We observe that the latent generative classifiers still perform better in terms of data efficiency, which shows that the latentvariable structure accounts for the performance gains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 187, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 292, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 772, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 875, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1122, |
|
"end": 1129, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Latent Variables", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The results reported before are evaluated on the classifiers trained by directly maximizing the log marginal likelihood via gradient-based optimiza- tion. In addition, we train our latent generative classifiers with the EM algorithm (Salakhutdinov et al., 2003) . More training details can be found in Appendix B.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 261, |
|
"text": "(Salakhutdinov et al., 2003)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning via Expectation-Maximization", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "To speed convergence, we use a mini-batch version of EM, updating the parameters after each mini-batch. Our results in Table 4 show that the direct approach and the EM algorithm have similar performance in terms of classification accuracy and convergence speed in optimizing the parameters of our latent models. Similar trends appear for the other datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 126, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning via Expectation-Maximization", |
|
"sec_num": "5.4" |
|
}, |
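
{

"text": "For reference, a generic mini-batch EM step for this model can be sketched as follows (an illustrative sketch under our own interface names, not necessarily the exact procedure of Appendix B). The E-step computes posterior responsibilities over latent values under the current parameters; the M-step takes one gradient step on the expected complete-data log-likelihood:\n\nimport torch\n\ndef em_minibatch_step(model, optimizer, x, y_gold, num_latent):\n    # E-step: responsibilities q(c) = p(c | x, y) under the current parameters\n    with torch.no_grad():\n        logs = torch.stack([model.log_p_x_given_cy(x, torch.full_like(y_gold, c), y_gold)\n                            + model.log_prior_c(torch.full_like(y_gold, c))\n                            for c in range(num_latent)], dim=-1)\n        q = torch.softmax(logs, dim=-1)  # (batch, |C|)\n    # M-step: one gradient step on the expected complete-data log-likelihood\n    optimizer.zero_grad()\n    expected = torch.stack([model.log_p_x_given_cy(x, torch.full_like(y_gold, c), y_gold)\n                            + model.log_prior_c(torch.full_like(y_gold, c))\n                            + model.log_prior_y(y_gold)\n                            for c in range(num_latent)], dim=-1)\n    loss = -(q * expected).sum(dim=-1).mean()\n    loss.backward()\n    optimizer.step()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning via Expectation-Maximization",

"sec_num": "5.4"

},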
|
{ |
|
"text": "We take the strongest latent-variable model, the auxiliary latent generative classifier, and analyze the relationship among the latent, input, and label variables. We use the AGNews dataset, which contains 4 categories: world, sports, business, and sci/tech. The classifier we analyze has 10 values for the latent variable and is trained on a training set containing 1k instances per class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation of Latent Variables", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We first investigate the relationship between the latent variable and the label by counting cooccurrences. For each instance in the development set, we calculate the posterior probability distribution over the latent variable, and pick the value with the highest probability as the preferred latent variable value for that instance. This is reasonable since in our trained model, the posterior distribution over latent variable values is peaked. Then we categorize the data by their preferred latent variable values and count the gold standard labels in each group. We observe that the labels are nearly uniformly distributed in each latent variable value, suggesting that the latent variables are not obviously being used to encode information about the label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation of Latent Variables", |
|
"sec_num": "6.1" |
|
}, |
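
{

"text": "This analysis can be sketched as follows (illustrative only; the method names follow the earlier sketches and are our own). The posterior p(c | x, y) is proportional to p_\u0398(x | c, y) p_\u03a6(c), and the preferred value is its argmax:\n\nimport torch\n\ndef preferred_latent_value(model, x, y, num_latent):\n    # posterior over latent values: p(c | x, y) is proportional to p_Theta(x | c, y) p_Phi(c)\n    logs = torch.stack([model.log_p_x_given_cy(x, torch.full_like(y, c), y)\n                        + model.log_prior_c(torch.full_like(y, c))\n                        for c in range(num_latent)], dim=-1)\n    posterior = torch.softmax(logs, dim=-1)  # peaked in our trained models\n    return posterior.argmax(dim=-1), posterior",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Interpretation of Latent Variables",

"sec_num": "6.1"

},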
|
{ |
|
"text": "Thus, we hypothesize there should be information other than that pertaining to the label that causes the data to cluster into different latent variable values. We study the differences of the input texts among the 10 clusters by counting frequent words, manually scanning through instances, and looking for high-level similarities and differences. We report our manual labeling for the latent variable values in Table 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 419, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Interpretation of Latent Variables", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "For example, value 1 is mostly associated with future and progressive tenses; the words \"will\", \"next\", and \"new\" appear frequently. Value 2 tends to contain past and perfect verb tenses (the phrases \"has been\" and \"have been\" appear frequently). Value 3 contains region names like \"VANCOUVER\", \"LONDON\", and \"New Brunswick\", while value 7 contains countryoriented terms like \"Indian\", \"Russian\", \"North Korea\", and \"Ireland\". Our choice of only 10 latent variable values causes them to capture the coarse-grained patterns we observe here. It is possible that more fine-grained differences would appear with a larger number of values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation of Latent Variables", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Another advantage of generative models is that they can be used to generate data in order to better understand what they have learned, especially in seeking to understand latent variables. We use our auxiliary latent generative classifier to generate multiple samples by setting the latent variable and the label. Instead of the soft mixture of discrete latent variable values that is used in classification (since we marginalize over the latent variable at test time), here we choose a single latent variable value when generating a textual sample.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation with Latent Variables", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "To increase generation diversity, we use temperature-based sampling when choosing the next word, where higher temperature leads to higher variety and more noise. We set the temperature to 0.6. Note that the latent-variable model here is trained on only 4000 instances (1k for each label) from AGNews, so the generated samples do suffer from the small size of data used in training the language model. Table 6 shows some generated examples. We observe that different combinations of the latent variable and label lead to generations that comport with both the labels and our interpretations of the latent variable values.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 401, |
|
"end": 408, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generation with Latent Variables", |
|
"sec_num": "6.2" |
|
}, |
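
{

"text": "A sketch of this sampling procedure (illustrative only; next_word_logits is an assumed helper that returns the logits of Eq. (5) for the last position, and the special-token ids are placeholders):\n\nimport torch\n\ndef sample_with_temperature(model, y, c, max_len=80, temperature=0.6, bos_id=1, eos_id=2):\n    # sample each next word from softmax(logits / temperature), with the label y and latent value c fixed\n    x = torch.tensor([[bos_id]])\n    for _ in range(max_len):\n        logits = model.next_word_logits(x, torch.tensor([c]), torch.tensor([y]))  # (1, vocab)\n        probs = torch.softmax(logits / temperature, dim=-1)\n        nxt = torch.multinomial(probs, num_samples=1)\n        x = torch.cat([x, nxt], dim=1)\n        if nxt.item() == eos_id:\n            break\n    return x.squeeze(0).tolist()\n\nLower temperatures concentrate the distribution on high-probability words; the value 0.6 used here trades off variety against noise.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generation with Latent Variables",

"sec_num": "6.2"

},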
|
{ |
|
"text": "We speculate that the reason our generative classifiers perform well in the data-efficient setting is that they are better able to understand the data via language modeling rather than directly optimizing the classification objective. Our generated samples testify to the ability of generative classifiers to model the underlying data distribution even with only 4000 instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation with Latent Variables", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Supervised Generative Models. Generative models have traditionally been used in supervised settings for many NLP tasks, including naive Bayes and other models for text classification (Maron, 1961; Yogatama et al., 2017) , Markov models for sequence labeling (Church, 1988; Bikel et al., 1999; Brants, 2000; Zhou and Su, 2002) , and probabilistic models for parsing (Magerrnan and Marcus, 1991; Black et al., 1993; Eisner, 1996; Collins, 1997; Dyer et al., 2016) . Recent work in generative models for question answering (Lewis and Fan, 2019) learns to generate questions instead of directly penalizing prediction errors, which encourages the model to better understand the input data. Our work is directly inspired by that of Yogatama et al. (2017) , who build RNN-based generative text classifiers and show scenarios where they can be empirically useful.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 196, |
|
"text": "(Maron, 1961;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 219, |
|
"text": "Yogatama et al., 2017)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 272, |
|
"text": "(Church, 1988;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 292, |
|
"text": "Bikel et al., 1999;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 306, |
|
"text": "Brants, 2000;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 325, |
|
"text": "Zhou and Su, 2002)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 393, |
|
"text": "(Magerrnan and Marcus, 1991;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 413, |
|
"text": "Black et al., 1993;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 427, |
|
"text": "Eisner, 1996;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 442, |
|
"text": "Collins, 1997;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 461, |
|
"text": "Dyer et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 541, |
|
"text": "(Lewis and Fan, 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 748, |
|
"text": "Yogatama et al. (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Text Classification. Traditionally, linear classifiers (McCallum and Nigam, 1998; Joachims, 1998; Fan et al., 2008) have been used for text classification. Recent work has scaled up text classification to larger datasets with models based on logistic regression (Joulin et al., 2017) , convolutional neural networks (Kim, 2014; Zhang et al., 2015; Conneau et al., 2017) , and recurrent neural networks (Xiao and Cho, 2016; Yogatama et Table 5 : Latent variable values (\"id\"), our manually-defined descriptions, and examples of instances associated to them. Boldface is used to highlight cues to our labeling. We use the term \"mixture\" when we did not find clear signals to interpret the latent variable value.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 81, |
|
"text": "(McCallum and Nigam, 1998;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 97, |
|
"text": "Joachims, 1998;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 115, |
|
"text": "Fan et al., 2008)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 283, |
|
"text": "(Joulin et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 327, |
|
"text": "(Kim, 2014;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 347, |
|
"text": "Zhang et al., 2015;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 369, |
|
"text": "Conneau et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 422, |
|
"text": "(Xiao and Cho, 2016;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 434, |
|
"text": "Yogatama et", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 442, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Latent variable id = 3: region names, locations world BEIJING ( Reuters ) -Oklahoma supporters unemployment claims that he plans to trying to restore access next season 's truce by ruling , saying a major parliament . sport", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The Dallas Cowboys today continued advantage today with Miami and the Hurricanes had to get the big rotation for the first time this year . business Las Vegas took one more high-stepping kick across the pond as casino operator Caesars Entertainment Inc . sci/tech SAN FRANCISCO -Sun Microsystems on Monday will surely offer the deal to sell up pioneer members into two years and archiving . Latent variable id = 6: numbers, money-related world An Israeli helicopter gunship fired a missile among $ 5 million in to Prime Minister Ariel Sharon on the streets of U.S. warming may not be short-lived . sport On Wednesday it would win the disgruntled one of the season opener in a # 36 ; 8.75 billion of World Cup final day for second . business Reuters -U.S. drug company Biogen Idec is considering an all-share bid of more than 8.5 billion euros ( # 36 ; 10.6 billion ) for Irish peer Elan , a newspaper reported on Sunday . sci/tech The JVC Everio GZ-MC100 ( $ 1199.95 ) and GZ-MC200 ( $ 1299.95 ) will use 4GB Microdrive cards , which are removable hard drives measuring 1.5 inches square , but will also lost vital \" fans to recently over . Latent variable id = 10: symbols, links world A German court is set to hear all its secular oil , and western Kerik in Table 6 : Generated examples by controlling the latent variables and labels (world, sport, business, sci/tech) with our latent classifier trained on a small subset of the AGNews dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1260, |
|
"end": 1267, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
|
{ |
|
"text": "Latent-variable Models. Latent variables have been widely used in both generative and discriminative models to learn rich structure from data Klein, 2007, 2008; Blunsom et al., 2008; Yu and Joachims, 2009; Morency et al., 2008) . Recent work in neural networks has shown that introducing latent variables leads to higher representational capacity (Kingma and Welling, 2014; Chung et al., 2015; Burda et al., 2016; Ji et al., 2016) . However, unlike variational autoencoders (Kingma and Ba, 2015) and related work that use continuous latent variables, our model is more similar to recent efforts that combine neural architectures with discrete latent variables and end-to-end training (Ji et al., 2016; Kim et al., 2017; Kong et al., 2017; Chen and Gimpel, 2018; Wiseman et al., 2018, inter alia) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 160, |
|
"text": "Klein, 2007, 2008;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 182, |
|
"text": "Blunsom et al., 2008;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 205, |
|
"text": "Yu and Joachims, 2009;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 227, |
|
"text": "Morency et al., 2008)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 373, |
|
"text": "(Kingma and Welling, 2014;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 393, |
|
"text": "Chung et al., 2015;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 413, |
|
"text": "Burda et al., 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 430, |
|
"text": "Ji et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 684, |
|
"end": 701, |
|
"text": "(Ji et al., 2016;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 719, |
|
"text": "Kim et al., 2017;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 720, |
|
"end": 738, |
|
"text": "Kong et al., 2017;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 761, |
|
"text": "Chen and Gimpel, 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 795, |
|
"text": "Wiseman et al., 2018, inter alia)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "An alternative solution to the small-data setting is to use language representations pretrained on large, unannotated datasets (Mikolov et al., 2013; Pennington et al., 2014; Devlin et al., 2019) . In other experiments not reported here, we found that using pretrained word embeddings leads to larger performance improvements for the discriminative classifiers than the generative ones.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 149, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 174, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 195, |
|
"text": "Devlin et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Future work will explore the performance of latent generative classifiers in other challenging experimental conditions, including testing robustness to data shift and adversarial examples as well as zero-shot learning. Another thread of future work is to explore the performance of discriminative models with latent variables, and investigate combining pretrained representations with both generative and discriminative classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We focused in this paper on improving the data efficiency of generative text classifiers by introducing discrete latent variables into the generative story. Our experimental results demonstrate that, with small annotated training data, latent generative classifiers have larger and more stable performance gains over discriminative classifiers than their standard generative counterparts. Analysis reveals interpretable latent variable values and generated samples, even with very small training sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "A more detailed dataset description is in Appendix E.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In other experiments, we compared performance with different truncation limits across training set sizes, finding the trends to be consistent with those presented here.5 Results5.1 Data Efficiency", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similar trends are observed for all datasets, so we only show three for brevity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results in the preceding sections use the models with configurations tuned on the development sets. We follow the practice ofYogatama et al. (2017) and fix label dimensionality to 100, as described in Section 4.2. The only tuned hyperparameters are the number of latent variable values and the dimensions of their embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Lingyu Gao, Qingming Tang, and Lifu Tu for helpful discussions, Michael Maire and Janos Simon for their useful feedback, the anonymous reviewers for their comments that improved this paper, and Google for a faculty research award to K. Gimpel that partially supported this research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "An algorithm that learns what's in a name", |
|
"authors": [ |
|
{

"first": "Daniel",

"middle": [

"M"

],

"last": "Bikel",

"suffix": ""

},

{

"first": "Richard",

"middle": [],

"last": "Schwartz",

"suffix": ""

},

{

"first": "Ralph",

"middle": [

"M"

],

"last": "Weischedel",

"suffix": ""

}
|
], |
|
"year": 1999, |
|
"venue": "Machine learning", |
|
"volume": "34", |
|
"issue": "1-3", |
|
"pages": "211--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what's in a name. Machine learning, 34(1-3):211-231.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Towards history-based grammars: Using richer models for probabilistic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ezra", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafrerty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Magerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "31st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--37", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/981574.981579" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ezra Black, Fred Jelinek, John Lafrerty, David M. Magerman, Robert Mercer, and Salim Roukos. 1993. Towards history-based grammars: Using richer models for probabilistic parsing. In 31st An- nual Meeting of the Association for Computational Linguistics, pages 31-37, Columbus, Ohio, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A discriminative latent variable model for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "200--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statisti- cal machine translation. In Proceedings of ACL-08: HLT, pages 200-208, Columbus, Ohio. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "TnT -a statistical partof-speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Sixth Applied Natural Language Processing Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "224--231", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/974147.974178" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Brants. 2000. TnT -a statistical part- of-speech tagger. In Sixth Applied Natural Lan- guage Processing Conference, pages 224-231, Seat- tle, Washington, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Importance weighted autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Burda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Grosse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. 2016. Importance weighted autoencoders. In Pro- ceedings of International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Smaller text classifiers with discriminative cluster embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Mingda", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "739--745", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2116" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mingda Chen and Kevin Gimpel. 2018. Smaller text classifiers with discriminative cluster embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 2 (Short Papers), pages 739-745, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A recurrent latent variable model for sequential data", |
|
"authors": [ |
|
{ |
|
"first": "Junyoung", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Kastner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Dinh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kratarth", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{

"first": "Aaron",

"middle": [

"C"

],

"last": "Courville",

"suffix": ""

},

{

"first": "Yoshua",

"middle": [],

"last": "Bengio",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2980--2988", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Advances in neural information processing sys- tems, pages 2980-2988.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A stochastic parts program and noun phrase parser for unrestricted text", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"Ward" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Second Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "136--143", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/974235.974260" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth Ward Church. 1988. A stochastic parts pro- gram and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing, pages 136-143, Austin, Texas, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Three generative, lexicalised models for statistical parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--23", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/976909.979620" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In 35th Annual Meet- ing of the Association for Computational Linguis- tics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 16-23, Madrid, Spain. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Very deep convolutional networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1107--1116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2017. Very deep convolutional net- works for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1107-1116, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Maximum likelihood from incomplete data via the EM algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Arthur", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Dempster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Laird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Journal of the Royal Statistical Society: Series B (Methodological)", |
|
"volume": "39", |
|
"issue": "1", |
|
"pages": "1--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arthur P. Dempster, Nan M. Laird, and Donald B. Ru- bin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Sta- tistical Society: Series B (Methodological), 39(1):1- 22.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "ImageNet: A large-scale hierarchical image database", |
|
"authors": [ |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "2009 IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "248--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hier- archical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "199--209", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1024" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, Califor- nia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Three new probabilistic models for dependency parsing: An exploration", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "The 16th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COL- ING 1996 Volume 1: The 16th International Confer- ence on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "LIBLINEAR: A library for large linear classification", |
|
"authors": [ |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Rong-En Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cho-Jui", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang-Rui", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Jen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of machine learning research", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1871--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of machine learning research, 9(Aug):1871-1874.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A latent variable recurrent neural network for discourse-driven language models", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "332--342", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1037" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji, Gholamreza Haffari, and Jacob Eisen- stein. 2016. A latent variable recurrent neural net- work for discourse-driven language models. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 332-342, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Text categorization with support vector machines: Learning with many relevant features", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "European conference on machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "137--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many rel- evant features. In European conference on machine learning, pages 137-142. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "427--431", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1746--1751", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1181" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Structured attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Denton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luong", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim, Carl Denton, Luong Hoang, and Alexan- der M. Rush. 2017. Structured attention net- works. In Proceedings of International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Represen- tations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Autoencoding variational Bayes", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto- encoding variational Bayes. In Proceedings of Inter- national Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Segmental recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Lingpeng", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2017. Segmental recurrent neural networks. In Proceed- ings of International Conference on Learning Rep- resentations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Generative question answering: Learning to answer the whole question", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis and Angela Fan. 2019. Generative ques- tion answering: Learning to answer the whole ques- tion. In Proceedings of International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Microsoft COCO: Common objects in context", |
|
"authors": [ |
|
{ |
|
"first": "Tsung-Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Maire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Hays", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deva", |
|
"middle": [], |
|
"last": "Ramanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Lawrence", |
|
"middle": [], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "European conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--755", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European confer- ence on computer vision, pages 740-755. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Pearl: A probabilistic chart parser", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Magerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Fifth Conference of the European Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Magerrnan and Mitchell P. Marcus. 1991. Pearl: A probabilistic chart parser. In Fifth Confer- ence of the European Chapter of the Association for Computational Linguistics, Berlin, Germany. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Building a large annotated corpus of English: The Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Automatic indexing: An experimental inquiry", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Maron", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1961, |
|
"venue": "J. ACM", |
|
"volume": "8", |
|
"issue": "3", |
|
"pages": "404--417", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/321075.321084" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. E. Maron. 1961. Automatic indexing: An experi- mental inquiry. J. ACM, 8(3):404-417.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A comparison of event models for naive Bayes text classification", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kamal", |
|
"middle": [], |
|
"last": "Nigam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "AAAI-98 Workshop on Learning for Text Categorization", |
|
"volume": "752", |
|
"issue": "", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum and Kamal Nigam. 1998. A com- parison of event models for naive Bayes text classi- fication. In AAAI-98 Workshop on Learning for Text Categorization, volume 752, pages 41-48.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Modeling Latent-Dynamic in Shallow Parsing: A Latent Conditional Model with Improved Inference", |
|
"authors": [ |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Okanoharay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "The 22nd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis-Philippe Morency, Xu Sun, Daisuke Okanoharay, and Jun'ichi Tsujii. 2008. Mod- eling Latent-Dynamic in Shallow Parsing: A Latent Conditional Model with Improved Inference. In The 22nd International Conference on Computational Linguistics (COLING 2008), Manchester, UK.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "841--848", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Y. Ng and Michael I. Jordan. 2002. On dis- criminative vs. generative classifiers: A compari- son of logistic regression and naive Bayes. In Ad- vances in neural information processing systems, pages 841-848.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Improved inference for unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "404--411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved infer- ence for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computa- tional Linguistics; Proceedings of the Main Confer- ence, pages 404-411, Rochester, New York. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Discriminative log-linear grammars with latent variables", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1153--1160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov and Dan Klein. 2008. Discriminative log-linear grammars with latent variables. In Ad- vances in neural information processing systems, pages 1153-1160.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Relationship between gradient and EM steps in latent variable models", |
|
"authors": [ |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Roweis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. 2003. Relationship between gradient and EM steps in latent variable models.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Learning neural templates for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3174--3187", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1356" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text genera- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3174-3187, Brussels, Belgium. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Efficient character-level document classification by combining convolution and recurrent layers", |
|
"authors": [ |
|
{ |
|
"first": "Yijun", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1602.00367" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combin- ing convolution and recurrent layers. arXiv preprint arXiv:1602.00367.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Generative and discriminative text classification with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1703.01898" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dani Yogatama, Chris Dyer, Wang Ling, and Phil Blun- som. 2017. Generative and discriminative text clas- sification with recurrent neural networks. arXiv preprint arXiv:1703.01898.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Learning structural SVMs with latent variables", |
|
"authors": [ |
|
{ |
|
"first": "Chun-Nam John", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ICML", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chun-Nam John Yu and Thorsten Joachims. 2009. Learning structural SVMs with latent variables. In ICML, volume 2, page 5.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Character-level convolutional networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "649--657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Proceedings of the 28th International Conference on Neural Information Processing Sys- tems -Volume 1, NIPS'15, pages 649-657, Cam- bridge, MA, USA. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Named entity recognition using an HMM-based chunk tagger", |
|
"authors": [ |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "473--480", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073163" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "GuoDong Zhou and Jian Su. 2002. Named entity recognition using an HMM-based chunk tagger. In Proceedings of the 40th Annual Meeting of the As- sociation for Computational Linguistics, pages 473- 480, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Speech database development at MIT: TIMIT and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Victor Zue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Seneff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Speech communication", |
|
"volume": "9", |
|
"issue": "4", |
|
"pages": "351--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Zue, Stephanie Seneff, and James Glass. 1990. Speech database development at MIT: TIMIT and beyond. Speech communication, 9(4):351-356.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Graphical models of (a) standard generative classifier and (b) auxiliary latent generative classifier." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Comparison of classification accuracy of the discriminative (Disc.), standard generative (Gen.), and latent generative (Lat.) classifiers training across training set sizes." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Graphical models of (a) auxiliary, (b) joint, (c) middle, and (d) hierarchical latent generative classifiers." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Fallujah . & lt ; A HREF = \" http : / / www.investor.reuters.com / FullQuote.aspx ? ticker = Agency target = Army ... sport White Sox to an overpowering 49-0 victory over The world championship game . & lt ; br & gt ; & lt ; br & gt ; Comcast SportsNet business NEW YORK ( Reuters ) -U.S. stocks climbed on Monday , with a steep decline in commodity prices and lower crude oil dented shares of Alcoa Inc . & lt ; A HREF = \" http : / / www.investor.reuters.com / FullQuote.aspx ? ticker = GDT.N target = / stocks / quickinfo / fullquote \" & gt ; & lt ; / A & gt ;. sci/tech Spyware problems introduced a radio frequency code Thursday . & lt ; FONT face = \" verdana , MS Sans Serif , arial , helvetica \" size = \" -2 \" color = \" # 666666 \" & gt ; & lt ; B & gt ; -washingtonpost.com & lt ; / B & gt ; & lt" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">\u2206(Lat., Gen.)</td><td colspan=\"2\">\u2206(Lat., Disc.)</td></tr><tr><td/><td colspan=\"4\">AGNews DBpedia AGNews DBpedia</td></tr><tr><td>5</td><td>+12.3</td><td>+3.3</td><td>+7.2</td><td>+34.0</td></tr><tr><td>20</td><td>+23.5</td><td>+3.3</td><td>+17.7</td><td>+41.8</td></tr><tr><td>100</td><td>+9.8</td><td>+1.8</td><td>+16.0</td><td>+17.5</td></tr><tr><td>1k</td><td>+2.0</td><td>+0.9</td><td>+8.0</td><td>+0.0</td></tr><tr><td>all</td><td>+0.1</td><td>-0.4</td><td>+0.3</td><td>-2.4</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "Table 1: \u2206(Lat., Gen.): change in accuracy when moving from generative to latent generative classifier; \u2206(Lat., Disc.): change in accuracy when moving from discriminative to latent generative classifier. The first column shows the number of training instances per class." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Changes in accuracy when adding a directed edge from the label to the input, i.e., the improvement in accuracy when moving from the middle to the hierarchical latent generative classifier. Each column shows a different number of training instances per class." |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Comparison of the classification accuracy and convergence speed of the classifiers trained with direct optimization (Direct) of the log marginal likelihood and the EM algorithm (EM). The numbers inside the parentheses are the numbers of epochs required to reach the classification accuracies listed outside the parentheses." |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>id</td><td>description</td><td>examples</td></tr><tr><td colspan=\"3\">1 ... 4 future/progressive tenses mixture</td></tr><tr><td>5</td><td>abbreviations</td><td>St.</td></tr><tr><td/><td/><td>al.,</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "Commission is likely to follow opinion in the U.S. on the merger suit ... ... to increase computer software exports is beginning to show results ...2past/perfect tense A screensaver targeting spam-related websites appears to have been too successful . Universal has signed a handful of artists to a digital-only record label . ..3 region names, locationsNewcastle manager Bobby Robson ... relieved of his duties ... Newcastle announced ... ABUJA ... its militias in Darfur before they would sign ... Louis advanced to the N.L. championship series for the third time in five years ... UAL ( UALAQ.OB : OTC BB -news -research ) ... ( UAIRQ.OB : OTC BB ... 6 numbers, money-related ...challenge larger rivals in the fast-growing 2.1 billion-a-year sleep aid market .... ... to a $ 25,000 prize , and more importantly , into the history books ... 7 dates ... an Egyptian diplomat said on Friday, and the abduction of ... earlier this month . ... expected Monday or Tuesday , ... doctors and nurses off for the holiday weekend ... 8 country-oriented terms Rwandan President ... in the Democratic Republic of the Congo after ... Pope John Paul II issued a new appeal for peace in Iraq and the Middle East ... 9 mixure 10 symbols, links ... A HREF = \" http : / / www.reuters.co.uk / financeQuoteLookup.jhtml ... & lt ; strong & gt ; Analysis & lt ; / strong & gt ; Contracting out the blame ..." |
|
} |
|
} |
|
} |
|
} |