{ "paper_id": "D19-1048", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:59:36.800119Z" }, "title": "Latent-Variable Generative Models for Data-Efficient Text Classification", "authors": [ { "first": "Xiaoan", "middle": [], "last": "Ding", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chicago", "location": { "postCode": "60637", "settlement": "Chicago", "region": "IL", "country": "USA" } }, "email": "xiaoanding@uchicago.edu" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyota Technological Institute at Chicago", "location": { "postCode": "60637", "settlement": "Chicago", "region": "IL", "country": "USA" } }, "email": "kgimpel@ttic.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Generative classifiers offer potential advantages over their discriminative counterparts, namely in the areas of data efficiency, robustness to data shift and adversarial examples, and zero-shot learning (Ng and Jordan, 2002; Yogatama et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text classifiers by introducing discrete latent variables into the generative story, and explore several graphical model configurations. We parameterize the distributions using standard neural architectures used in conditional language modeling and perform learning by directly maximizing the log marginal likelihood via gradient-based optimization, which avoids the need to do expectation-maximization. We empirically characterize the performance of our models on six text classification datasets. The choice of where to include the latent variable has a significant impact on performance, with the strongest results obtained when using the latent variable as an auxiliary conditioning variable in the generation of the textual input. This model consistently outperforms both the generative and discriminative classifiers in small-data settings. We analyze our model by using it for controlled generation, finding that the latent variable captures interpretable properties of the data, even with very small training sets.", "pdf_parse": { "paper_id": "D19-1048", "_pdf_hash": "", "abstract": [ { "text": "Generative classifiers offer potential advantages over their discriminative counterparts, namely in the areas of data efficiency, robustness to data shift and adversarial examples, and zero-shot learning (Ng and Jordan, 2002; Yogatama et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text classifiers by introducing discrete latent variables into the generative story, and explore several graphical model configurations. We parameterize the distributions using standard neural architectures used in conditional language modeling and perform learning by directly maximizing the log marginal likelihood via gradient-based optimization, which avoids the need to do expectation-maximization. We empirically characterize the performance of our models on six text classification datasets. The choice of where to include the latent variable has a significant impact on performance, with the strongest results obtained when using the latent variable as an auxiliary conditioning variable in the generation of the textual input. This model consistently outperforms both the generative and discriminative classifiers in small-data settings. 
We analyze our model by using it for controlled generation, finding that the latent variable captures interpretable properties of the data, even with very small training sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The most widely-used neural network classifiers are discriminative, that is, they are trained to explicitly favor the gold standard label over others. The alternative is to design classifiers that are generative; these follow a generative story that includes predicting the label and then the data conditioned on the label. Discriminative classifiers are preferred because they generally outperform their generative counterparts on standard benchmarks. These benchmarks typically assume large annotated training sets, little mismatch between training and test distributions, relatively clean data, and a lack of adversarial examples (Zue et al., 1990; Marcus et al., 1993; Deng et al., 2009; Lin et al., 2014) .", "cite_spans": [ { "start": 633, "end": 651, "text": "(Zue et al., 1990;", "ref_id": "BIBREF44" }, { "start": 652, "end": 672, "text": "Marcus et al., 1993;", "ref_id": "BIBREF28" }, { "start": 673, "end": 691, "text": "Deng et al., 2009;", "ref_id": "BIBREF11" }, { "start": 692, "end": 709, "text": "Lin et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, when conditions are not ideal for discriminative classifiers, generative classifiers can actually perform better. Ng and Jordan (2002) showed theoretically that linear generative classifiers approach their asymptotic error rates more rapidly than discriminative ones. Based on this finding, Yogatama et al. (2017) empirically characterized the performance of RNN-based generative classifiers, showing advantages in sample complexity, zero-shot learning, and continual learning. Recent work in generative question answering models (Lewis and Fan, 2019) demonstrates better robustness to biased training data and adversarial testing data than state-of-the-art discriminative models.", "cite_spans": [ { "start": 123, "end": 143, "text": "Ng and Jordan (2002)", "ref_id": "BIBREF33" }, { "start": 300, "end": 322, "text": "Yogatama et al. (2017)", "ref_id": "BIBREF40" }, { "start": 539, "end": 560, "text": "(Lewis and Fan, 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on settings with small amounts of annotated data and improve generative text classifiers by introducing discrete latent variables into the generative story. Accordingly, the training objective is changed to log marginal likelihood of the data as we marginalize out the latent variables during learning. We parameterize the distributions with standard neural architectures used in conditional language models and include the latent variable by concatenating its embedding to the RNN hidden state before computing the softmax over words. While traditional latent variable learning in NLP uses the expectation-maximization (EM) algorithm (Dempster et al., 1977) , we instead simply perform direct optimization of the log marginal likelihood using gradient-based methods. 
At inference time, we similarly marginalize out the latent variables while maximizing over the label.", "cite_spans": [ { "start": 659, "end": 682, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We characterize the performance of our latent-variable generative classifiers on six text classification datasets introduced by Zhang et al. (2015) . We observe that introducing latent variables leads to large and consistent performance gains in the small-data regime, though the benefits of adding latent variables reduce as the training set becomes larger.", "cite_spans": [ { "start": 128, "end": 147, "text": "Zhang et al. (2015)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To better understand the modeling space of latent variable classifiers, we explore several graphical model configurations. Our experimental results demonstrate the importance of including a direct dependency between the label and the input in the model. We study the relationship between the label, latent, and input variables in our strongest latent generative classifier, finding that the label and latent capture complementary information about the input. Some information about the textual input is encoded in the latent variable to help with generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We analyze our latent generative model by generating samples when controlling the label and latent variables. Even with small training data, the samples capture the salient characteristics of the label space while also conforming to the values of the latent variable, some of which we find to be interpretable. While discriminative classifiers excel at separating examples according to labels, generative classifiers offer certain advantages in practical settings that benefit from a richer understanding of the data-generating process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin by defining our baseline generative and discriminative text classifiers for document classification. Our models are essentially the same as those from Yogatama et al. (2017); we describe them in detail here because our latent-variable models will extend them. 1 Our classifiers are trained on datasets D of annotated documents. Each instance (x, y) \u2208 D consists of a textual input x = {x_1, x_2, ..., x_T}, where T is the length of the document, and a label y \u2208 Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "1 The main differences between our baselines and the models in Yogatama et al. (2017) are: (1) their discriminative classifier uses an LSTM with \"peephole connections\"; (2) they evaluate a label-based generative classifier (\"Independent LSTMs\") that uses a separate LSTM for each label. They also evaluate the model we described here, which they call \"Shared LSTMs\". Their Independent and Shared LSTMs perform similarly across training set sizes.", "cite_spans": [ { "start": 63, "end": 85, "text": "Yogatama et al. 
(2017)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "The discriminative classifier is trained to maximize the conditional probability of labels given documents:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "x,y \u2208D log p(y | x). For our discriminative model, we encode a document x using an LSTM (Hochreiter and Schmidhuber, 1997) , and use the average of the LSTM hidden states as the document representation. The classifier is built by adding a softmax layer on top of the LSTM state average to get a probability distribution over labels.", "cite_spans": [ { "start": 88, "end": 122, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "The generative classifier is trained to maximize the joint probability of documents and labels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "x,y \u2208D log p(x, y). The generative classifier uses the following factorization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(x, y) = p(x | y)p(y)", "eq_num": "(1)" } ], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "We parameterize log p(x | y) as a conditional LSTM language model using the standard sequential factorization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative and Generative Text Classifiers", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log p(x | y) = T t=1 log p(x t | x