{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:59.872444Z" }, "title": "Challenging the Semi-Supervised VAE Framework for Text Classification", "authors": [ { "first": "Ghazi", "middle": [], "last": "Felhi", "suffix": "", "affiliation": { "laboratory": "UMR 7030", "institution": "LIPN Universit\u00e9 Sorbonne Paris Nord -CNRS", "location": { "postCode": "F-93430", "settlement": "Villetaneuse", "country": "France" } }, "email": "felhi@lipn.fr" }, { "first": "Joseph", "middle": [], "last": "Le Roux", "suffix": "", "affiliation": { "laboratory": "UMR 7030", "institution": "LIPN Universit\u00e9 Sorbonne Paris Nord -CNRS", "location": { "postCode": "F-93430", "settlement": "Villetaneuse", "country": "France" } }, "email": "leroux@lipn.fr" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "", "affiliation": { "laboratory": "", "institution": "INRIA", "location": { "settlement": "Paris Paris", "country": "France" } }, "email": "djame.seddah@inria.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Semi-Supervised Variational Autoencoders (SSVAEs) are widely used models for data efficient learning. In this paper, we question the adequacy of the standard design of sequence SSVAEs for the task of text classification as we exhibit two sources of overcomplexity for which we provide simplifications. These simplifications to SSVAEs preserve their theoretical soundness while providing a number of practical advantages in the semisupervised setup where the result of training is a text classifier. These simplifications are the removal of (i) the Kullback-Liebler divergence from its objective and (ii) the fully unobserved latent variable from its probabilistic model. These changes relieve users from choosing a prior for their latent variables, make the model smaller and faster, and allow for a better flow of information into the latent variables. We compare the simplified versions to standard SSVAEs on 4 text classification tasks. On top of the above-mentioned simplification, experiments show a speed-up of 26%, while keeping equivalent classification scores. The code to reproduce our experiments is public 1 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Semi-Supervised Variational Autoencoders (SSVAEs) are widely used models for data efficient learning. In this paper, we question the adequacy of the standard design of sequence SSVAEs for the task of text classification as we exhibit two sources of overcomplexity for which we provide simplifications. These simplifications to SSVAEs preserve their theoretical soundness while providing a number of practical advantages in the semisupervised setup where the result of training is a text classifier. These simplifications are the removal of (i) the Kullback-Liebler divergence from its objective and (ii) the fully unobserved latent variable from its probabilistic model. These changes relieve users from choosing a prior for their latent variables, make the model smaller and faster, and allow for a better flow of information into the latent variables. We compare the simplified versions to standard SSVAEs on 4 text classification tasks. On top of the above-mentioned simplification, experiments show a speed-up of 26%, while keeping equivalent classification scores. 
The code to reproduce our experiments is public 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Obtaining labeled data to train NLP systems is a process that has often proven to be costly and time-consuming, and this is still largely the case (Mart\u00ednez Alonso et al., 2016; Seddah et al., 2020) . Consequently, semi-supervised approaches are appealing to improve performance while alleviating dependence on annotations. To that end, Variational Autoencoders (VAEs) (Kingma and Welling, 2014) have been adapted to semi-supervised learning , and subsequently applied to several NLP tasks (Chen et al., 2018a; Corro and Titov, 2019; Gururangan et al., 2020) .", "cite_spans": [ { "start": 147, "end": 177, "text": "(Mart\u00ednez Alonso et al., 2016;", "ref_id": "BIBREF17" }, { "start": 178, "end": 198, "text": "Seddah et al., 2020)", "ref_id": "BIBREF19" }, { "start": 490, "end": 510, "text": "(Chen et al., 2018a;", "ref_id": "BIBREF3" }, { "start": 511, "end": 533, "text": "Corro and Titov, 2019;", "ref_id": "BIBREF5" }, { "start": 534, "end": 558, "text": "Gururangan et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A notable difference between the generative model case from where VAEs originate, and the 1 https://github.com/ghazi-f/Challenging-SSVAEs semi-supervised case is that only the decoder (generator) of the VAE is kept after training in the first case, while in the second, it is the encoder (classifier) that we keep. This difference, as well as the autoregressive nature of text generators has not sufficiently been taken into account in the adaptation of VAEs to semi-supervised text classification. In this work, we show that some components can be ablated from the long used semi-supervised VAEs (SSVAEs) when only aiming for text classification. These ablations simplify SSVAEs and offer several practical advantages while preserving their performance and theoretical soundness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The usage of unlabeled data through SSVAEs is often described as a regularization on representations (Chen et al., 2018a; Wolf-Sonkin et al., 2018; Yacoby et al., 2020) . More specifically, SSVAEs add to the supervised learning signal, a conditional generation learning signal that is used to train on unlabeled samples. From this observation, we study two changes to the standard SSVAE framework. The first simplification we study is the removal of a term from the objective of SSVAEs: the Kullback-Leibler term. This encourages the flow of information into latent variables, frees the users from choosing priors for their latent variables, and is harmless to the theoretical soundness of the semisupervised framework. The second simplification we study is made to account for the autoregressive nature of text generators. In the general case, input samples in SSVAEs are described with two latent variables: a partially-observed latent variable, which is also used to infer the label for the supervised learning task, and an unobserved latent variable, which describes the rest of the variability in the data. However, autoregressive text generators are powerful enough to converge without the need for latent variables. Therefore, removing the unobserved latent variable is the second change we study in SSVAEs. 
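To make the second simplification concrete, the sketch below contrasts a standard SSVAE encoder, which parameterizes both the label posterior q_phi(y|x) and a Gaussian unobserved latent q_phi(z|x), with the simplified variant that keeps only the label head. This is a minimal illustrative sketch only: the GRU sentence encoder, module names, and dimensions are assumptions and are not taken from the paper's released code.

    import torch
    import torch.nn as nn

    class StandardSSVAEEncoder(nn.Module):
        """Standard SSVAE encoder: q(y|x) and q(z|x) share a sentence encoder (illustrative)."""
        def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_classes=2, z_dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.y_logits = nn.Linear(hid_dim, n_classes)  # parameters of q(y|x)
            self.z_mu = nn.Linear(hid_dim, z_dim)           # mean of Gaussian q(z|x)
            self.z_logvar = nn.Linear(hid_dim, z_dim)       # log-variance of q(z|x)

        def forward(self, tokens):
            h = self.rnn(self.embed(tokens))[1].squeeze(0)  # final hidden state, shape (batch, hid_dim)
            return self.y_logits(h), self.z_mu(h), self.z_logvar(h)

    class SimplifiedEncoder(nn.Module):
        """Simplified variant: the unobserved latent z is dropped, only q(y|x) remains."""
        def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.y_logits = nn.Linear(hid_dim, n_classes)

        def forward(self, tokens):
            h = self.rnn(self.embed(tokens))[1].squeeze(0)
            return self.y_logits(h)

The only structural difference is the removal of the z heads (and, symmetrically, of the z input to the decoder), which is what makes the simplified model smaller and faster while leaving the classifier q_phi(y|x) untouched.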
The above modifications can be found in some rare works throughout the literature, e.g. (Corro and Titov, 2019) . We, however, aim to provide justification for these changes beyond the empirical gains that they exhibit for some tasks.", "cite_spans": [ { "start": 101, "end": 121, "text": "(Chen et al., 2018a;", "ref_id": "BIBREF3" }, { "start": 122, "end": 147, "text": "Wolf-Sonkin et al., 2018;", "ref_id": "BIBREF20" }, { "start": 148, "end": 168, "text": "Yacoby et al., 2020)", "ref_id": "BIBREF22" }, { "start": 1403, "end": 1426, "text": "(Corro and Titov, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments on four text classification datasets show no harm to the empirical classification performance of SSVAE in applying the simplifications above. Additionally, we show that removing the unobserved latent variable leads to a significant speed-up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To summarize our contribution, we justify two simplifications to the standard SSVAE framework, explain the practical advantage of applying these modifications, and provide empirical results showing that they speed up the training process while causing no harm to the classification performance. VAEs also include an approximate posterior (also called the encoder) q \u03c6 (z|x). Both are used during training to maximize an objective called the Evidence Lower Bound (ELBo), a lower-bound of the log-likelihood:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log p \u03b8 (x) \u2265 E z\u223cq \u03c6 (z|x) [log p \u03b8 (x|z)] \u2212 KL [q \u03c6 (z|x); p \u03b8 (z)] = ELBo(x; z)", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "Throughout the paper, we will continue to use this ELBo(.; .) operator, with the observed variable(s) as a first argument, and the latent variable(s) as a second argument. In the original VAE framework, after training, the encoder q \u03c6 is discarded and only the generative model (the prior and the decoder) are kept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The idea of using the VAE encoder as a classifier for semi-supervised learning has first been explored in . Besides the usual unobserved latent variable z, the semi-supervised VAE framework also uses a partially-observed latent variable y. The encoder q \u03c6 (y|x) serves both as the inference module for the supervised task, and as an approximate posterior (and encoder) for the y variable in the VAE framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "Consider a set of labeled examples L = {(x 1 , y 1 ), ..., (x |L| , y |L| )}, and a set of unlabeled examples U = {x 1 , ..., x |U | }. For the set L, q \u03c6 (y|x) is trained i) with the usual supervised objective (typically, a cross-entropy objective for a classification task) ii) with an ELBo that considers x and y to be observed, and z to be a latent variable. A weight \u03b1 is used on the supervised objective to control its balance with ELBo. 
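As a concrete reading of Eq. 1, the following sketch computes a single-sample Monte Carlo estimate of ELBo(x; z) for the common case of a Gaussian approximate posterior q_phi(z|x) and a standard normal prior p(z) = N(0, I). It is a minimal sketch under those assumptions; the decoder interface (a log_prob method returning log p_theta(x|z) summed over tokens) and the variable names are illustrative and do not come from the paper's implementation.

    import torch

    def elbo(x, decoder, z_mu, z_logvar):
        """One-sample estimate of Eq. 1: E_q[log p(x|z)] - KL[q(z|x) || N(0, I)]."""
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        eps = torch.randn_like(z_mu)
        z = z_mu + torch.exp(0.5 * z_logvar) * eps

        # Reconstruction term, assuming the decoder exposes log p(x|z) directly
        log_px_given_z = decoder.log_prob(x, z)

        # Closed-form KL between N(mu, sigma^2) and the standard normal prior
        kl = 0.5 * torch.sum(z_logvar.exp() + z_mu.pow(2) - 1.0 - z_logvar, dim=-1)

        return log_px_given_z - kl

Dropping the KL term discussed in the paper amounts to returning only the reconstruction term in such a sketch, which is also what removes the need to choose the prior in the first place.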
For the set U , q \u03c6 (y|x) is only trained as part of the VAE model with an ELBO where y is used, this time, as a latent variable like z. Formally, the training objective J \u03b1 of a SSVAE is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J \u03b1 = (x,y)\u2208L ELBo((x, y); z) + \u03b1 log q \u03c6 (y|x) + x\u2208U ELBo(x; (y, z))", "eq_num": "(2)" } ], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "3 Simplifying SSVAEs for Text Classification", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "The simplifications we propose stem from the analysis of an alternative form under which ELBO can be written (Eq. 2.8 in Kingma and Welling, 2019). Although it is valid for any arguments of ELBo(.; .), we display it here for an observed variable x, and the couple of latent variables (y, z):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "ELBo(x; (y, z)) = log p \u03b8 (x) \u2212 KL[q \u03c6 (y, z|x)||p \u03b8 (y, z|x)] (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "For the case of SSVAEs, this form provides a clear reading of the additional effect of ELBo on the learning process: i) maximizing the log-likelihood of the generative model p \u03b8 (x), ii) bringing the parameters of the inference model q \u03c6 (y, z|x) closer to the posterior of the generative model p \u03b8 (y, z|x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "Since p \u03b8 (y, z|x) is the distribution of the latent variables expected by the generative model p \u03b8 for it to be able to generate x, we can conclude that ELBo trains both latent variables for conditional generation on the unsupervised dataset U .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised VAEs", "sec_num": "2.2" }, { "text": "Building on observations from equation 3, we question the usefulness of training both latent variables for conditional generation when semi-supervised learning only aims for an improvement on the inference of the partially-observed latent variable y. For the case of language generation, the sequence of discrete symbols in each sample is often modeled by an autoregressive distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dropping the Unobserved Latent Variable", "sec_num": "3.1" }, { "text": "p \u03b8 (x|y, z) = i p \u03b8 (x i |y, z, x