{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:19.871174Z" }, "title": "Finetuning Pretrained Transformers into Variational Autoencoders", "authors": [ { "first": "Seongmin", "middle": [], "last": "Park", "suffix": "", "affiliation": { "laboratory": "", "institution": "ActionPower Seoul", "location": { "country": "Republic of Korea" } }, "email": "seongmin.park@actionpower.kr" }, { "first": "Jihwa", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "ActionPower Seoul", "location": { "country": "Republic of Korea" } }, "email": "jihwa.lee@actionpower.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model's decoder learns to ignore signals from the encoder. Because posterior collapse is known to be exacerbated by expressive decoders, Transformers have seen limited adoption as components of text VAEs. Existing studies that incorporate Transformers into text VAEs (Li et al., 2020; Fang et al., 2021) mitigate posterior collapse using massive pretraining, a technique unavailable to most of the research community without extensive computing resources. We present a simple two-phase training scheme to convert a sequence-to-sequence Transformer into a VAE with just finetuning. The resulting language model is competitive with massively pretrained Transformer-based VAEs in some internal metrics while falling short on others. To facilitate training we comprehensively explore the impact of common posterior collapse alleviation techniques in the literature. We release our code for reproducability 1 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model's decoder learns to ignore signals from the encoder. Because posterior collapse is known to be exacerbated by expressive decoders, Transformers have seen limited adoption as components of text VAEs. Existing studies that incorporate Transformers into text VAEs (Li et al., 2020; Fang et al., 2021) mitigate posterior collapse using massive pretraining, a technique unavailable to most of the research community without extensive computing resources. We present a simple two-phase training scheme to convert a sequence-to-sequence Transformer into a VAE with just finetuning. The resulting language model is competitive with massively pretrained Transformer-based VAEs in some internal metrics while falling short on others. To facilitate training we comprehensively explore the impact of common posterior collapse alleviation techniques in the literature. We release our code for reproducability 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Properly tamed latent models offer explainable and interpolatable representations of observed data. Recent works have shown such models to be especially useful in unsupervised learning settings. adapt a generative latent text model for successful unsupervised text style transfer and machine translation. achieve superior language modeling performance against common conditional counterparts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A popular variant of deep latent models is the variational autoencoder (VAE) (Kingma and Welling, 2014) . 
For each observed x, the model assumes the existence of a corresponding multidimensional latent vector z. Since the log evidence log p(x) is intractable for most interesting problems, the training process for VAEs instead maximizes the evidence lower bound (ELBO): ELBO = E z\u223cq(z|x) [log p(x|z)] \u2212 D KL (q(z|x)||p(z)) (1) q(z|x) is a tractable, assumed posterior commonly modeled with a parametrized encoder q \u03c6 (z|x), while p(x|z) is the likelihood parametrized with a decoder p \u03b8 (x|z) that optimizes against reconstruction loss. While effective in theory, a common empirical challenge VAEs present during training is posterior collapse -a phenomenon where the decoder ignores the latent signal from z (and thus the originating input) during reconstruction. Posterior collapse can be diagnosed by checking whether D KL (q(z|x)||p(z)) tends to zero during training.", "cite_spans": [ { "start": 77, "end": 103, "text": "(Kingma and Welling, 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After Bowman et al. (2016) adopted the VAE for text, subsequent studies have attempted to mitigate posterior collapse in VAE language models (LMs). However, the brittle training process of VAE LMs remains an unsolved problem. Li et al. (2020) present a method to utilize deep Transformer (Vaswani et al., 2017) models as components of VAE LMs. Transformer-based VAEs tap into the state-of-the-art capabilities of Transformers while retaining the representational advantages of VAE LMs. That work mitigates posterior collapse with massive pretraining and a cyclical annealing schedule (Fu et al., 2019) .", "cite_spans": [ { "start": 6, "end": 26, "text": "Bowman et al. (2016)", "ref_id": "BIBREF0" }, { "start": 287, "end": 309, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF21" }, { "start": 577, "end": 594, "text": "(Fu et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While the study presents a promising outlook for Transformer VAEs, the suggested method is not accessible to researchers who lack access to large, target-domain-specific corpora or the computing power for massive LM pretraining. Therefore, a demand arises for a way to finetune an existing Transformer model into a VAE LM with limited resources. Our research attempts to fill this gap in the literature, and makes the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a simple but reliable (as replicated across several datasets) scheme to teach latent structure to a pretrained Transformer model through finetuning alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We convert a pretrained sequence-to-sequence Transformer into a VAE, instead of using two separate encoder-only (Devlin et al., 2019) and decoder-only (Radford et al., 2019) Transformers as in previous literature. 
This eliminates the need to maintain separate tokenizers and configurations for encoder and decoder.", "cite_spans": [ { "start": 114, "end": 135, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" }, { "start": 153, "end": 175, "text": "(Radford et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct ablation studies and extensive experiments to gauge the effectiveness of commonly used posterior collapse mitigation methods in taming Transformer VAEs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The resulting model extends existing Transformer architectures and can be initialized from pretrained non-latent model checkpoints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most VAE LMs employ recurrent neural networks (RNNs) as encoders and decoders. This is in part because enforcing a latent bottleneck layer undermines the effectiveness of encoder-decoder cross-attention in Transformers, and in significant part due to the co-occurrence of posterior collapse and powerful and deep decoder layers. overcome such training difficulties by massively increasing the number of training samples (104,213,036 sentences) for LM pretraining. and Fang et al. (2021) also finds success with Transformer VAEs for text generation. To avoid posterior collapse, Fang et al. (2021) follow the exact cyclic KL mitigation approach as that of , while introduce noise to network input.", "cite_spans": [ { "start": 468, "end": 486, "text": "Fang et al. (2021)", "ref_id": "BIBREF3" }, { "start": 578, "end": 596, "text": "Fang et al. (2021)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Transformer Text VAEs", "sec_num": "2" }, { "text": "This study identifies and explores the effect of popular posterior collapse mitigation methods in low-resource Transformer VAE training. We do not examine importance-weighted autoencoders (Burda et al., 2016) and semi-amortized autoencoders (Kim et al., 2018) to limit the scope of our experiments to unsophisticated prior distributions.", "cite_spans": [ { "start": 188, "end": 208, "text": "(Burda et al., 2016)", "ref_id": "BIBREF1" }, { "start": 241, "end": 259, "text": "(Kim et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Techniques to mitigate posterior collapse", "sec_num": "2.2" }, { "text": "Bowman et al. (2016) increases the KL term of the ELBO from zero to its full value during early stages of training, where the decoder learns to simply treat latent signal z as noise. Fu et al. (2019) extend this technique by cyclically manipulating the weight of the KL term. \u03b2-VAE (Higgins et al., 2017) and Yan et al. (2020) adopt a similar approach. We train the network without the KL term of the ELBO and retain encoder weights before jointly training the whole network. (Shen et al., 2020) Denoising text inputs by deleting random tokens motivate autoencoders (AEs) to learn better latent representations. Our study compares 0%, 15%, and 40% deletion noising schemes.", "cite_spans": [ { "start": 183, "end": 199, "text": "Fu et al. 
(2019)", "ref_id": "BIBREF4" }, { "start": 282, "end": 304, "text": "(Higgins et al., 2017)", "ref_id": "BIBREF6" }, { "start": 476, "end": 495, "text": "(Shen et al., 2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "KL Weighting / Annealing", "sec_num": "2.2.1" }, { "text": "KL-thresholding enforces a minimum \u03bb for each dimension of the KL term in the ELBO:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KL thresholding (Kingma et al., 2016)", "sec_num": "2.2.4" }, { "text": "L D KL = i max[\u03bb, D kl (q \u03c6 (z i |x)||p(z i ))] (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KL thresholding (Kingma et al., 2016)", "sec_num": "2.2.4" }, { "text": "where z i is a single dimension of z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KL thresholding (Kingma et al., 2016)", "sec_num": "2.2.4" }, { "text": "Instead of using the last hidden state as encoder output, averaging or taking the maximum of all encoder hidden states results in a more diverse latent representation. We experiment with both meanand max-pooling schemes from the encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder pooling (Long et al., 2019)", "sec_num": "2.2.5" }, { "text": "We extend the T5 architecture (Raffel et al., 2020) into a VAE. We modify a popular pretrained T5 model (Wolf et al., 2020) that deviates minimally from the original Transformer (Figure 1 ).", "cite_spans": [ { "start": 30, "end": 51, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF18" }, { "start": 104, "end": 123, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 178, "end": 187, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "Hidden states from all layers of T5's encoder q \u03c6 (z|x) are mean-or max-pooled into a vector h pooled \u2208 R H , where H is the encoder's hidden dimension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "Assumed prior q(z)'s mean \u00b5 and log variance \u03c3 vectors of dimension L is obtained from h pooled :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "\u00b5 = h pooled W \u00b5 , log\u03c3 = h pooled W \u03c3 (3) where W \u00b5 , W \u03c3 \u2208 R L .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "As in a standard VAE, a stochastic latent vector z is sampled using the reparameterization trick to enable back-propagation through sampling:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z = \u00b5 + \u03c3 , \u223c N (0, 1)", "eq_num": "(4)" } ], "section": "Model architecture", "sec_num": "3" }, { "text": "We pass z into the decoder p \u03b8 (x|z) as the only component of decoder's cross attention. Our Figure 1 : Transformer VAE architecture. A \"bottleneck\" step (W \u03c3 and W \u00b5 ) is placed between the encoder and the decoder of T5. Latent information from pooled encoder hidden states is captured in the bottleneck layer before being passed to the decoder. The network is optimized against regularization loss in the bottleneck and reconstruction loss at the decoder. 
method injects z into every layer of the decoder as in previous literature Fang et al., 2021) , but deviates in two important ways: first, we pass z as the sole key and value of encoder-decoder cross attention, instead of self-attention; second, we project z into the correct dimension (L \u00d7 A \u00d7 S, where L is the decoder layer count, A is the number of attention heads, and S is the embedding dimension per head) with a feed-forward network, instead of taking a copy of z to inject to each decoder layer.", "cite_spans": [ { "start": 533, "end": 551, "text": "Fang et al., 2021)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(K ca , V ca ) = (zW proj , zW proj )", "eq_num": "(5)" } ], "section": "Model architecture", "sec_num": "3" }, { "text": "where W proj \u2208 R L\u00d7A\u00d7S and K ca and V ca are key and value in decoder cross-attention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model architecture", "sec_num": "3" }, { "text": "During preliminary experiments, posterior collapse was observed in all training schemes without encoder warmup training. The decoder learns to ignore the initially noisy input signal from the encoder. Thus, we compose our finetuning method in two separate phases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Phase 1 -Encoder warmup: Weight of KL loss is set to zero, making our model's objective function similar to that of an AE. Different input denoising percentages, encoder pooling strategies, latent dimension sizes, and decoder freezing configurations are compared.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Phase 2 -Full finetuning: KL loss is reinstated and full VAE training are conducted. We compare different input denoising percentages, encoder pooling strategies, KL annealing schedules, and KL thresholds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We run our proposed two-phase finetuning training scheme on four standard VAE LM benchmark datasets: PTB (Marcus et al., 1993) , SNLI (Bowman et al., 2016), Yahoo (Yang et al., 2017) , and Yelp (Shen et al., 2017) .", "cite_spans": [ { "start": 105, "end": 126, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF16" }, { "start": 163, "end": 182, "text": "(Yang et al., 2017)", "ref_id": "BIBREF24" }, { "start": 194, "end": 213, "text": "(Shen et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Following and Li et al. 2019, we perform intrinsic evaluation of our proposed Transformer VAE architecture. We report perplexity (PPL), KL-divergence between model posterior and assumed posterior (KL), and negative ELBO on the test set. To assess the quality of learned latent codes, we also report mutual information (MI) (Hoffman and Johnson, 2016) and the number of active units (AU) (Burda et al., 2016) . MI measures the dependence of latent codes to encoder input. 
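In practice, MI is estimated with the Monte Carlo aggregate-posterior estimator common in this line of work; a rough sketch follows (the helper name and tensor shapes are ours, not taken from the released code):

```python
import math
import torch

def estimate_mi(mu, logvar, z):
    # Monte Carlo estimate of I(x; z) = E_x[log q(z|x)] - E[log q(z)],
    # where q(z) is approximated by averaging q(z|x_i) over N held-out inputs.
    # mu, logvar, z: [N, L] posterior parameters and one sample per input.
    n, latent_dim = mu.shape
    # E_q(z|x)[log q(z|x)] is the negative entropy of a diagonal Gaussian
    neg_entropy = (-0.5 * latent_dim * math.log(2 * math.pi)
                   - 0.5 * (1 + logvar).sum(-1)).mean()
    # log q(z_i | x_j) for every pair (i, j), then log-mean-exp over j
    z = z.unsqueeze(1)                                   # [N, 1, L]
    mu, logvar = mu.unsqueeze(0), logvar.unsqueeze(0)    # [1, N, L]
    log_density = -0.5 * (((z - mu) ** 2) / logvar.exp()
                          + logvar + math.log(2 * math.pi)).sum(-1)  # [N, N]
    log_qz = torch.logsumexp(log_density, dim=1) - math.log(n)
    return (neg_entropy - log_qz.mean()).item()
```
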
AU measures the covariance between encoder input and latent codes.", "cite_spans": [ { "start": 387, "end": 407, "text": "(Burda et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Experimental hyperparameters such as specific annealing schedules and training epochs per phase are detailed in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We find that freezing the decoder and the memory projection layer W proj while training with an AE objective is crucial in learning meaningful encoder outputs. Denoising is important for datasets with longer inputs (Yahoo, Yelp), but not critical for datasets with shorter input lengths (PTB, SNLI). Mean-pooling encoder hidden states presents a trade-off between MI and AU. Max-pooling consistently learns more informative encoder representations. Changes in MI and AU during training are illustrated in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 503, "end": 511, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phase 1", "sec_num": "5.1" }, { "text": "Latent dimensions of 64 and 128 were also tested. Increasing the latent dimension did not necessarily boost representational quality in terms of AU percentage. For latent dimensions of 32 and 64, 90% of latent dimension units were activated in best-performing models. For a latent dimension of 128, around 60% of latent units were active. Another interesting observation is that KL divergence on the validation set, although not part of the AE training objective, plateaus after repeated training. We regard this phenomenon as a signal of convergence in terms of representation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1", "sec_num": "5.1" }, { "text": "(Table 1 fragment) Columns: PPL\u2193, KL, -ELBO\u2193, MI\u2191, AU\u2191. Optimus (\u03bb = 0.5): 23.11, 17.45, 301.21, 8.85, 32. GPT-2 (Radford et al., 2019): 22.00, -, -, -, -. Encoder pretraining (\u03bb = 3): see Table 1. KLT denotes KL thresholding with \u03bb = 3. Our models are finetuned from a pretrained 6-layer T5, except the deep variant with 12 layers.", "cite_spans": [ { "start": 74, "end": 96, "text": "(Radford et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We observe, as in previous literature, a trade-off between language modeling PPL and representation quality metrics (MI and AU). This trade-off is exacerbated when using KL thresholding. While KL thresholding does significantly increase latent representation capabilities, it is not in itself sufficient to prevent posterior collapse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2", "sec_num": "5.2" }, { "text": "Denoising and encoder pooling configurations display the same characteristics as in Phase 1. In none of our experiments was the cyclical annealing schedule able to prevent posterior collapse, a result not in accordance with previous reports. Figure 3 illustrates the training progression of Phase 2. We also experimented with increasing model depth from 6 layers to 12 layers. Our proposed two-phase training scheme prevents posterior collapse for deeper models as well, resulting in higher performance in most metrics compared to 6-layer models. Results are reported in Table 1 . 
Note that lower PPL does not necessarily indicate better language modeling capabilities, since models with collapsed posterior display better PPL.", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 246, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 567, "end": 574, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Phase 2", "sec_num": "5.2" }, { "text": "Rows with KL above zero indicate successful aversion of posterior collapse. In the literature, no consensus yet exists on the optimal value of KL in training VAEs. Overall, we find that a denoising scheme between 0.15 and 0.4 in both phases, coupled with a low (0.5) KL threshold strikes a good balance between reconstruction and latent representation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2", "sec_num": "5.2" }, { "text": "This paper explores common methods in the literature for combatting posterior collapse, and the extent to which they help in teaching latent information to pretrained Transformer models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Comprehensive experiments show that commonly employed posterior collapse mitigation techniques provide meaningful benefits in transforming existing language models into latent-aware architectures. Among the tested procedures, we find that 's two-step training, coupled with Shen et al. (2020) 's denoising through token deletion, was the most impactful in mitigating posterior collapse. However, language models obtained via only finetuning exhibit consistent trade-offs between their latent representation metrics (MI, AU) and language model metrics (PPL). Optimizing our model to be competitive with massively pretrained baselines in one of the two metrics results in the model falling behind in the other.", "cite_spans": [ { "start": 274, "end": 292, "text": "Shen et al. (2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "We also find that increasing training epochs further improves the impact of tested techniques, a result consistent with previous literature on largescale text VAE pretraining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "From our experiments, we identify several questions to be answered by future research. The impact of homogenizing finetuning (as suggested in this paper) and original pretraining objectives on language model metrics has to be further explored. While the original T5 architecture was also pretrained with a self-supervised denoising scheme, the model employs mask tokens for denoising, contrary to simple token deletions suggested by this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Our findings also highlight the need for an established heuristic to interpret the quality of latent representations learned by language models. The research community has yet to decide on the optimal value of KL-divergence between the assumed prior and the model posterior to target during text VAE training. 
Empirical guidelines that establish even a rough threshold for the KL-divergence, below which posterior collapse is declared, would help both the training and the evaluation of latent-aware language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "A Phase 2 results on PTB, Yelp, and SNLI. (Table fragments; columns: PPL\u2193, KL, -ELBO\u2193, MI\u2191, AU\u2191.) PTB: Optimus (\u03bb = 0.5): 26.69, 15.72, 96.82, 7.64, 32; GPT-2 (Radford et al., 2019): 24.23, -, -, -, -. Yelp: Optimus (\u03bb = 0.5): 22.79, 15.09, 344.10, 9.13, 32; GPT-2 (Radford et al., 2019): 23.40, -, -, -, -. SNLI: Optimus (\u03bb = 0.5): 16.67, 16.35, 38.50, 8.89, 32; GPT-2 (Radford et al., 2019): 20.24, -, -, -, -. Encoder pretraining (\u03bb = 3) rows appear in the full tables.", "cite_spans": [ { "start": 120, "end": 142, "text": "(Radford et al., 2019)", "ref_id": "BIBREF17" }, { "start": 261, "end": 283, "text": "(Radford et al., 2019)", "ref_id": "BIBREF17" }, { "start": 413, "end": 435, "text": "(Radford et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "For all experiments we used an AdamW optimizer (Loshchilov and Hutter, 2019) with a starting learning rate of 1 \u00d7 10 \u22123 , \u03b2 1 = 0.9, \u03b2 2 = 0.999, and \u03b5 = 1 \u00d7 10 \u22123 . The linear KL annealing schedule we used was as follows:", "cite_spans": [ { "start": 46, "end": 74, "text": "(Loshchilov and Hutter, 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "B Experimental details", "sec_num": null }, { "text": "KL weight = current global step / (steps per epoch \u00d7 50)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Experimental details", "sec_num": null }, { "text": "Our slower, linear KL annealing schedule of 0 to 1 over 50 epochs yielded better empirical results than the linear schedule used in Li et al. (2019) (0 to 1 over 10 epochs). We attribute this result to the small number of training samples in our experiments. We train for 5 epochs in Phase 1 and 3 epochs in Phase 2. While further training leads to increased MI and AU, we limit the number of epochs to conform to the spirit of this study, which is to learn latent representations with minimal training. The 5-epoch limit on Phase 1 was empirically determined as the point where encoder MI begins to plateau. 
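Written as code, the schedule above amounts to the following helper (a sketch; the clamp at 1.0 is our assumption for steps beyond the 50-epoch annealing window, which our Phase 2 runs never reach):

```python
def kl_weight(global_step, steps_per_epoch, anneal_epochs=50):
    # Linear KL annealing from 0 to 1 over `anneal_epochs` epochs of training.
    return min(1.0, global_step / (steps_per_epoch * anneal_epochs))
```
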
Most experiments were conducted with z dimension of 32 for comparison with previous literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Experimental details", "sec_num": null }, { "text": "https://github.com/seongminp/ transformers-into-vaes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generating sentences from a continuous space", "authors": [ { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "10--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Importance weighted autoencoders", "authors": [ { "first": "Yuri", "middle": [], "last": "Burda", "suffix": "" }, { "first": "B", "middle": [], "last": "Roger", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Grosse", "suffix": "" }, { "first": "", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2016, "venue": "ICLR (Poster)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuri Burda, Roger B Grosse, and Ruslan Salakhutdinov. 2016. Importance weighted autoencoders. In ICLR (Poster).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Transformerbased conditional variational autoencoder for controllable story generation", "authors": [ { "first": "Le", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Chaochun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Liefeng", "middle": [], "last": "Bo", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Changyou", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.00828" ] }, "num": null, "urls": [], "raw_text": "Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, and Changyou Chen. 2021. Transformer- based conditional variational autoencoder for controllable story generation. arXiv preprint arXiv:2101.00828.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Cyclical annealing schedule: A simple approach to mitigating kl vanishing", "authors": [ { "first": "Hao", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "240--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. 2019. Cycli- cal annealing schedule: A simple approach to mit- igating kl vanishing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 240-250.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A probabilistic formulation of unsupervised text style transfer", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. A probabilistic formulation of unsupervised text style transfer. 
In International Conference on Learning Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "authors": [ { "first": "Irina", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Matthey", "suffix": "" }, { "first": "Arka", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Burgess", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Botvinick", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Lerchner", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irina Higgins, Lo\u00efc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. In 5th International Con- ference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "authors": [ { "first": "D", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Matthew J Johnson", "middle": [], "last": "Hoffman", "suffix": "" } ], "year": 2016, "venue": "Workshop in Advances in Approximate Bayesian Inference, NIPS", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew D Hoffman and Matthew J Johnson. 2016. Elbo surgery: yet another way to carve up the vari- ational evidence lower bound. In Workshop in Ad- vances in Approximate Bayesian Inference, NIPS, volume 1.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semi-amortized variational autoencoders", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 35th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "2678--2687", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim, Sam Wiseman, Andrew Miller, David Son- tag, and Alexander Rush. 2018. Semi-amortized variational autoencoders. In Proceedings of the 35th International Conference on Machine Learning, vol- ume 80 of Proceedings of Machine Learning Re- search, pages 2678-2687. PMLR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Auto-Encoding Variational Bayes", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2014, "venue": "2nd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto- Encoding Variational Bayes. 
In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Con- ference Track Proceedings.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improved variational inference with inverse autoregressive flow", "authors": [ { "first": "P", "middle": [], "last": "Durk", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Max", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autore- gressive flow. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A surprisingly effective fix for deep latent variable modeling of text", "authors": [ { "first": "Bohan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3603--3614", "other_ids": { "DOI": [ "10.18653/v1/D19-1370" ] }, "num": null, "urls": [], "raw_text": "Bohan Li, Junxian He, Graham Neubig, Taylor Berg- Kirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3603- 3614, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Optimus: Organizing sentences via pre-trained modeling of a latent space", "authors": [ { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Baolin", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4678--4699", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiu- jun Li, Yizhe Zhang, and Jianfeng Gao. 2020. Opti- mus: Organizing sentences via pre-trained modeling of a latent space. 
In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 4678-4699.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A transformerbased variational autoencoder for sentence generation", "authors": [ { "first": "Danyang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Gongshen", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "2019 International Joint Conference on Neural Networks (IJCNN)", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danyang Liu and Gongshen Liu. 2019. A transformer- based variational autoencoder for sentence genera- tion. In 2019 International Joint Conference on Neu- ral Networks (IJCNN), pages 1-7. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Preventing posterior collapse in sequence vaes with pooling. arXiv e-prints", "authors": [ { "first": "Teng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Yanshuai", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teng Long, Yanshuai Cao, and Jackie Chi Kit Cheung. 2019. Preventing posterior collapse in sequence vaes with pooling. arXiv e-prints, pages arXiv- 1911.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Con- ference on Learning Representations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Style transfer from non-parallel text by cross-alignment", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "6833--6844", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Proceedings of the 31st Inter- national Conference on Neural Information Process- ing Systems, NIPS'17, page 6833-6844, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Educating text autoencoders: Latent representation guidance via denoising", "authors": [ { "first": "Tianxiao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "8719--8729", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianxiao Shen, Jonas Mueller, Regina Barzilay, and Tommi Jaakkola. 2020. Educating text autoen- coders: Latent representation guidance via denois- ing. In International Conference on Machine Learn- ing, pages 8719-8729. 
PMLR.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Re-balancing variational autoencoder loss for molecule sequence generation", "authors": [ { "first": "Chaochao", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jinyu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tingyang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Junzhou", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB '20", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3388440.3412458" ] }, "num": null, "urls": [], "raw_text": "Chaochao Yan, Sheng Wang, Jinyu Yang, Tingyang Xu, and Junzhou Huang. 2020. Re-balancing variational autoencoder loss for molecule sequence generation. In Proceedings of the 11th ACM International Con- ference on Bioinformatics, Computational Biology and Health Informatics, BCB '20, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improved variational autoencoders for text modeling using dilated convolutions", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3881--3890", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved varia- tional autoencoders for text modeling using dilated convolutions. In Proceedings of the 34th Interna- tional Conference on Machine Learning -Volume 70, ICML'17, page 3881-3890. JMLR.org.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Phase 1 training on Yahoo. Labels are in the form {pooling strategy}_{denoise percent-age}_{decoder frozen}.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Phase 2 training on Yahoo. Labels are in the form {KL threshold}_{denoise percentage}. Encoder hidden states in plotted experiments were max-pooled.", "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "content": "", "text": "Phase 2 results on Yahoo. Due to space constraints, we report experimental results on other datasets in the appendix. Results on baselines are quoted from", "html": null, "num": null }, "TABREF3": { "type_str": "table", "content": "
", "text": "Phase 2 results on PTB", "html": null, "num": null }, "TABREF5": { "type_str": "table", "content": "
", "text": "Phase 2 results on Yelp.", "html": null, "num": null }, "TABREF7": { "type_str": "table", "content": "
", "text": "Phase 2 results on SNLI.", "html": null, "num": null } } } }