diff --git "a/PdFIT4oBgHgl3EQfeytv/content/tmp_files/load_file.txt" "b/PdFIT4oBgHgl3EQfeytv/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/PdFIT4oBgHgl3EQfeytv/content/tmp_files/load_file.txt" @@ -0,0 +1,672 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf,len=671 +page_content='BayesSpeech: A Bayesian Transformer Network for Automatic Speech Recognition Will Rieger Master of Science in Computer Science Department of Computer Science The University of Texas at Austin Abstract Recent developments using End-to-End Deep Learning models have been shown to have near or better performance than state of the art Recurrent Neural Networks (RNNs) on Automatic Speech Recognition tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' These models tend to be lighter weight and require less training time than traditional RNN-based approaches.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' However, these models take frequentist approach to weight training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' In theory, network weights are drawn from a latent, intractable probability distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' We introduce BayesSpeech for end-to-end Automatic Speech Recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' BayesSpeech is a Bayesian Transformer Network where these intractable posteriors are learned through variational inference and the local reparameterization trick without recurrence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' We show how the introduction of variance in the weights leads to faster training time and near state-of-the-art perfor- mance on LibriSpeech-960.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' Introduction In the majority of neural networks, randomness is usually introduced through perturbation of the input or randomly removing nodes from the network (Hinton et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=', 2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' There has been great success using these methods across a variety of domains including Automatic Speech Recognition (Park et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' Models continue to evolve.' 
However, data augmentation methods rarely take large leaps in terms of the features they can help express. Newer models grow ever larger and require enormous amounts of compute to train properly. We see this especially in the field of Automatic Speech Recognition. Newer models such as Jasper (Li et al., 2019), the Conformer (Gulati et al., 2020), LAS (Chan et al., 2016), and the Transformer (Vaswani et al., 2017) all require training for multiple days across multiple GPUs. Creating deeper models can certainly help attain better performance on the domain task. But what if we approach these models differently and try to leverage their probabilistic nature?

Most neural network models take a frequentist approach to model training. As we introduce non-linearities and apply gradient methods to these optimization problems, we become less and less certain that we have reached a true minimum. In theory, our true weights are drawn from an intractable prior distribution. If we approach the problem through a Bayesian lens, we can better contextualize our model's output and weights on the input data.
Using variational inference techniques, we can design a network whose weights are drawn from a learnable, tractable posterior. We present BayesSpeech, a Bayesian Transformer network for end-to-end Automatic Speech Recognition in which the feed-forward layers are contextualized with probability distributions.

2. Background

2.1 Automatic Speech Recognition

Automatic Speech Recognition models have been evolving rapidly in recent years. Models can either be sub-domain specific and focus on speech representation (Mohamed et al., 2012; Lee et al., 2009; Conneau et al., 2020; Devlin et al., 2018; Schneider et al., 2019; Chung et al., 2019; Maas, 2013; Baevski et al., 2020) or attention (Vaswani et al., 2017; Chorowski et al., 2015; Conneau et al., 2020), or be end-to-end and incorporate the aforementioned components into one jointly trained model.
2.1.1 Connectionist Temporal Classification

In order to jointly train an end-to-end model, including alignment and encoding/decoding of the input/output sequences, Connectionist Temporal Classification (CTC) loss can be used to better manage the alignments (Graves et al., 2006). Alignment of the input and output sequences is especially challenging in speech recognition tasks because the input sequence is generally longer than the output sequence. CTC loss aids this process by penalizing models based on the joint probability of the current token in the sequence and all other predicted tokens. For decoders that only output set-length sequences, we can further augment the CTC loss by utilizing traditional Cross Entropy loss on the predictions (Hori et al., 2017). By enabling a joint CTC and Cross Entropy (CE) loss function, we penalize characters not only based on their sequence but also on their absolute positioning in the output. This is covered further in Section 3.2.2.
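As a concrete illustration of how such a joint penalty can be assembled (a sketch, not the implementation used in this work), the snippet below combines PyTorch's built-in CTC loss over frame-level log-probabilities with a cross-entropy loss over fixed-length decoder outputs. The tensor shapes and the 0.5/0.5 weights are placeholders; the 0.3/0.7 weighting actually used is given in Section 3.2.2.

```python
import torch
import torch.nn as nn

# Illustrative shapes: T = input (frame) length, N = batch size, C = vocabulary size, S = target length.
T, N, C, S = 120, 4, 32, 40

log_probs = torch.randn(T, N, C).log_softmax(dim=-1)   # frame-level output distributions
targets = torch.randint(1, C, (N, S))                   # token ids; 0 is reserved as the CTC blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

# CTC marginalizes over every alignment of the longer input sequence onto the target tokens.
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)(log_probs, targets, input_lengths, target_lengths)

# A fixed-length decoder output can additionally be scored position-by-position with cross entropy.
decoder_logits = torch.randn(N, S, C)                   # one prediction per output position
ce_loss = nn.CrossEntropyLoss()(decoder_logits.reshape(-1, C), targets.reshape(-1))

# Placeholder weighting; the 0.3 / 0.7 split actually used is introduced in Section 3.2.2.
joint_loss = 0.5 * ctc_loss + 0.5 * ce_loss
```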
2.1.2 Models for End-to-End Speech Recognition

End-to-end speech recognition models combine all of the individual aspects of Automatic Speech Recognition into one model that is trained jointly. Traditional models rely on RNNs, LSTMs, and recurrence in general for defining the output sequence (Liu et al., 2022; Chan et al., 2016). These models are cumbersome to train and parallelize and create additional operational hurdles in proper tuning.

Recently, new models involving convolutions and linear outputs have been used within encoder-decoder frameworks for end-to-end speech recognition tasks. Models such as Jasper (Li et al., 2019), the Conformer (Gulati et al., 2020), and the Transformer (Dong et al., 2018) are huge, with one billion or more parameters, and rely on feed-forward architectures. All three were trained for multiple days on multiple GPUs and required enormous compute power. The Speech-Transformer model (Dong et al., 2018) tried to address these issues with a thinner model that yields similar performance on the WSJ dataset. Compared to its larger counterparts it did not have the same performance characteristics, although it did lend hope that smaller models could be trained to compete with them.

2.2 Bayesian Methods

Bayesian models have begun to show further promise in multiple fields such as image recognition (Blundell et al., 2015), attention mechanisms (Zhang et al., 2021; Fan et al., 2020), and auto-encoders (Kingma & Welling, 2013).
In a Bayesian approach, network weights are samples from an intractable distribution which we can estimate over training iterations through variational inference.

2.2.1 Variational Inference

Variational Inference (VI) is the estimation of an intractable distribution by minimizing the Kullback-Leibler divergence ($D_{KL}$) between a sample and some true distribution. Different works have shown that these estimation methods have value when applied to a Bayesian neural network (Graves, 2011; Kingma & Welling, 2013). There are two different approaches to VI that largely depend on what the true distribution is believed to be. If the true prior can be any distribution, Monte-Carlo (MC) sampling is the only option for estimating the gradient from $D_{KL}$. When using MC sampling, the network is sampled multiple times for the same input and the gradients are averaged across the number of samples. If the prior is assumed to be Gaussian, the KL divergence can be calculated explicitly and the Local Reparameterization Trick (Kingma et al., 2015) can be used to find the gradient with just one sample. This is discussed further in Section 3.1.3.
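As a small, self-contained illustration of this distinction (not taken from the paper), the snippet below compares a Monte-Carlo estimate of the KL divergence with its closed form when both the approximate posterior q and the prior p are Gaussian; the noisy MC estimate is what multi-sample training would have to average over.

```python
import torch

# Variational posterior q = N(mu, sigma^2) and prior p = N(0, 1) for a single weight.
mu, sigma = torch.tensor(0.4), torch.tensor(0.8)

# Closed form: KL(q || p) between two univariate Gaussians with p = N(0, 1).
kl_closed = torch.log(1.0 / sigma) + (sigma ** 2 + mu ** 2) / 2.0 - 0.5

# Monte-Carlo estimate: average log q(w) - log p(w) over samples drawn from q.
w = mu + sigma * torch.randn(10_000)
log_q = torch.distributions.Normal(mu, sigma).log_prob(w)
log_p = torch.distributions.Normal(0.0, 1.0).log_prob(w)
kl_mc = (log_q - log_p).mean()

print(f"closed form: {kl_closed.item():.4f}   monte-carlo: {kl_mc.item():.4f}")
# The MC estimate converges to the closed form but is noisy for small sample counts,
# which is exactly the gradient-variance problem the closed-form route avoids.
```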
2.2.2 Bayes by Backprop

Blundell et al. introduced the Bayes by Backprop algorithm for jointly learning the intractable distribution as well as a domain problem in a Bayesian neural network. They introduce the joint loss function in two parts:

1. weighting the KL divergence of the model against the epoch, and
2. creating an Evidence Lower Bound (ELBO) on the loss function for the purpose of training.

We discuss the weighting of the KL divergence loss term over time in Section 3.2.1. The introduction of the ELBO loss serves as the method for tuning an efficient approximator for the maximum likelihood given an a-posteriori inference of the parameters. Because we are always sampling from an intractable distribution, the KL divergence term can be thought of as a regularization constant on the network. Over time, the KL divergence's impact on the loss will encourage the approximate posterior to be close to the true prior. While not readily apparent, the ELBO loss is implicitly involved in the loss function described in Section 3.2.3.

3. Research & Methods

As introduced above, sampling network weights (the Bayesian approach) rather than explicitly defining them (the frequentist approach) has been shown to yield increased performance and faster convergence times. Our goal is to produce a network, leveraging Bayesian layers, that competes with state-of-the-art models while requiring less training time.
BayesSpeech is largely based on the recurrence-free Transformer (Vaswani et al., 2017). While we leverage that model's general architecture, we introduce modified Encoder and Decoder layers with Bayesian, position-wise feed-forward sub-layers. In this section, we explore the core components of the model, the model's architecture, and a new training methodology for an ensemble loss function.

3.1 Core Components

3.1.1 Attention Mechanisms

Part of Vaswani et al.'s Transformer (Vaswani et al., 2017) was the introduction of Scaled Dot-Product Attention and, further, Multi-Head Attention. The goal of these mechanisms is to generate a temporally rich representation of the inputs by attending to different positions within the input sequence. An attention function maps a query (or set of queries) in a matrix $Q$ and a set of key-value pairs in matrices $K, V$ to the input sequence. Scaled Dot-Product Attention computes the matrix product $QK^T$, normalizes it by $\sqrt{d_k}$ (the dimension of the keys $K$), applies the softmax, and multiplies the result with $V$ (Equation 1). The normalization by the key size is used to prevent the softmax function from suffering the vanishing gradient problem.

$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \qquad (1)$$
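A minimal sketch of Equation 1 (illustrative only; the tensor shapes are assumptions):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Equation 1: Softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, seq_q, seq_k)
    weights = torch.softmax(scores, dim=-1)            # one attention distribution per query
    return weights @ v                                  # (batch, seq_q, d_v)

# Example: a batch of 2 sequences, 5 positions, 64-dimensional keys/values.
q = torch.randn(2, 5, 64)
k = torch.randn(2, 5, 64)
v = torch.randn(2, 5, 64)
out = scaled_dot_product_attention(q, k, v)            # (2, 5, 64)
```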
Scaled Dot-Product Attention only performs a single attention function at a time. Multi-Head Attention addresses this by linearly projecting the queries, keys, and values $h$ times to each of the input dimensions ($d_q$, $d_k$, $d_v$, respectively). Scaled Dot-Product Attention is then applied to these newly projected inputs in order to attend across the $h$ different "heads" (Equation 2). Each $\mathrm{head}_i$ is equal to $\mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$. This way the model jointly attends to different input representations across the different subspaces introduced through the projections.

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^{O} \qquad (2)$$

3.1.2 Sinusoidal Positional Encoding

Because the model has neither recurrence nor convolutions, information about the position of the tokens in the sequence must be introduced in order to make use of the temporal attentions from the Multi-Head modules (Vaswani et al., 2017). Positional encodings are used in both the Encoder and Decoder modules to align each module's outputs and allow them to be summed. The model leverages sinusoidal embeddings (Equation 3), where the frequency corresponds to the token position ($pos$) and the dimension ($i$). Vaswani et al. hypothesize that this encoding function makes it easy for the model to learn attention weights for relative positions and adapt to longer sequences.

$$\mathrm{PositionalEmbedding}(pos, i) = \begin{cases} \sin\left(pos / 10000^{2i/d_{model}}\right) & \text{if } i < \frac{d_{model}}{2} \\ \cos\left(pos / 10000^{2i/d_{model}}\right) & \text{if } i \ge \frac{d_{model}}{2} \end{cases} \qquad (3)$$
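Equation 3 can be implemented compactly as follows; this is a sketch that follows the half-split sine/cosine form written above (the maximum sequence length of 2000 is an arbitrary placeholder):

```python
import torch

def sinusoidal_positional_encoding(max_len, d_model):
    """Equation 3: sine for dimensions i < d_model / 2, cosine for i >= d_model / 2."""
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)   # (max_len, 1)
    i = torch.arange(d_model, dtype=torch.float32)                   # (d_model,)
    angle = pos / torch.pow(10000.0, 2.0 * i / d_model)              # (max_len, d_model)
    return torch.where(i < d_model // 2, torch.sin(angle), torch.cos(angle))

pe = sinusoidal_positional_encoding(max_len=2000, d_model=512)
# Typical usage: x = token_or_feature_embedding + pe[: x.size(1)]
```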
3.1.3 Proposal: Bayesian, Position-wise Feed Forward Layer

Our primary proposal is the Bayesian, Position-wise Feed Forward Layer (Figure 1). Rather than using two linear layers with dropout, we substitute the input layer with a Bayesian Linear Layer supplemented with the Local Reparameterization Trick (Bayes Linear LRT) (Kingma et al., 2015) and remove the dropout. Other approaches to producing Bayesian components rely on sampling the same input sequence multiple times in a Monte-Carlo process in order to define an average gradient for modeling the intractable posterior (Blundell et al., 2015). In addition to being computationally expensive, this can also lead to high variance in the gradients, increasing training time. The Local Reparameterization Trick addresses this by assuming that both the prior and the posterior are Gaussian. Using only a single sample, the KL divergence between the estimators can be solved for explicitly. This new estimator is efficient (it has lower computational complexity) and reduces the variance in the gradient.

For each Bayesian layer in Figure 1, let $d_{in}$, $d_{out}$ be the input and output dimensions of the layer, respectively. We define matrices $W_\mu \in \mathbb{R}^{d_{out} \times d_{in}}$ and $W_\rho \in \mathbb{R}^{d_{out} \times d_{in}}$ representing the mean and variance scalar for each weight in the network. Similarly, we have bias vectors for the outputs of $W_\mu$ and $W_\rho$ defined as $b_\mu \in \mathbb{R}^{d_{out}}$ and $b_\rho \in \mathbb{R}^{d_{out}}$, respectively. Finally, at each iteration we sample a term from a standard Gaussian, $\epsilon \sim N(0, 1)$.
In order to calculate the output of the layer, we define a function for explicitly calculating the KL divergence when the prior and posterior are both Gaussian (Equation 4). The prior is represented by $p$ and the posterior by $q$; the sum is taken over all elements of the input matrices.

$$\mathrm{KLD}(\mu_p, \sigma_p, \mu_q, \sigma_q) = \frac{1}{2} \sum \left( 2\log\left(\frac{\sigma_p}{\sigma_q}\right) - 1 + \left(\frac{\sigma_q}{\sigma_p}\right)^{2} + \left(\frac{\mu_p - \mu_q}{\sigma_p}\right)^{2} \right) \qquad (4)$$

Figure 1: Bayesian, Position-wise Feed Forward Layer diagram (Bayes Linear LRT, followed by a GELU activation and a Linear layer).

In the forward pass, we first use our variance parameter $\rho$ to estimate the standard deviation of the weights and biases (Equation 5). The same applies for the bias vector $b$ (i.e., we arrive at $b_\sigma$ using $b_\rho$).

$$W_\sigma = \log\left(1 + e^{W_\rho}\right) \qquad (5)$$

Next, we sample from our Gaussian and introduce variance in the weights and biases for the input sequence $X$ (Equations 6 and 7).

$$W_{out} = XW_\mu^{T} + \sqrt{X^{2}\,(W_\sigma^{2})^{T}} \ast \epsilon \qquad (6)$$

$$b_{out} = b_\mu + b_\sigma \ast \epsilon \qquad (7)$$

Then we calculate the KL divergence between our estimated posteriors and true priors for the weights and biases: $W_{KL} = \mathrm{KLD}(0, 1, W_\mu, W_\sigma)$ and $b_{KL} = \mathrm{KLD}(0, 0.1, b_\mu, b_\sigma)$. Finally, the forward computation sets a global KL divergence term ($KL = W_{KL} + b_{KL}$) and returns $W_{out} + b_{out}$. The KL divergence term is used in the joint loss function for tuning our variational posterior.
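To make the forward pass above concrete, the following is a minimal PyTorch sketch of a Bayes Linear LRT layer and the position-wise feed-forward block of Figure 1. It illustrates the mechanics of Equations 4 to 7 rather than reproducing the exact implementation; the initialization constants are assumptions, while the prior scales (1 for weights, 0.1 for biases) follow the $W_{KL}$ and $b_{KL}$ terms above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kld(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL divergence between Gaussians, summed over all elements (Equation 4)."""
    return 0.5 * torch.sum(
        2.0 * torch.log(sigma_p / sigma_q)
        - 1.0
        + (sigma_q / sigma_p) ** 2
        + ((mu_p - mu_q) / sigma_p) ** 2
    )


class BayesLinearLRT(nn.Module):
    """Linear layer whose output is sampled via the Local Reparameterization Trick."""

    def __init__(self, d_in, d_out, prior_sigma_w=1.0, prior_sigma_b=0.1):
        super().__init__()
        self.w_mu = nn.Parameter(torch.empty(d_out, d_in).normal_(0.0, 0.1))
        self.w_rho = nn.Parameter(torch.full((d_out, d_in), -5.0))  # softplus(-5) gives a small sigma
        self.b_mu = nn.Parameter(torch.zeros(d_out))
        self.b_rho = nn.Parameter(torch.full((d_out,), -5.0))
        self.prior_sigma_w = prior_sigma_w
        self.prior_sigma_b = prior_sigma_b
        self.kl = torch.tensor(0.0)

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)  # Equation 5: sigma = log(1 + exp(rho))
        b_sigma = F.softplus(self.b_rho)
        # Local Reparameterization Trick: sample the activations, not the weights (Equations 6-7).
        act_mu = F.linear(x, self.w_mu) + self.b_mu
        act_var = F.linear(x ** 2, w_sigma ** 2) + b_sigma ** 2
        out = act_mu + torch.sqrt(act_var + 1e-8) * torch.randn_like(act_mu)
        # KL terms against N(0, 1) for weights and N(0, 0.1) for biases, as in W_KL and b_KL above.
        self.kl = (
            gaussian_kld(torch.zeros_like(self.w_mu),
                         torch.full_like(self.w_mu, self.prior_sigma_w),
                         self.w_mu, w_sigma)
            + gaussian_kld(torch.zeros_like(self.b_mu),
                           torch.full_like(self.b_mu, self.prior_sigma_b),
                           self.b_mu, b_sigma)
        )
        return out


class BayesianPositionwiseFeedForward(nn.Module):
    """Figure 1: Bayes Linear LRT -> GELU activation -> ordinary Linear, with no dropout."""

    def __init__(self, d_model=512, d_ff=2148):
        super().__init__()
        self.bayes_in = BayesLinearLRT(d_model, d_ff)
        self.linear_out = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.linear_out(F.gelu(self.bayes_in(x)))
```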
3.1.4 Encoder Feature Extraction

Although the original Transformer architecture does not involve any convolutions, recent work in the image recognition domain (Simonyan & Zisserman, 2014) has proven useful for sequence-to-sequence ASR tasks (Hori et al., 2017). The modified VGG network from (Hori et al., 2017) is used in the Encoder to further enhance the input feature set, drawing ideas from unsupervised speech representation tasks as seen in (Chung et al., 2019), (Lee et al., 2009), and (Mohamed et al., 2012). The output from this initial convolutional layer is passed to the encoder layers in the final network.

3.1.5 BayesSpeech Model

Putting this together, we arrive at our final model architecture (Figure 2). The model passes the input through an Encoder (Figure 2a) and then passes the encoder output through a Decoder (Figure 2b). We use 12 encoder block layers ($d_e = 12$) and 6 decoder block layers ($d_d = 6$). These 18 inner layers each contain a Multi-Head Attention block as well as a Bayesian Position-wise Feed Forward block. The model has a dimension of 512 and a feed-forward dimension of 2148.
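As an illustration of how these pieces might compose (a sketch only; the number of attention heads and the pre-norm residual arrangement are assumptions not stated above), one encoder block could reuse the BayesianPositionwiseFeedForward module sketched in Section 3.1.3 together with standard multi-head attention:

```python
import torch
import torch.nn as nn

class BayesSpeechEncoderLayer(nn.Module):
    """One of the d_e = 12 encoder blocks: Multi-Head Attention followed by the
    Bayesian Position-wise Feed Forward block, with residual connections and layer norm."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2148):
        super().__init__()
        self.norm_attn = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_ff = nn.LayerNorm(d_model)
        self.ff = BayesianPositionwiseFeedForward(d_model, d_ff)  # sketched in Section 3.1.3

    def forward(self, x, key_padding_mask=None):
        h = self.norm_attn(x)
        attn_out, _ = self.attn(h, h, h, key_padding_mask=key_padding_mask)
        x = x + attn_out
        return x + self.ff(self.norm_ff(x))

encoder_layers = nn.ModuleList([BayesSpeechEncoderLayer() for _ in range(12)])  # d_e = 12
# A decoder stack of 6 analogous blocks (d_d = 6) adds cross-attention over the encoder output.
```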
Figure 2: BayesSpeech Encoder and Decoder diagrams. (a) BayesSpeech Encoder; (b) BayesSpeech Decoder.

3.2 Model Training

In order to train our Transformer model, we utilize a variation of the Bayes by Backprop algorithm (Blundell et al., 2015) with a joint Connectionist Temporal Classification and Cross Entropy loss function (Joint CTC, CrossEntropy Loss). The two-stage training is meant to:

1. further tune the sampling mechanics for the variational posterior distribution the weights are drawn from, and
2. learn the temporal alignments and classification loss of the output tokens.

We have found that trying to optimize each component separately leads to over-fitting in one of the domains of this problem. If we choose a large step size and seek to minimize the aggregate KL divergence across the Bayesian layers, we cannot further learn the alignments. If we instead choose a small step size and learn the alignments, we introduce too much randomness in the output for our results to be meaningful. We therefore introduce a scaling function, similar to the one in Bayes by Backprop, for managing the tradeoff over epoch iterations (Minibatch Weighting).

Our model was trained on the LibriSpeech-960 dataset (Panayotov et al., 2015). The utterances in the dataset were converted to Mel spectrogram form with 80 channels, a window width of 20 ms, and a stride of 10 ms.
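This preprocessing can be reproduced with torchaudio roughly as follows (a sketch; LibriSpeech audio is sampled at 16 kHz, so the 20 ms window and 10 ms stride correspond to 320 and 160 samples, and the FFT size and file path are placeholders):

```python
import torchaudio

SAMPLE_RATE = 16_000  # LibriSpeech sampling rate

mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE,
    n_fft=512,                             # FFT size covering the 20 ms window (placeholder choice)
    win_length=int(0.020 * SAMPLE_RATE),   # 20 ms window  -> 320 samples
    hop_length=int(0.010 * SAMPLE_RATE),   # 10 ms stride  -> 160 samples
    n_mels=80,                             # 80 mel channels, as described above
)

waveform, sr = torchaudio.load("path/to/utterance.flac")  # hypothetical path
features = mel_transform(waveform)                         # (channels, 80, n_frames)
```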
3.2.1 Tuning the Variational Posterior

The Bayesian part of our model tries to fit a variational posterior distribution ($q_\theta$) to a true, intractable posterior ($p$) for each of the weights in the network. In order to do so, we reduce the problem of fitting $q_\theta$ to a Minimum Description Length (MDL) problem (Hinton & van Camp, 1993a; Rissanen, 1978; Hinton & van Camp, 1993b). While we introduced the explicit calculation of the Kullback-Leibler divergence between our variational posterior and true prior above ($D_{KL}(q_\theta \| p)$), it is important to conceptualize the divergence as the MDL problem. The Minimum Description Length principle is that the best model for a given dataset balances the tradeoff between describing the model and describing the misfit between the model and the data (Hinton & van Camp, 1993b). The KL divergence criterion we use has the goal of keeping weights simple by penalizing the amount of information they contain. Ultimately, this methodology leads to a better separation between prediction accuracy and model complexity and is explicitly differentiable (Graves, 2011).

The variational loss function used has two parts:

1. Error loss: the expected value of the negative log probability in samples from $q_\theta(\beta)$ (where $\beta$ are the model's parameters).
2. Complexity loss: the KL divergence between the tractable, variational posterior and the parameterized prior, $D_{KL}(q_\theta(\beta) \| p_\alpha)$.

In each batch, we seek to gently tune our model's variational posterior ($q_\theta$) to continue random sampling but isolate different weights that have different levels of kurtosis. Due to the minibatch weighting, discussed in a later section, we see a consistent decline in the joint loss value dominated by the KL divergence term (blue, Figure 3b).
Blundell et al. also show that using this relative kurtosis can create thinner models with an explicit scheme for weight pruning: weights that are more leptokurtic are kept while platykurtic ones are discarded. While this is beyond the scope of this paper, it would present an interesting future research case for the model presented here.

Figure 3: Loss functions over training iterations. (a) CTC loss over training iterations; (b) joint (blue) and scaled (red) loss over time.

3.2.2 Joint CTC, CrossEntropy Loss

In order to penalize the model for the alignment of the input sequence to the output tokens, we utilize a joint Connectionist Temporal Classification (Graves et al., 2006) and Cross Entropy loss function.
The goal of this two-term loss function is to manage a gradient through the alignment of tokens in the feature input (CTC) as well as the actual classification loss of the aligned output against the true tokens. The loss function weights the two as $L(X) = 0.3 \cdot \mathrm{CTC}(X) + 0.7 \cdot \mathrm{CE}(X)$. We do this in order to help smooth out the gradient while maintaining the proper loss to back-propagate through the network. Due to the adversarial nature of the Bayesian outputs, we find that this joint loss descends rapidly and then continues to descend without adjustment to the original learning rate (Figure 3a). In our training, we held the learning rate fixed at $10^{-6}$.

3.2.3 Minibatch Weighting

Blundell et al. found that earlier epochs have a greater importance on tuning of the variational posterior than later ones. We adopt a similar methodology, weighting the KL divergence term according to the epoch ($e$) and the number of epochs ($n_e$) (Equation 8).

$$\mathrm{MinibatchWeight}(e, n_e) = \frac{2^{n_e - e}}{2^{n_e} - e} \qquad (8)$$

To better aid training over time, we choose an epoch indexer where the epoch index is integer-divided by 10. When the training loop runs for multiple hours, this helps keep the KL divergence more heavily weighted at first. We then weight the KL divergence term by the minibatch weight term (Equation 9). The $KL_{div}$ term is the sum of all KL divergences over the Bayesian layers.

$$L(X, e, n_e) = \mathrm{MinibatchWeight}(e, n_e) \cdot KL_{div} + 0.3 \cdot \mathrm{CTC}(X) + 0.7 \cdot \mathrm{CE}(X) \qquad (9)$$
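Putting Equations 8 and 9 together, a training step could compute the objective as sketched below (illustrative; it assumes the model exposes the summed KL term of its Bayesian layers, and it reuses the CTC/CE criteria from the Section 2.1.1 sketch):

```python
import torch
import torch.nn as nn

ctc_criterion = nn.CTCLoss(blank=0, zero_infinity=True)
ce_criterion = nn.CrossEntropyLoss()


def minibatch_weight(epoch, num_epochs):
    """Equation 8; the epoch index is integer-divided by 10 as described in the text.
    num_epochs is left as the raw epoch count (an assumption)."""
    e = epoch // 10
    return (2 ** (num_epochs - e)) / (2 ** num_epochs - e)


def bayes_speech_loss(model_kl, log_probs, targets, input_lengths, target_lengths,
                      decoder_logits, epoch, num_epochs):
    """Equation 9: minibatch-weighted KL divergence + 0.3 * CTC + 0.7 * CE."""
    ctc_loss = ctc_criterion(log_probs, targets, input_lengths, target_lengths)
    ce_loss = ce_criterion(decoder_logits.reshape(-1, decoder_logits.size(-1)),
                           targets.reshape(-1))
    return minibatch_weight(epoch, num_epochs) * model_kl + 0.3 * ctc_loss + 0.7 * ce_loss


# model_kl is the sum of the per-layer KL terms, e.g.:
# model_kl = sum(m.kl for m in model.modules() if isinstance(m, BayesLinearLRT))
```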
4. Results

We split our model into two variants: one that outputs a character sequence and one that outputs tokenized word-pieces from a SentencePiece language model with a vocabulary size of 1000 (Kudo & Richardson, 2018). We trained each model variant on a single A100 GPU through Google Colab for 8 hours with a batch size of 24. As shown in Table 1, our model performs nearly as well as the state-of-the-art ASR models. BayesSpeech reaches respectable Word Error Rates with and without a language model on the LibriSpeech dataset, and it was trained for just 8 hours on a single GPU.

Model                                 WER (w/o LM)              WER (w/ LM)
                                      test-clean   test-other   test-clean   test-other
LAS (Chan et al., 2016)               2.89%        6.98%        2.33%        5.17%
Transformer (Vaswani et al., 2017)    2.4%         5.6%         2.0%         4.6%
Conformer (Gulati et al., 2020)       2.1%         5.0%         2.0%         4.3%
BayesSpeech                           4.5%         6.5%         4.0%         5.7%

Table 1: WER results on the LibriSpeech dataset.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='1% 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='0% 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='0% 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='3% BayesSpeech 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='5% 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='5% 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='0% 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content='7% Table 1: WER Results on LibriSpeech dataset a single GPU.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' For instance, the Conformer model was trained over the course of multiple days on multiple GPUs (8).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' During evaluation, we use beam search with a beam width of 10 over the set of possible decoded sequences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' This appears to be the standard decoding methodology giving the probabilistic output of the model’s decoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' When the input sequence passes through our Bayesian feed forward layers, we believe this creates an adversarial input stream.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' Rather than artificially augment the input Mel Spec- trogram inputs (Park et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=', 2019), these layers produce a probabilistic feature encoding of the input.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' We believe that this general adversarial training technique allows our model to converge faster with less training time and resources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' The randomness introduced in the model also helps better contextualize outputs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/PdFIT4oBgHgl3EQfeytv/content/2301.11276v1.pdf'} +page_content=' As we continue to tune the variational poste- rior over the weights, I imagine we would see a dramatic increase in performance.' 
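To connect the pieces described above, the following is a minimal, hypothetical PyTorch-style sketch of how the joint objective in Equation 9 could be assembled. The kl_divergence() method on the Bayesian layers, the padding index, the tensor shapes, and the reuse of a single targets tensor for both terms are illustrative assumptions rather than details of our implementation.

import torch.nn as nn

ctc_criterion = nn.CTCLoss(blank=0, zero_infinity=True)  # alignment term, CTC(X)
ce_criterion = nn.CrossEntropyLoss(ignore_index=0)        # classification term, CE(X); 0 = assumed pad id

def joint_loss(model, enc_log_probs, enc_lens, dec_logits, targets, target_lens,
               epoch, num_epochs):
    # Sum of KL divergences over all Bayesian layers (assumed layer API).
    kl_div = sum(layer.kl_divergence() for layer in model.bayesian_layers)
    # enc_log_probs: (time, batch, vocab) log-probabilities from the encoder head.
    ctc = ctc_criterion(enc_log_probs, targets, enc_lens, target_lens)
    # dec_logits: (batch, time, vocab) decoder outputs; CrossEntropyLoss expects (batch, vocab, time).
    ce = ce_criterion(dec_logits.transpose(1, 2), targets)
    # Equation 9: minibatch-weighted KL plus the 0.3/0.7 CTC/CE mixture.
    return minibatch_weight(epoch, num_epochs) * kl_div + 0.3 * ctc + 0.7 * ce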
Because our model yielded reasonable results after 8 hours, we stopped training; future work may investigate whether additional training could further improve performance. There may also be a benefit to equally weighting the variational component and the CTC component of the global loss function. Similarly, future work could explore systematic model pruning as presented in Blundell et al. (2015).

5. Conclusion

Currently, best-in-class Automatic Speech Recognition solutions require multiple days of training on multiple GPUs. These models also take a frequentist approach to weight training. In this work, we present BayesSpeech: a Bayesian Transformer Network that learns the intractable posterior distribution from which the weights of its feed-forward layers are drawn. We believe this probabilistic encoding of the input feature set creates a better representation of the input Mel Spectrogram. This mechanism, in conjunction with a joint loss function, yields near state-of-the-art results on the LibriSpeech dataset.

References
Baevski, A., Zhou, H., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. CoRR, abs/2006.11477. Retrieved from https://arxiv.org/abs/2006.11477

Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015). Weight uncertainty in neural networks. arXiv. Retrieved from https://arxiv.org/abs/1505.05424 doi: 10.48550/ARXIV.1505.05424

Chan, W., Jaitly, N., Le, Q., & Vinyals, O. (2016). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4960-4964). doi: 10.1109/ICASSP.2016.7472621

Chorowski, J., Bahdanau, D., Serdyuk, D., Cho, K., & Bengio, Y. (2015). Attention-based models for speech recognition. CoRR, abs/1506.07503. Retrieved from http://arxiv.org/abs/1506.07503

Chung, Y.-A., Hsu, W.-N., Tang, H., & Glass, J. (2019). An unsupervised autoregressive model for speech representation learning. In Proc. Interspeech 2019 (pp. 146-150). doi: 10.21437/Interspeech.2019-1473

Conneau, A., Baevski, A., Collobert, R., Mohamed, A., & Auli, M. (2020). Unsupervised cross-lingual representation learning for speech recognition. CoRR, abs/2006.13979. Retrieved from https://arxiv.org/abs/2006.13979

Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Retrieved from http://arxiv.org/abs/1810.04805

Dong, L., Xu, S., & Xu, B. (2018). Speech-Transformer: A no-recurrence sequence-to-sequence model for speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5884-5888). doi: 10.1109/ICASSP.2018.8462506

Fan, X., Zhang, S., Chen, B., & Zhou, M. (2020). Bayesian attention modules. arXiv. Retrieved from https://arxiv.org/abs/2010.10604 doi: 10.48550/ARXIV.2010.10604

Graves, A. (2011). Practical variational inference for neural networks. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in Neural Information Processing Systems (Vol. 24). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper/2011/file/7eb3c8be3d411e8ebfab08eba5f49632-Paper.pdf

Graves, A., Fernández, S., Gomez, F., & Schmidhuber, J. (2006). Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning (pp. 369-376). New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/1143844.1143891 doi: 10.1145/1143844.1143891

Gulati, A., et al. (2020). Conformer: Convolution-augmented transformer for speech recognition.

Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Retrieved from http://arxiv.org/abs/1207.0580

Hinton, G. E., & van Camp, D. (1993a). Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory (pp. 5-13). New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/168304.168306 doi: 10.1145/168304.168306

Hinton, G. E., & van Camp, D. (1993b). Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory (pp. 5-13). New York, NY, USA: Association for Computing Machinery. Retrieved from https://doi.org/10.1145/168304.168306 doi: 10.1145/168304.168306

Hori, T., Watanabe, S., Zhang, Y., & Chan, W. (2017). Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM. In Interspeech.

Kingma, D. P., Salimans, T., & Welling, M. (2015). Variational dropout and the local reparameterization trick. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 28). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper/2015/file/bc7316929fe1545bf0b98d114ee3ecb8-Paper.pdf

Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv. Retrieved from https://arxiv.org/abs/1312.6114 doi: 10.48550/ARXIV.1312.6114

Kudo, T., & Richardson, J. (2018). SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. CoRR, abs/1808.06226. Retrieved from http://arxiv.org/abs/1808.06226

Lee, H., Pham, P., Largman, Y., & Ng, A. (2009). Unsupervised feature learning for audio classification using convolutional deep belief networks. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, & A. Culotta (Eds.), Advances in Neural Information Processing Systems (Vol. 22). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper/2009/file/a113c1ecd3cace2237256f4c712f61b5-Paper.pdf

Li, J., Lavrukhin, V., Ginsburg, B., Leary, R., Kuchaiev, O., Cohen, J. M., ... Gadde, R. T. (2019). Jasper: An end-to-end convolutional neural acoustic model. arXiv. Retrieved from https://arxiv.org/abs/1904.03288 doi: 10.48550/ARXIV.1904.03288

Liu, A. H., Hsu, W.-N., Auli, M., & Baevski, A. (2022). Towards end-to-end unsupervised speech recognition. arXiv. Retrieved from https://arxiv.org/abs/2204.02492 doi: 10.48550/ARXIV.2204.02492

Maas, A. L. (2013). Rectifier nonlinearities improve neural network acoustic models.

Mohamed, A.-r., Dahl, G. E., & Hinton, G. (2012). Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, 20(1), 14-22. doi: 10.1109/TASL.2011.2109382

Panayotov, V., Chen, G., Povey, D., & Khudanpur, S. (2015). Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5206-5210). doi: 10.1109/ICASSP.2015.7178964

Park, D. S., Chan, W., Zhang, Y., Chiu, C.-C., Zoph, B., Cubuk, E. D., & Le, Q. V. (2019, September). SpecAugment: A simple data augmentation method for automatic speech recognition. In Interspeech 2019. ISCA. Retrieved from https://doi.org/10.21437%2Finterspeech.2019-2680 doi: 10.21437/interspeech.2019-2680

Rissanen, J. (1978). Modeling by shortest data description. Automatica, 14(5), 465-471. Retrieved from https://www.sciencedirect.com/science/article/pii/0005109878900055 doi: https://doi.org/10.1016/0005-1098(78)90005-5

Schneider, S., Baevski, A., Collobert, R., & Auli, M. (2019). wav2vec: Unsupervised pre-training for speech recognition. CoRR, abs/1904.05862. Retrieved from http://arxiv.org/abs/1904.05862

Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv. Retrieved from https://arxiv.org/abs/1409.1556 doi: 10.48550/ARXIV.1409.1556

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... Polosukhin, I. (2017). Attention is all you need. In I. Guyon et al. (Eds.), Advances in Neural Information Processing Systems (Vol. 30). Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf

Zhang, S., Fan, X., Chen, B., & Zhou, M. (2021). Bayesian attention belief networks. arXiv. Retrieved from https://arxiv.org/abs/2106.05251 doi: 10.48550/ARXIV.2106.05251