Dataset fields (name: type, observed string-length range):
gem_id: string, length 37 to 41
paper_id: string, length 3 to 4
paper_title: string, length 19 to 183
paper_abstract: string, length 168 to 1.38k
paper_content: sequence
paper_headers: sequence
slide_id: string, length 37 to 41
slide_title: string, length 2 to 85
slide_content_text: string, length 11 to 2.55k
target: string, length 11 to 2.55k
references: list
GEM-SciDuet-train-108#paper-1285#slide-0
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
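To make the abstract's weight-sharing idea concrete, here is a minimal, framework-free Python sketch (our illustration, not the authors' released implementation; `EncoderLayer` and `build_encoders` are hypothetical names). It builds two four-layer encoders whose top layer objects are shared, so the high-level layers are parameter-tied while the lower layers stay language-specific:

```python
class EncoderLayer:
    """Stand-in for one Transformer encoder layer (self-attention + feed-forward)."""
    def __call__(self, x):
        return x  # identity placeholder; a real layer would transform x

def build_encoders(num_layers=4, num_shared=1):
    # The top `num_shared` layers are the *same* objects in both encoders,
    # so their parameters are tied; the lower layers remain language-specific.
    shared_top = [EncoderLayer() for _ in range(num_shared)]
    enc_src = [EncoderLayer() for _ in range(num_layers - num_shared)] + shared_top
    enc_tgt = [EncoderLayer() for _ in range(num_layers - num_shared)] + shared_top
    return enc_src, enc_tgt

def encode(encoder, x):
    for layer in encoder:
        x = layer(x)
    return x
```

The paper applies the mirror-image constraint to the two decoders (their first few layers are shared), and its experiments settle on a single shared layer as the best setting.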
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal or-der information for the proposed model." ] }
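Two mechanisms described in the paper content above reduce to short formulas and can be restated as a small sketch. The NumPy code below (our paraphrase, assuming (seq_len, d_model) row-major arrays and the transposed row-vector convention for W1 and W2) builds the forward and backward positional masks of the directional self-attention, Eqs. (1)-(2), and the gated embedding-reinforced encoder output H_r of Eqs. (3)-(4):

```python
import numpy as np

NEG_INF = -1e9  # practical stand-in for the -infinity entries in the masks

def directional_masks(n):
    """Eqs. (1)-(2): entry (i, j) is 0 where i < j (forward mask) or i > j
    (backward mask) and -inf elsewhere, so masked links vanish after softmax."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m_fwd = np.where(i < j, 0.0, NEG_INF)
    m_bwd = np.where(i > j, 0.0, NEG_INF)
    return m_fwd, m_bwd

def embedding_reinforced_output(H, E, W1, W2, b):
    """Eqs. (3)-(4): H is the encoder-stack output, E the fixed cross-lingual
    embeddings, both (seq_len, d_model); returns the gated combination H_r."""
    g = 1.0 / (1.0 + np.exp(-(E @ W1 + H @ W2 + b)))  # sigmoid gate
    return g * H + (1.0 - g) * E
```

Per the text, H_r replaces H as the sequence the decoder attends to, and W1, W2, b are shared by the two encoders.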
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-0
Background
Assumption: different languages can be mapped into one shared-latent space
Assumption: different languages can be mapped into one shared-latent space
[]
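The slide's shared-latent-space assumption is simplest to see at the word level, which is what the paper's word-by-word (WBW) baseline exploits: once the two monolingual embedding sets have been mapped into a common space (the paper uses the public vecmap implementation for this), each source word is replaced by its nearest target-language neighbor. A hedged NumPy sketch of that lookup (the function name and array shapes are our own assumptions):

```python
import numpy as np

def translate_word_by_word(src_ids, src_emb, tgt_emb):
    """For each source word id, return the index of the target word whose
    shared-space embedding has the highest cosine similarity (WBW baseline)."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src[src_ids] @ tgt.T   # (sentence_len, tgt_vocab) cosine similarities
    return sims.argmax(axis=1)    # nearest target-language neighbors
```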
GEM-SciDuet-train-108#paper-1285#slide-1
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
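The experiment above settles on sharing a single layer between the two encoders (and, symmetrically, between the two decoders). As a concrete illustration, the sketch below ties only the top layer of two otherwise independent four-layer Transformer encoders. It is a minimal sketch, not the authors' implementation: the paper's system is written in TensorFlow, whereas PyTorch modules are used here for brevity, and all class and variable names are invented for this example.

```python
# Hedged sketch: two language-specific Transformer encoders that share only
# their top layer, mirroring the best-performing "share one layer" setting.
import torch
import torch.nn as nn

def make_layer(d_model=512, nhead=8):
    return nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048,
                                      dropout=0.1, batch_first=True)

class PartiallySharedEncoder(nn.Module):
    def __init__(self, private_layers, shared_layers):
        super().__init__()
        self.private = nn.ModuleList(private_layers)  # language-specific weights
        self.shared = shared_layers                   # the same ModuleList object in both encoders

    def forward(self, x, pad_mask=None):
        for layer in list(self.private) + list(self.shared):
            x = layer(x, src_key_padding_mask=pad_mask)
        return x

num_layers, num_shared = 4, 1                         # four layers total, top one shared
shared_top = nn.ModuleList([make_layer() for _ in range(num_shared)])
enc_src = PartiallySharedEncoder([make_layer() for _ in range(num_layers - num_shared)], shared_top)
enc_tgt = PartiallySharedEncoder([make_layer() for _ in range(num_layers - num_shared)], shared_top)

# Sanity check: the shared parameters are literally the same tensors.
assert enc_src.shared[0].linear1.weight is enc_tgt.shared[0].linear1.weight
x = torch.randn(2, 7, 512)                            # (batch, length, d_model) dummy batch
print(enc_src(x).shape, enc_tgt(x).shape)
```

The decoders would be tied analogously, except that it is their first layer that is shared rather than their last.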
effectively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on the English-to-French translation task.", "We believe that unsupervised NMT is very promising.", "However, there is still large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised models is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embedding-reinforced encoders, etc.", "Results are reported in table 3.", "We do not test the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in (Artetxe et al., 2017b).", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the shared-latent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional self-attention, we get up to a 0.3 BLEU point decline.", "This indicates that it deserves more effort to investigate the temporal order information in the self-attention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve an improvement of up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain an improvement of up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the cross-domain loss defined by the GANs.", "Conclusion and Future work The models proposed recently for unsupervised NMT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristics of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also introduce the embedding-reinforced encoders, the local GAN and the global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving unsupervised NMT.", "The ablation study shows that each component of our system contributes some improvement to the final translation performance.", "Unsupervised NMT opens exciting opportunities for future research.", "However, there is still large room for improvement compared to supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we plan to make further efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
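For readers who want the training recipe at a glance: the procedure described earlier alternates, in its first stage, one batch of denoising auto-encoding, one batch of back-translation and one batch of the local GAN for each language, evaluates on the development set, and switches to global-GAN fine-tuning once the development score stops improving for ten evaluations. The skeleton below sketches only that schedule; every step function and the fine-tuning budget are hypothetical placeholders rather than the authors' code.

```python
# Hedged sketch of the two-stage schedule only; every *_step and evaluate_dev
# function is a hypothetical placeholder standing in for the actual losses.
def denoising_step(lang): pass           # reconstruct a shuffled monolingual batch
def back_translation_step(lang): pass    # reconstruct a sentence from its own translation
def local_gan_step(lang): pass           # encoder tries to fool the language discriminator
def global_gan_step(direction): pass     # policy-gradient fine-tuning against the CNN discriminator
def evaluate_dev(): return 0.0           # round-trip BLEU on the held-out dev sentences

def train(finetune_steps=10000):         # fine-tuning budget is illustrative
    best, bad_evals = float("-inf"), 0
    # Stage 1: denoising AE + back-translation + local GAN, one batch each per language.
    while bad_evals < 10:                # stop after ten evaluations without improvement
        for lang in ("src", "tgt"):
            denoising_step(lang)
        for lang in ("src", "tgt"):
            back_translation_step(lang)
        for lang in ("src", "tgt"):
            local_gan_step(lang)
        score = evaluate_dev()
        best, bad_evals = (score, 0) if score > best else (best, bad_evals + 1)
    # Stage 2: fine-tune the whole model with the two global GANs.
    for _ in range(finetune_steps):
        global_gan_step("src->tgt")
        global_gan_step("tgt->src")

train()
```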
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-1
Techniques based on
Initialize the model with inferred bilingual dictionary Unsupervised word embedding mapping Learn strong language model Convert Unsupervised setting into a supervised one Constrain the latent representation produced by encoders to a shared space fully-shared encoder fixed mapped embedding GAN
Initialize the model with inferred bilingual dictionary Unsupervised word embedding mapping Learn strong language model Convert Unsupervised setting into a supervised one Constrain the latent representation produced by encoders to a shared space fully-shared encoder fixed mapped embedding GAN
[]
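The word-by-word (WBW) baseline mentioned in the entry above replaces every source word with its nearest neighbour in the mapped cross-lingual embedding space. The sketch below shows that lookup with cosine similarity; the vocabularies and embeddings here are toy values, whereas the real baseline would load word2vec vectors mapped with vecmap.

```python
# Hedged sketch of the word-by-word baseline: nearest-neighbour lookup in a
# shared cross-lingual embedding space. Vocabularies and vectors are toy data.
import numpy as np

src_vocab = ["haus", "katze"]
tgt_vocab = ["house", "cat", "dog"]
src_emb = np.random.randn(len(src_vocab), 8)
tgt_emb = np.random.randn(len(tgt_vocab), 8)

def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

src_n, tgt_n = normalize(src_emb), normalize(tgt_emb)
src_index = {w: i for i, w in enumerate(src_vocab)}

def translate_word_by_word(sentence):
    out = []
    for w in sentence.split():
        if w not in src_index:
            out.append(w)                          # copy unknown words unchanged
            continue
        sims = tgt_n @ src_n[src_index[w]]         # cosine similarity to every target word
        out.append(tgt_vocab[int(sims.argmax())])
    return " ".join(out)

print(translate_word_by_word("katze haus"))
```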
GEM-SciDuet-train-108#paper-1285#slide-2
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
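The directional self-attention used in the encoders (equations 1 and 2 earlier in this entry) adds a forward mask and a backward mask to the attention logits so that one layer attends only to earlier positions and the stacked layer only to later ones. The sketch below implements just the masked scaled dot-product step; keeping the diagonal unmasked is a simplification of the strict inequalities in the equations so that boundary positions still have valid attention targets, and PyTorch is again an illustrative choice.

```python
# Hedged sketch of the forward/backward directional masks: 0 where attention
# is allowed, -inf where it is blocked, added to the logits before softmax.
import torch
import torch.nn.functional as F

def directional_masks(n):
    i = torch.arange(n).unsqueeze(1)      # query positions
    j = torch.arange(n).unsqueeze(0)      # key positions
    # The paper's masks use strict inequalities; the diagonal is kept here so
    # that the first and last positions still attend to something.
    m_fwd = torch.zeros(n, n).masked_fill(j > i, float("-inf"))   # attend to earlier tokens (and self)
    m_bwd = torch.zeros(n, n).masked_fill(j < i, float("-inf"))   # attend to later tokens (and self)
    return m_fwd, m_bwd

def masked_attention(q, k, v, mask):
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5) + mask
    return F.softmax(scores, dim=-1) @ v

n, d = 5, 16
x = torch.randn(n, d)
m_fwd, m_bwd = directional_masks(n)
forward_states = masked_attention(x, x, x, m_fwd)      # forward-directional layer
backward_states = masked_attention(forward_states, forward_states,
                                   forward_states, m_bwd)   # stacked reverse-direction layer
print(forward_states.shape, backward_states.shape)
```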
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
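The embedding-reinforced encoder described earlier in this entry (equations 3 and 4) gates the fixed cross-lingual embeddings E against the encoder stack output H: H_r = g * H + (1 - g) * E with g = sigmoid(W1 E + W2 H + b), where W1, W2 and b are shared by both encoders. A minimal sketch of that gate follows; the two linear maps stand in for W1 and W2 (the bias of the second one plays the role of b), and the dimensions are illustrative.

```python
# Hedged sketch of the embedding-reinforced encoder output:
#   g   = sigmoid(W1 E + W2 H + b)
#   H_r = g * H + (1 - g) * E
# In the paper W1, W2 and b are shared by the source and target encoders.
import torch
import torch.nn as nn

class EmbeddingGate(nn.Module):
    def __init__(self, d_model=512):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_model, bias=False)   # acts on the fixed embeddings E
        self.w2 = nn.Linear(d_model, d_model, bias=True)    # acts on the encoder output H, carries b

    def forward(self, E, H):
        g = torch.sigmoid(self.w1(E) + self.w2(H))
        return g * H + (1.0 - g) * E

gate = EmbeddingGate()
E = torch.randn(2, 7, 512)    # fixed cross-lingual embeddings (kept frozen in the paper)
H = torch.randn(2, 7, 512)    # output of the encoder stack
print(gate(E, H).shape)       # (2, 7, 512): the representation attended by the decoder
```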
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-2
We find
The shared encoder is a bottleneck for unsupervised NMT The shared encoder is weak in keeping the unique and internal characteristics of each language, such as the style, terminology and sentence structure. Since each language has its own characteristics, the source and target language should be encoded and learned independently. Fixed word embedding also weakens the performance (not included in the paper) If you are interested about this part, you can find some discussions in our github code:
The shared encoder is a bottleneck for unsupervised NMT The shared encoder is weak in keeping the unique and internal characteristics of each language, such as the style, terminology and sentence structure. Since each language has its own characteristics, the source and target language should be encoded and learned independently. Fixed word embedding also weakens the performance (not included in the paper) If you are interested about this part, you can find some discussions in our github code:
[]
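The local GAN described in the training section of this entry pits a multilayer perceptron with two hidden layers of size 256 against the two encoders: the discriminator predicts which language an encoding H_r came from (equation 6), and each encoder is updated with the opposite label to fool it (equations 7 and 8). The sketch below shows those losses on mean-pooled encoder states; the pooling and the use of a single logit with binary cross-entropy are illustrative choices not pinned down at this level of detail in the text.

```python
# Hedged sketch of the local GAN: an MLP discriminator over encoder outputs
# (two hidden layers of size 256) plus the adversarial losses for the encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalDiscriminator(nn.Module):
    def __init__(self, d_model=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                    # logit for "source language"
        )

    def forward(self, h):                            # h: (batch, length, d_model)
        return self.net(h.mean(dim=1)).squeeze(-1)   # mean-pool over time (illustrative choice)

disc = LocalDiscriminator()
h_src = torch.randn(4, 9, 512)                       # H_r from the source encoder
h_tgt = torch.randn(4, 9, 512)                       # H_r from the target encoder

# Discriminator loss (eq. 6): predict the true language of each encoding.
d_loss = F.binary_cross_entropy_with_logits(disc(h_src), torch.ones(4)) + \
         F.binary_cross_entropy_with_logits(disc(h_tgt), torch.zeros(4))

# Encoder losses (eqs. 7-8): each encoder is pushed toward the wrong label.
enc_src_loss = F.binary_cross_entropy_with_logits(disc(h_src), torch.zeros(4))
enc_tgt_loss = F.binary_cross_entropy_with_logits(disc(h_tgt), torch.ones(4))
print(float(d_loss), float(enc_src_loss), float(enc_tgt_loss))
```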
GEM-SciDuet-train-108#paper-1285#slide-3
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
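The directional masks in equations (1) and (2) of the paper text above are straightforward to build. Below is a minimal NumPy sketch; the function and variable names are illustrative and not taken from the authors' released code. Each mask is added to the raw attention logits before the softmax, so the -inf entries receive zero attention weight.

    import numpy as np

    def directional_masks(seq_len):
        # Forward mask M_f: entry (i, j) is 0 when i < j and -inf otherwise (equation 1).
        # Backward mask M_b: entry (i, j) is 0 when i > j and -inf otherwise (equation 2).
        i = np.arange(seq_len)[:, None]
        j = np.arange(seq_len)[None, :]
        m_f = np.where(i < j, 0.0, -np.inf)
        m_b = np.where(i > j, 0.0, -np.inf)
        return m_f, m_b

    # Usage: masked_logits = raw_logits + m_f, applied before the softmax, so each
    # position attends in only one temporal direction.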
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-3
The proposed model
The local GAN is utilized to constrain the source and target latent representations to have the same distribution (the embedding-reinforced encoder is also designed for this purpose; see our paper for details). The global GAN is utilized to fine-tune the whole model.
The local GAN is utilized to constrain the source and target latent representations to have the same distribution (the embedding-reinforced encoder is also designed for this purpose; see our paper for details). The global GAN is utilized to fine-tune the whole model.
[]
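Equation (5) in the paper text above bounds how far each token may move under the shuffling noise used for denoising auto-encoding. One simple way to draw such a locally bounded permutation is to add bounded uniform noise to the token positions and sort; this is an illustrative sketch, not necessarily the authors' exact implementation, though it uses the k = 2 and s = 100000 values stated in the text.

    import numpy as np

    def noisy_permutation(n, steps, k=2, s=100000):
        # delta is the displacement bound from equation (5): k * ([steps / s] + 1), capped at n.
        delta = min(k * (steps // s + 1), n)
        # Adding uniform noise in [0, delta) to each index and sorting keeps every token
        # within delta positions of where it started, so |eps(i) - i| <= delta holds.
        q = np.arange(n) + np.random.uniform(0.0, delta, size=n)
        return np.argsort(q)

    # Usage: shuffled = [tokens[p] for p in noisy_permutation(len(tokens), steps)]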
GEM-SciDuet-train-108#paper-1285#slide-4
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-4
Experiment setup
Note: The monolingual data is built by selecting the front half of the source-language data and the back half of the target-language data. 4 self-attention layers for the encoder and decoder; applying Word2vec to pre-train the word embeddings; utilizing Vecmap to map these embeddings to a shared-latent space
Note: The monolingual data is built by selecting the front half of the source-language data and the back half of the target-language data. 4 self-attention layers for the encoder and decoder; applying Word2vec to pre-train the word embeddings; utilizing Vecmap to map these embeddings to a shared-latent space
[]
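The weight-sharing constraint discussed in the paper text above (sharing the last layers of the two encoders and the first layers of the two decoders, with one shared layer giving the best results in the reported experiments) amounts to parameter tying. The sketch below illustrates the idea with placeholder layer objects rather than the authors' Transformer layers; the class and names are invented for illustration, and the decoders would mirror this with their first n_shared layers tied.

    class Layer:
        """Stand-in for one Transformer encoder layer; holds that layer's parameters."""
        def __init__(self, name):
            self.name = name

    def build_encoders(n_layers=4, n_shared=1):
        # Language-specific lower layers, one private stack per language.
        enc_s = [Layer("src_private_%d" % i) for i in range(n_layers - n_shared)]
        enc_t = [Layer("tgt_private_%d" % i) for i in range(n_layers - n_shared)]
        # The top n_shared layers are the very same objects, so their weights are tied.
        shared = [Layer("shared_%d" % i) for i in range(n_shared)]
        return enc_s + shared, enc_t + shared

    enc_s, enc_t = build_encoders()
    assert enc_s[-1] is enc_t[-1]  # the top layer is shared between the two encoders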
GEM-SciDuet-train-108#paper-1285#slide-5
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017), we build the encoders in our model on the directional self-attention, which utilizes positional masks to encode temporal order information into the attention output.", "More concretely, two positional masks, namely the forward mask M^f and the backward mask M^b, are calculated as: M^f_{ij} = 0 if i < j, and −∞ otherwise (1); M^b_{ij} = 0 if i > j, and −∞ otherwise (2).", "With the forward mask M^f, a later token only makes attention connections to the earlier tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016), we utilize a self-attention network to process the input sequence in the forward direction.", "The output of this layer is taken by an upper self-attention network as input and processed in the reverse direction.", "Weight sharing.", "Based on the shared-latent space assumption, we apply the weight-sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of Enc_s and Enc_t, which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of Dec_s and Dec_t, which are expected to decode the high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016), which use a fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding-reinforced encoder.", "We use pre-trained cross-lingual embeddings in the encoders that are kept fixed during training.", "The fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e_1, ..., e_t} and the initial output sequence of the encoder stack H = {h_1, ..., h_t}, we compute H_r as: H_r = g H + (1 − g) E (3), where H_r is the final output sequence of the encoder which will be attended by the decoder (in the Transformer, H is the final output of the encoder), and g is a gate unit computed as: g = σ(W_1 E + W_2 H + b) (4), where W_1, W_2 and b are trainable parameters shared by the two encoders.", "The motivation behind this is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the shared-latent space.", "Additionally, from the point of view of multi-channel encoders (Xiong et al., 2017), providing encoding components with different levels of composition enables the decoder to take pieces of the source sentence at varying composition levels suiting its own linguistic structure.", "3.2 Unsupervised Training.", "Based on the architecture proposed above, we train the NMT model with the monolingual corpora only, using the following four strategies.", "Denoising auto-encoding.", "Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved."
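For concreteness, below is a minimal NumPy sketch of the positional masks in equations (1)-(2) and the gated embedding-reinforced output of equations (3)-(4). It is only an illustrative reading of the formulas, not the authors' TensorFlow implementation; the matrix shapes (sequence length by model dimension) and the element-wise application of the gate are assumptions on our part.

    import numpy as np

    NEG_INF = -1e9  # finite stand-in for the -infinity entries in equations (1)-(2)

    def directional_masks(n):
        # Equation (1): M^f_ij = 0 if i < j, else -inf; equation (2) is the mirror image.
        i = np.arange(n)[:, None]
        j = np.arange(n)[None, :]
        m_forward = np.where(i < j, 0.0, NEG_INF)
        m_backward = np.where(i > j, 0.0, NEG_INF)
        return m_forward, m_backward

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def embedding_reinforced_output(E, H, W1, W2, b):
        # Equations (3)-(4): g = sigmoid(W1 E + W2 H + b), then H_r = g * H + (1 - g) * E,
        # with the gate applied element-wise; E and H are (length, d) matrices here.
        g = sigmoid(E @ W1 + H @ W2 + b)
        return g * H + (1.0 - g) * E

In the directional encoder described above, the forward and backward masks would be added to the attention logits of the lower and upper self-attention layers, respectively, before the softmax.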
, "To address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b).", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k(⌊steps/s⌋ + 1), n), ∀i ∈ {1, ..., n} (5), where n is the length of the input sentence, steps is the number of global steps for which the model has been updated, and k and s are tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation.", "In spite of denoising auto-encoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross-language training, we utilize the back-translation approach in our unsupervised training procedure.", "Back-translation has shown great effectiveness in improving NMT models with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016).", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language.", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN.", "Although the weight-sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multi-layer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H_r calculated as in equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following cross-entropy loss: L_{D_l}(θ_{D_l}) = −E_{x∈x_s}[log p(f = s | Enc_s(x))] − E_{x∈x_t}[log p(f = t | Enc_t(x))] (6), where θ_{D_l} represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L_{Enc_s}(θ_{Enc_s}) = −E_{x∈x_s}[log p(f = t | Enc_s(x))] (7) and L_{Enc_t}(θ_{Enc_t}) = −E_{x∈x_t}[log p(f = s | Enc_t(x))] (8), where θ_{Enc_s} and θ_{Enc_t} are the parameters of the two encoders.", "Global GAN.", "We apply the global GANs to fine-tune the whole model so that the model is able to generate sentences indistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs, which update the parameters of the encoders locally, the global GANs are utilized to update the whole set of parameters of the proposed model, including the parameters of the encoders and decoders.", "The proposed model has two global GANs: GAN_g1 and GAN_g2.", "In GAN_g1, Enc_t and Dec_s act as the generator, which generates the sentence x̃_t from x_t.", "(The x̃_t here is the x̃_t^(Enc_t−Dec_s) in figure 1; we omit the superscript for simplicity.)", "D_g1, implemented based on a CNN, assesses whether the generated sentence x̃_t is the true target-language sentence or the generated sentence."
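A small, self-contained sketch of the noise function implied by equation (5) is given below, assuming the bounded permutation is drawn by jittering token positions (a common way to realise such a shuffle); the function name and the jitter-based sampling are illustrative choices, not necessarily how the authors implement it.

    import random

    def shuffle_with_window(tokens, steps, k=2, s=100000):
        # Equation (5): each token may be displaced by at most
        # min(k * (steps // s + 1), n) positions, where n is the sentence length.
        n = len(tokens)
        window = min(k * (steps // s + 1), n)
        # Sorting by (position + uniform noise in [0, window]) keeps every token within
        # roughly `window` positions of where it started, approximating the constraint.
        keys = [i + random.uniform(0, window) for i in range(n)]
        order = sorted(range(n), key=keys.__getitem__)
        return [tokens[i] for i in order]

With k = 2 and s = 100000 as in the text, the allowed displacement grows slowly as training proceeds, so the denoising task becomes gradually harder.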
, "The global discriminator aims to distinguish between the true sentences and the generated sentences, and it is trained to minimize its classification error rate.", "During training, D_g1 feeds back its assessment to fine-tune the encoder Enc_t and the decoder Dec_s.", "Since machine translation is a sequence generation problem, following previous work, we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN_g2 (the details about the architecture of the global discriminator and the training procedure of the global GANs can be found in the appendix).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, back-translation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising auto-encoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine-tune the proposed model with the global GANs.", "4 Experiments and Results.", "We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks (we do not conduct experiments on English-to-Chinese translation because we do not have public test sets for English-to-Chinese).", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "4.1 Data Sets and Preprocessing.", "In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which have been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a).", "WMT14 English-French.", "Similar to previous work, we use the full training set of 36M sentence pairs, lower-case the sentences and remove those longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b), which yields an English vocabulary of about 32000 tokens, and a French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German.", "We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training sets of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English.", "For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora.", "Since the data set is not big enough, we just build the monolingual data sets by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our training procedure (see Section 3.2)."
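The two-stage procedure described above can be summarised as a small scheduling skeleton. This is only a sketch of the first stage as we read it: the step functions, the evaluation interval and the patience value are placeholders supplied by the caller, not details given in the text (the text only specifies stopping once the development set stops improving, and the model-selection rule of ten evaluations without improvement).

    def train_stage_one(batches, denoising_step, back_translation_step, local_gan_step,
                        evaluate, eval_every=1000, patience=10):
        # Per iteration: one denoising batch for the two languages, one back-translation
        # batch for the two directions, and one local-GAN batch; stop once `patience`
        # dev evaluations pass without improvement.
        best_score, bad_evals = float("-inf"), 0
        for step, (src_batch, tgt_batch) in enumerate(batches, start=1):
            denoising_step(src_batch, tgt_batch)
            back_translation_step(src_batch, tgt_batch)
            local_gan_step(src_batch, tgt_batch)
            if step % eval_every == 0:
                score = evaluate()
                if score > best_score:
                    best_score, bad_evals = score, 0
                else:
                    bad_evals += 1
                    if bad_evals >= patience:
                        break
        return best_score

The second stage, fine-tuning with the global GANs and policy-gradient updates, would wrap a loop of this kind with an additional discriminator step and is omitted from the sketch.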
, "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and a Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pre-trained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013).", "We then apply the public implementation (https://github.com/artetxem/vecmap) of the method proposed by (Artetxe et al., 2017a) to map these embeddings to a shared-latent space.", "(Footnote: the LDC corpora used are LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2004T08, LDC2004E12 and LDC2005T10.)", "4.2 Model Hyper-parameters and Evaluation.", "Following the base model in (Vaswani et al., 2017), we set the dimension of the word embedding as 512, the dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following previous work, we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over the two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-bleu.pl.", "4.3 Baseline Systems.", "Word-by-word translation (WBW): The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al. (2017): The second baseline is a previous work that uses the same training and testing sets as this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short-term memory network (LSTM) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and decoder for the source and target languages.", "(Table 2 residue; column headers: en-de, de-en, en-fr, fr-en, zh-en; caption fragment: ... are copied directly from their paper; we do not present the results of (Artetxe et al., 2017b) since we use different training sets.)", "Supervised training: We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "4.4 Results and Analysis.", "4.4.1 Number of weight-sharing layers.", "We firstly investigate how the number of weight-sharing layers affects the translation performance.", "In this experiment, we vary the number of weight-sharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of Lample et al. (2017), our model also achieves up to +1.92 BLEU points improvement on the English-to-French translation task.", "We believe that unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised models is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results.", "Ablation study.", "To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embedding-reinforced encoders, etc.", "Results are reported in table 3.", "We do not test the importance of the auto-encoding, back-translation and the pre-trained embeddings because they have been widely tested in (Artetxe et al., 2017b).", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the shared-latent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional self-attention, we get up to a -0.3 BLEU point decline.", "This indicates that it deserves more effort to investigate the temporal order information in the self-attention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve an improvement of up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain an improvement of up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the cross-domain loss defined by the GANs.", "5 Conclusion and Future work.", "The models proposed recently for unsupervised NMT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristics of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also introduce the embedding-reinforced encoders, the local GAN and the global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement in the final translation performance.", "Unsupervised NMT opens exciting opportunities for future research.", "However, there is still a large room for improvement compared to supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-5
Experimental results
The effects of the weight-sharing layer number Sharing one layer achieves the best translation performance. The BLEU results of the proposed model: Baseline 1: the word-by-word translation according to the similarity of the word embedding Baseline 2: unsupervised NMT with monolingual corpora only proposed by Facebook. Upper Bound: the supervised translation on the same model. We perform an ablation study by training multiple versions of our model with some missing components: the local GAN, global GAN, the directional self-attention, the weight-sharing and the embedding-reinforced encoder. We do not test the importance of the auto-encoding, back-translation and the pre-trained embeddings since they have been widely tested in previous works.
The effects of the weight-sharing layer number Sharing one layer achieves the best translation performance. The BLEU results of the proposed model: Baseline 1: the word-by-word translation according to the similarity of the word embedding Baseline 2: unsupervised NMT with monolingual corpora only proposed by Facebook. Upper Bound: the supervised translation on the same model. We perform an ablation study by training multiple versions of our model with some missing components: the local GAN, global GAN, the directional self-attention, the weight-sharing and the embedding-reinforced encoder. We do not test the importance of the auto-encoding, back-translation and the pre-trained embeddings since they have been widely tested in previous works.
[]
GEM-SciDuet-train-108#paper-1285#slide-6
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal or-der information for the proposed model." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-6
Semi supervised NMT with 02M parallel data
Continue training the model after unsupervised training on the From scratch, training the model on monolingual data for one epoch, and then on parallel data for one epoch, and another one on monolingual data, on and on. Only with parallel data Continuing Training on supervised data Jointly training on monolingual and parallel data
Continue training the model after unsupervised training on the From scratch, training the model on monolingual data for one epoch, and then on parallel data for one epoch, and another one on monolingual data, on and on. Only with parallel data Continuing Training on supervised data Jointly training on monolingual and parallel data
[]
GEM-SciDuet-train-108#paper-1285#slide-7
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
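A minimal NumPy sketch of two components described in the paper content above — the forward/backward positional masks of the directional self-attention (Eqs. 1–2) and the gated, embedding-reinforced encoder output (Eqs. 3–4) — is given below. This is not the authors' TensorFlow implementation; the function names, the (n, d) tensor shapes, and the right-multiplication convention for W1 and W2 are illustrative assumptions.

```python
import numpy as np

def directional_masks(n):
    """Positional masks of Eqs. (1)-(2): M_f[i, j] = 0 if i < j else -inf,
    M_b[i, j] = 0 if i > j else -inf.  They are added to the attention logits
    so that each self-attention layer only connects positions in one temporal
    direction."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m_f = np.where(i < j, 0.0, -np.inf)
    m_b = np.where(i > j, 0.0, -np.inf)
    return m_f, m_b

def embedding_reinforced_output(H, E, W1, W2, b):
    """Gated fusion of Eqs. (3)-(4): g = sigmoid(E W1 + H W2 + b) and
    H_r = g * H + (1 - g) * E, applied position-wise.
    H: output of the encoder stack, E: fixed cross-lingual embeddings, both (n, d)."""
    g = 1.0 / (1.0 + np.exp(-(E @ W1 + H @ W2 + b)))
    return g * H + (1.0 - g) * E

# Toy usage with random tensors (sentence length n = 5, model dimension d = 4).
rng = np.random.default_rng(0)
n, d = 5, 4
H, E = rng.normal(size=(n, d)), rng.normal(size=(n, d))
W1, W2, b = rng.normal(size=(d, d)), rng.normal(size=(d, d)), np.zeros(d)
m_f, m_b = directional_masks(n)
H_r = embedding_reinforced_output(H, E, W1, W2, b)
```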
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-7
Related works
Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations (ICLR). Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. Phrase-Based & Neural Unsupervised Machine Translation (arxiv) * The newest paper (third one) proposes the shared BPE method for unsupervised NMT, its effectiveness is to be verified (around +10 BLEU points improvement is presented).
Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations (ICLR). Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. Phrase-Based & Neural Unsupervised Machine Translation (arxiv) * The newest paper (third one) proposes the shared BPE method for unsupervised NMT, its effectiveness is to be verified (around +10 BLEU points improvement is presented).
[]
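For the local GAN of the paper content above (Eqs. 6–8), the discriminator and encoder objectives reduce to binary cross-entropy terms with flipped labels. A minimal sketch, assuming the local discriminator D_l outputs one probability per sentence (the probability of the language the sentence was actually encoded from), is:

```python
import numpy as np

def local_gan_losses(p_s_given_src, p_t_given_tgt, eps=1e-9):
    """Cross-entropy objectives of Eqs. (6)-(8).
    p_s_given_src: D_l(Enc_s(x)), probability of label 's' for encoded source sentences.
    p_t_given_tgt: D_l(Enc_t(x)), probability of label 't' for encoded target sentences.
    With a binary discriminator, p(f = t | Enc_s(x)) = 1 - p_s_given_src, and
    analogously for the target side."""
    # Discriminator: predict the true language of each latent representation (Eq. 6).
    loss_d = -np.mean(np.log(p_s_given_src + eps)) - np.mean(np.log(p_t_given_tgt + eps))
    # Encoders: fool the discriminator by maximising the other language's probability (Eqs. 7-8).
    loss_enc_s = -np.mean(np.log(1.0 - p_s_given_src + eps))
    loss_enc_t = -np.mean(np.log(1.0 - p_t_given_tgt + eps))
    return loss_d, loss_enc_s, loss_enc_t
```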
GEM-SciDuet-train-108#paper-1285#slide-8
1285
Unsupervised Neural Machine Translation with Weight Sharing
Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208 ], "paper_content_text": [ "Introduction Neural machine translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; , directly applying a single neural network to transform the source sentence into the target sentence, has now reached impressive performance (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .", "The NMT typically consists of two sub neural networks.", "The encoder network reads and encodes the source sentence into a 1 Feng Wang is the corresponding author of this paper context vector, and the decoder network generates the target sentence iteratively based on the context vector.", "NMT can be studied in supervised and unsupervised learning settings.", "In the supervised setting, bilingual corpora is available for training the NMT model.", "In the unsupervised setting, we only have two independent monolingual corpora with one for each language and there is no bilingual training example to provide alignment information for the two languages.", "Due to lack of alignment information, the unsupervised NMT is considered more challenging.", "However, this task is very promising, since the monolingual corpora is usually easy to be collected.", "Motivated by recent success in unsupervised cross-lingual embeddings (Artetxe et al., 2016; Zhang et al., 2017b; Conneau et al., 2017) , the models proposed for unsupervised NMT often assume that a pair of sentences from two different languages can be mapped to a same latent representation in a shared-latent space Artetxe et al., 2017b) .", "Following this assumption, use a single encoder and a single decoder for both the source and target languages.", "The encoder and decoder, acting as a standard auto-encoder (AE), are trained to reconstruct the inputs.", "And Artetxe et al.", "(2017b) utilize a shared encoder but two independent decoders.", "With some good performance, they share a glaring defect, i.e., only one encoder is shared by the source and target languages.", "Although the shared encoder is vital for mapping sentences from different languages into the shared-latent space, it is weak in keeping the uniqueness and internal characteristics of each language, such as the style, terminology and sentence structure.", "Since each language has its own characteristics, the source and target languages should be encoded and learned independently.", "Therefore, we conjecture that the shared encoder may be a factor limit-ing the potential translation performance.", "In order to address this issue, we extend the encoder-shared model, i.e., the model with one 
shared encoder, by leveraging two independent encoders with each for one language.", "Similarly, two independent decoders are utilized.", "For each language, the encoder and its corresponding decoder perform an AE, where the encoder generates the latent representations from the perturbed input sentences and the decoder reconstructs the sentences from the latent representations.", "To map the latent representations from different languages to a shared-latent space, we propose the weightsharing constraint to the two AEs.", "Specifically, we share the weights of the last few layers of two encoders that are responsible for extracting highlevel representations of input sentences.", "Similarly, we share the weights of the first few layers of two decoders.", "To enforce the shared-latent space, the word embeddings are used as a reinforced encoding component in our encoders.", "For cross-language translation, we utilize the backtranslation following .", "Additionally, two different generative adversarial networks (GAN) , namely the local and global GAN, are proposed to further improve the cross-language translation.", "We utilize the local GAN to constrain the source and target latent representations to have the same distribution, whereby the encoder tries to fool a local discriminator which is simultaneously trained to distinguish the language of a given latent representation.", "We apply the global GAN to finetune the corresponding generator, i.e., the composition of the encoder and decoder of the other language, where a global discriminator is leveraged to guide the training of the generator by assessing how far the generated sentence is from the true data distribution 1 .", "In summary, we mainly make the following contributions: • We propose the weight-sharing constraint to unsupervised NMT, enabling the model to utilize an independent encoder for each language.", "To enforce the shared-latent space, we also propose the embedding-reinforced encoders and two different GANs for our model.", "• We conduct extensive experiments on 1 The code that we utilized to train and evaluate our models can be found at https://github.com/ZhenYangIACAS/unsupervised-NMT English-German, English-French and Chinese-to-English translation tasks.", "Experimental results show that the proposed approach consistently achieves great success.", "• Last but not least, we introduce the directional self-attention to model temporal order information for the proposed model.", "Experimental results reveal that it deserves more efforts for researchers to investigate the temporal order information within self-attention layers of NMT.", "Related Work Several approaches have been proposed to train N-MT models without direct parallel corpora.", "The scenario that has been widely investigated is one where two languages have little parallel data between them but are well connected by one pivot language.", "The most typical approach in this scenario is to independently translate from the source language to the pivot language and from the pivot language to the target language (Saha et al., 2016; Cheng et al., 2017) .", "To improve the translation performance, Johnson et al.", "(2016) propose a multilingual extension of a standard NMT model and they achieve substantial improvement for language pairs without direct parallel training data.", "Recently, motivated by the success of crosslingual embeddings, researchers begin to show interests in exploring the more ambitious scenario where an NMT model is trained from monolingual corpora 
only.", "and Artetxe et al.", "(2017b) simultaneously propose an approach for this scenario, which is based on pre-trained cross lingual embeddings.", "utilizes a single encoder and a single decoder for both languages.", "The entire system is trained to reconstruct its perturbed input.", "For cross-lingual translation, they incorporate back-translation into the training procedure.", "Different from , Artetxe et al.", "(2017b) use two independent decoders with each for one language.", "The two works mentioned above both use a single shared encoder to guarantee the shared latent space.", "However, a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.", "Our work also belongs to this more ambitious scenario, and to the best of our knowledge, we are one among the first endeavors to investigate how to train an NMT model with monolingual corpora only.", "is the translation in reversed direction.", "D l is utilized to assess whether the hidden representation of the encoder is from the source or target language.", "D g1 and D g2 are used to evaluate whether the translated sentences are realistic for each language respectively.", "Z represents the shared-latent space.", "3 The Approach Model Architecture The model architecture, as illustrated in figure 1 , is based on the AE and GAN.", "It consists of seven sub networks: including two encoders Enc s and Enc t , two decoders Dec s and Dec t , the local discriminator D l , and the global discriminators D g1 and D g2 .", "For the encoder and decoder, we follow the newly emerged Transformer (Vaswani et al., 2017) .", "Specifically, the encoder is composed of a stack of four identical layers 2 .", "Each layer consists of a multi-head self-attention and a simple position-wise fully connected feed-forward network.", "The decoder is also composed of four identical layers.", "In addition to the two sub-layers in each encoder layer, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack.", "For more details about the multi-head self-attention layer, we refer the reader to (Vaswani et al., 2017) .", "We implement the local discriminator as a multi-layer perceptron and implement the global discriminator based on the convolutional neural network (CNN).", "Several ways exist to interpret the roles of the sub networks are summarised in table 1.", "The proposed system has several striking components , which are critical either for the system to be trained in an unsu-2 The layer number is selected according to our preliminary experiment, which is presented in appendix ??.", "pervised manner or for improving the translation performance.", "Networks Roles Table 1 : Interpretation of the roles for the subnetworks in the proposed system.", "{Enc s , Dec s } AE for source language {Enc t , Dec t } AE for target language {Enc s , Dec t } translation source → target {Enc t , Dec s } translation target → source {Enc s , D l } 1st local GAN (GAN l1 ) {Enc t , D l } 2nd local GAN (GAN l2 ) {Enc t , Dec s , D g1 } 1st global GAN (GAN g1 ) {Enc s , Dec t , D g2 } 2nd global GAN (GAN g2 ) Directional self-attention Compared to recurrent neural network, a disadvantage of the simple self-attention mechanism is that the temporal order information is lost.", "Although the Transformer applies the positional encoding to the sequence before processed by the self-attention, how to model temporal order information within an attention is still an open question.", "Following (Shen et al., 
2017) , we build the encoders in our model on the directional self-attention which utilizes the positional masks to encode temporal order information into attention output.", "More concretely, two positional masks, namely the forward mask M f and backward mask M b , are calculated as: M f ij = 0 i < j −∞ otherwise (1) M b ij = 0 i > j −∞ otherwise (2) With the forward mask M f , the later token only makes attention connections to the early tokens in the sequence, and vice versa with the backward mask.", "Similar to (Zhou et al., 2016; , we utilize a self-attention network to process the input sequence in forward direction.", "The output of this layer is taken by an upper self-attention network as input, processed in the reverse direction.", "Weight sharing Based on the shared-latent space assumption, we apply the weight sharing constraint to relate the two AEs.", "Specifically, we share the weights of the last few layers of the Enc s and Enc t , which are responsible for extracting high-level representations of the input sentences.", "Similarly, we also share the first few layers of the Dec s and Dec t , which are expected to decode high-level representations that are vital for reconstructing the input sentences.", "Compared to (Cheng et al., 2016; Saha et al., 2016) which use the fully shared encoder, we only share partial weights for the encoders and decoders.", "In the proposed model, the independent weights of the two encoders are expected to learn and encode the hidden features about the internal characteristics of each language, such as the terminology, style, and sentence structure.", "The shared weights are utilized to map the hidden features extracted by the independent weights to the shared-latent space.", "Embedding reinforced encoder We use pretrained cross-lingual embeddings in the encoders that are kept fixed during training.", "And the fixed embeddings are used as a reinforced encoding component in our encoder.", "Formally, given the input sequence embedding vectors E = {e 1 , .", ".", ".", ", e t } and the initial output sequence of the encoder stack H = {h 1 , .", ".", ".", ", h t }, we compute H r as: H r = g H + (1 − g) E (3) where H r is the final output sequence of the encoder which will be attended by the decoder (In Transformer, H is the final output of the encoder), g is a gate unit and computed as: g = σ(W 1 E + W 2 H + b) (4) where W 1 , W 2 and b are trainable parameters and they are shared by the two encoders.", "The motivation behind is twofold.", "Firstly, taking the fixed cross-lingual embedding as the other encoding component is helpful to reinforce the sharedlatent space.", "Additionally, from the point of multichannel encoders (Xiong et al., 2017) , providing encoding components with different levels of composition enables the decoder to take pieces of source sentence at varying composition levels suiting its own linguistic structure.", "Unsupervised Training Based on the architecture proposed above, we train the NMT model with the monolingual corpora only using the following four strategies: Denoising auto-encoding Firstly, we train the two AEs to reconstruct their inputs respectively.", "In this form, each encoder should learn to compose the embeddings of its corresponding language and each decoder is expected to learn to decompose this representation into its corresponding language.", "Nevertheless, without any constraint, the AE quickly learns to merely copy every word one by one, without capturing any internal structure of the language involved.", "To 
address this problem, we utilize the same strategy of denoising AE (Vincent et al., 2008) and add some noise to the input sentences (Hill et al., 2016; Artetxe et al., 2017b) .", "To this end, we shuffle the input sentences randomly.", "Specifically, we apply a random permutation ε to the input sentence, verifying the condition: |ε(i) − i| ≤ min(k([ steps s ] + 1), n), ∀i ∈ {1, n} (5) where n is the length of the input sentence, steps is the global steps the model has been updated, k and s are the tunable parameters which can be set by users beforehand.", "This way, the system needs to learn some useful structure of the involved languages to be able to recover the correct word order.", "In practice, we set k = 2 and s = 100000.", "Back-translation In spite of denoising autoencoding, the training procedure still involves a single language at each time, without considering our final goal of mapping an input sentence from the source/target language to the target/source language.", "For the cross language training, we utilize the back-translation approach for our unsupervised training procedure.", "Back-translation has shown its great effectiveness on improving NMT model with monolingual data and has been widely investigated by (Sennrich et al., 2015a; Zhang and Zong, 2016) .", "In our approach, given an input sentence in a given language, we apply the corresponding encoder and the decoder of the other language to translate it to the other language 3 .", "By combining the translation with its original sentence, we get a pseudo-parallel corpus which is utilized to train the model to reconstruct the original sentence from its translation.", "Local GAN Although the weight sharing constraint is vital for the shared-latent space assumption, it alone does not guarantee that the corresponding sentences in two languages will have the same or similar latent code.", "To further enforce the shared-latent space, we train a discriminative neural network, referred to as the local discriminator, to classify between the encoding of source sentences and the encoding of target sentences.", "The local discriminator, implemented as a multilayer perceptron with two hidden layers of size 256, takes the output of the encoder, i.e., H r calculated as equation 3, as input, and produces a binary prediction about the language of the input sentence.", "The local discriminator is trained to predict the language by minimizing the following crossentropy loss: L D l (θ D l ) = − E x∈xs [log p(f = s|Enc s (x))] − E x∈xt [log p(f = t|Enc t (x))] (6) where θ D l represents the parameters of the local discriminator and f ∈ {s, t}.", "The encoders are trained to fool the local discriminator: L Encs (θ Encs ) = − E x∈xs [log p(f = t|Enc s (x))] (7) L Enct (θ Enct ) = − E x∈xt [log p(f = s|Enc t (x))] (8) where θ Encs and θ Enct are the parameters of the two encoders.", "Global GAN We apply the global GANs to fine tune the whole model so that the model is able to generate sentences undistinguishable from the true data, i.e., sentences in the training corpus.", "Different from the local GANs which updates the parameters of the encoders locally, the global GANs are utilized to update the whole parameters of the proposed model, including the parameters of encoders and decoders.", "The proposed model has two global GANs: GAN g1 and GAN g2 .", "In GAN g1 , the Enc t and Dec s act as the generator, which generates the sentencex t 4 from x t .", "The D g1 , implemented based on CNN, assesses whether the generated sentencex t is the true 
target-language sentence or the generated sentence.", "The global discriminator aims to distinguish among the true sentences and generated sentences, and it is trained to minimize its classification error rate.", "During training, the D g1 feeds back its assessment to finetune the encoder Enc t and decoder Dec s .", "Since the machine translation is a sequence generation problem, following , we leverage policy gradient reinforcement training to back-propagate the assessment.", "We apply a similar processing to GAN g2 (The details about the architecture of the global discriminator and the training procedure of the global GANs can be seen in appendix ??", "and ??).", "There are two stages in the proposed unsupervised training.", "In the first stage, we train the proposed model with denoising auto-encoding, backtranslation and the local GANs, until no improvement is achieved on the development set.", "Specifically, we perform one batch of denoising autoencoding for the source and target languages, one batch of back-translation for the two languages, and another batch of local GAN for the two languages.", "In the second stage, we fine tune the proposed model with the global GANs.", "Experiments and Results We evaluate the proposed approach on English-German, English-French and Chinese-to-English translation tasks 5 .", "We firstly describe the datasets, pre-processing and model hyper-parameters we used, then we introduce the baseline systems, and finally we present our experimental results.", "Data Sets and Preprocessing In English-German and English-French translation, we make our experiments comparable with previous work by using the datasets from the 4 Thext isx Enc t −Decs t in figure 1.", "We omit the superscript for simplicity.", "5 The reason that we do not conduct experiments on English-to-Chinese translation is that we do not get public test sets for English-to-Chinese.", "WMT 2014 and WMT 2016 shared tasks respectively.", "For Chinese-to-English translation, we use the datasets from LDC, which has been widely utilized by previous works (Tu et al., 2017; Zhang et al., 2017a) .", "WMT14 English-French Similar to , we use the full training set of 36M sentence pairs and we lower-case them and remove sentences longer than 50 words, resulting in a parallel corpus of about 30M pairs of sentences.", "To guarantee no exact correspondence between the source and target monolingual sets, we build monolingual corpora by selecting English sentences from 15M random pairs, and selecting the French sentences from the complementary set.", "Sentences are encoded with byte-pair encoding (Sennrich et al., 2015b) , which has an English vocabulary of about 32000 tokens, and French vocabulary of about 33000 tokens.", "We report results on newstest2014.", "WMT16 English-German We follow the same procedure mentioned above to create monolingual training corpora for English-German translation, and we get two monolingual training data of 1.8M sentences each.", "The two languages share a vocabulary of about 32000 tokens.", "We report results on newstest2016.", "LDC Chinese-English For Chinese-to-English translation, our training data consists of 1.6M sentence pairs randomly extracted from LDC corpora 6 .", "Since the data set is not big enough, we just build the monolingual data set by randomly shuffling the Chinese and English sentences respectively.", "In spite of the fact that some correspondence between examples in these two monolingual sets may exist, we never utilize this alignment information in our 
training procedure (see Section 3.2).", "Both the Chinese and English sentences are encoded with byte-pair encoding.", "We get an English vocabulary of about 34000 tokens, and Chinese vocabulary of about 38000 tokens.", "The results are reported on NIST 02.", "Since the proposed system relies on the pretrained cross-lingual embeddings, we utilize the monolingual corpora described above to train the embeddings for each language independently by using word2vec (Mikolov et al., 2013) .", "We then apply the public implementation 7 of the method proposed by (Artetxe et al., 2017a) to map these 6 LDC2002L27, LDC2002T01, LDC2002E18, LD-C2003E07, LDC2004T08, LDC2004E12, LDC2005T10 7 https://github.com/artetxem/vecmap embeddings to a shared-latent space 8 .", "Model Hyper-parameters and Evaluation Following the base model in (Vaswani et al., 2017) , we set the dimension of word embedding as 512, dropout rate as 0.1 and the head number as 8.", "We use beam search with a beam size of 4 and length penalty α = 0.6.", "The model is implemented in TensorFlow (Abadi et al., 2015) and trained on up to four K80 GPUs synchronously in a multi-GPU setup on a single machine.", "For model selection, we stop training when the model achieves no improvement for the tenth evaluation on the development set, which is comprised of 3000 source and target sentences extracted randomly from the monolingual training corpora.", "Following , we translate the source sentences to the target language, and then translate the resulting sentences back to the source language.", "The quality of the model is then evaluated by computing the BLEU score over the original inputs and their reconstructions via this two-step translation process.", "The performance is finally averaged over two directions, i.e., from source to target and from target to source.", "BLEU (Papineni et al., 2002) is utilized as the evaluation metric.", "For Chinese-to-English, we apply the script mteval-v11b.pl to evaluate the translation performance.", "For English-German and English-French, we evaluate the translation performance with the script multi-belu.pl 9 .", "Baseline Systems Word-by-word translation (WBW) The first baseline we consider is a system that performs word-by-word translations using the inferred bilingual dictionary.", "Specifically, it translates a sentence word-by-word, replacing each word with its nearest neighbor in the other language.", "Lample et al.", "(2017) The second baseline is a previous work that uses the same training and testing sets with this paper.", "Their model belongs to the standard attention-based encoder-decoder framework, which implements the encoder using a bidirectional long short term memory network (LST-M) and implements the decoder using a simple forward LSTM.", "They apply one single encoder and en-de de-en en-fr fr-en zh-en are copied directly from their paper.", "We do not present the results of (Artetxe et al., 2017b) since we use different training sets.", "decoder for the source and target languages.", "Supervised training We finally consider exactly the same model as ours, but trained using the standard cross-entropy loss on the original parallel sentences.", "This model can be viewed as an upper bound for the proposed unsupervised model.", "Results and Analysis Number of weight-sharing layers We firstly investigate how the number of weightsharing layers affects the translation performance.", "In this experiment, we vary the number of weightsharing layers in the AEs from 0 to 4.", "Sharing one layer in AEs 
means sharing one layer for the encoders and in the meanwhile, sharing one layer for the decoders.", "The BLEU scores of English-to-German, English-to-French and Chinese-to-English translation tasks are reported in figure 2.", "Each curve corresponds to a different translation task and the x-axis denotes the number of weight-sharing layers for the AEs.", "We find that the number of weight-sharing layers shows much effect on the translation performance.", "And the best translation performance is achieved when only one layer is shared in our system.", "When all of the four layers are shared, i.e., only one shared encoder is utilized, we get poor translation performance in all of the three translation tasks.", "This verifies our conjecture that the shared encoder is detrimental to the performance of unsupervised NMT especially for the translation tasks on distant language pairs.", "More concretely, for the related language pair translation, i.e., English-to-French, the encoder-shared model achieves -0.53 BLEU points decline than the best model where only one layer is shared.", "For the more distant language pair English-to-German, the encoder-shared model achieves more significant decline, i.e., -0.85 BLEU points decline.", "And for the most distant language pair Chinese-to-English, the decline is as large as -1.66 BLEU points.", "We explain this as that the more distant the language pair is, the more different characteristics they have.", "And the shared encoder is weak in keeping the unique characteristic of each language.", "Additionally, we also notice that using two completely independent encoders, i.e., setting the number of weight-sharing layers as 0, results in poor translation performance too.", "This confirms our intuition that the shared layers are vital to map the source and target latent representations to a shared-latent space.", "In the rest of our experiments, we set the number of weightsharing layer as 1. 
tively learns to use the context information and the internal structure of each language.", "Compared to the work of , our model also achieves up to +1.92 BLEU points improvement on English-to-French translation task.", "We believe that the unsupervised NMT is very promising.", "However, there is still a large room for improvement compared to the supervised upper bound.", "The gap between the supervised and unsupervised model is as large as 12.3-25.5 BLEU points depending on the language pair and translation direction.", "Translation results Ablation study To understand the importance of different components of the proposed system, we perform an ablation study by training multiple versions of our model with some missing components: the local GANs, the global GANs, the directional self-attention, the weight-sharing, the embeddingreinforced encoders, etc.", "Results are reported in table 3.", "We do not test the the importance of the auto-encoding, back-translation and the pretrained embeddings because they have been widely tested in Artetxe et al., 2017b) .", "Table 3 shows that the best performance is obtained with the simultaneous use of all the tested elements.", "The most critical component is the weight-sharing constraint, which is vital to map sentences of different languages to the sharedlatent space.", "The embedding-reinforced encoder also brings some improvement on all of the translation tasks.", "When we remove the directional selfattention, we get up to -0.3 BLEU points decline.", "This indicates that it deserves more efforts to investigate the temporal order information in selfattention mechanism.", "The GANs also significantly improve the translation performance of our system.", "Specifically, the global GANs achieve improvement up to +0.78 BLEU points on English-to-French translation and the local GANs also obtain improvement up to +0.57 BLEU points on English-to-French translation.", "This reveals that the proposed model benefits a lot from the crossdomain loss defined by GANs.", "Conclusion and Future work The models proposed recently for unsupervised N-MT use a single encoder to map sentences from different languages to a shared-latent space.", "We conjecture that the shared encoder is problematic for keeping the unique and inherent characteristic of each language.", "In this paper, we propose the weight-sharing constraint in unsupervised NMT to address this issue.", "To enhance the cross-language translation performance, we also propose the embedding-reinforced encoders, local GAN and global GAN into the proposed system.", "Additionally, the directional self-attention is introduced to model the temporal order information for our system.", "We test the proposed model on English-German, English-French and Chinese-to-English translation tasks.", "The experimental results reveal that our approach achieves significant improvement and verify our conjecture that the shared encoder is really a bottleneck for improving the unsupervised NMT.", "The ablation study shows that each component of our system achieves some improvement for the final translation performance.", "Unsupervised NMT opens exciting opportunities for the future research.", "However, there is still a large room for improvement compared to the supervised NMT.", "In the future, we would like to investigate how to utilize the monolingual data more effectively, such as incorporating the language model and syntactic information into unsupervised NMT.", "Besides, we decide to make more efforts to explore how to reinforce 
the temporal order information for the proposed model." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "4.4.1", "4.4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Model Architecture", "Unsupervised Training", "Experiments and Results", "Data Sets and Preprocessing", "Model Hyper-parameters and Evaluation", "Baseline Systems", "Number of weight-sharing layers", "Ablation study", "Conclusion and Future work" ] }
GEM-SciDuet-train-108#paper-1285#slide-8
Future work
Continuing to test the unsupervised NMT and seeking its optimal configurations. Testing the performance of semi-supervised NMT with a small amount of bilingual data. Investigating more effective approaches for utilizing the monolingual data in the framework of unsupervised NMT.
Continuing to test the unsupervised NMT and seeking its optimal configurations. Testing the performance of semi-supervised NMT with a small amount of bilingual data. Investigating more effective approaches for utilizing the monolingual data in the framework of unsupervised NMT.
[]
GEM-SciDuet-train-109#paper-1286#slide-0
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequence-to-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "Figure 2: An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "The conditional probability P(Y|X) used in our model is decomposed as follows: $P(Y|X) = \prod_{t=1}^{|Y|} P(y_t \mid y_{<t}, X)$ (1), where $y_{<t} = y_1, ..., y_{t-1}$.", "To achieve the above goal, we need: 1) an action set which can encode the semantic graph generation process; 2) an encoder which encodes the natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In the following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of action denotes adding a variable node to the semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but it can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of action denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in the knowledge base.", "Add Type Node: This kind of action denotes adding a type node (e.g., state, city).", "We represent it as add type node:state.", "Add Edge: This kind of action denotes adding an edge between two nodes.", "An edge is a binary relation in the knowledge base.", "This kind of action is represented as add edge:next to.", "Operation Action: This kind of action denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, etc.", "Because each operation has a scope, we define two actions for an operation: one is the operation start action, represented as start operation:most, and the other is the operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some of the above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after their main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For the add edge action, we use two argument actions, arg1 node and arg2 node, represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and allows a tight coupling with the knowledge base (an illustrative sketch of this action encoding is given after this content block).", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3: Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping a sentence to an action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016), this paper employs the attention-based sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of context-sensitive vectors b 1 , ..., b m using a bidirectional RNN.", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m is generated by recurrently applying the recurrence $h_i = \mathrm{LSTM}(\phi^{(x)}(x_i), h_{i-1})$ (2).", "The recurrence takes the form of an LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as $b_i = [h^F_i, h^B_i]$.", "Decoder: This paper uses the classical attention-based decoder, which generates the action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: $s_1 = \tanh(W^{(s)}[h^F_m, h^B_1])$ (3); $e_{ji} = s_j^{T} W^{(a)} b_i$ (4); $a_{ji} = \exp(e_{ji}) / \sum_{i'=1}^{m} \exp(e_{ji'})$ (5); $c_j = \sum_{i=1}^{m} a_{ji} b_i$ (6); $P(y_j = w \mid x, y_{1:j-1}) \propto \exp(U_w [s_j, c_j])$ (7); $s_{j+1} = \mathrm{LSTM}([\phi^{(y)}(y_j), c_j], s_j)$ (8), where the normalized attention scores $a_{ji}$ define the probability distribution over input words, indicating the attention probability on input word i at time j, and $e_{ji}$ is the un-normalized attention score (a minimal numerical sketch of one such decoding step is given after this content block).", "To incorporate constraints during decoding, an extra controller component is added; its details are described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantics (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For instance, $\phi^{(y)}(\text{add edge:next to}) = [\phi^{(y)}_{struct}(\text{add edge}), \phi^{(y)}_{sem}(\text{next to})]$.", "3 Constrained Semantic Parsing using Sequence-to-Action Model In this section, we describe how to build a neural semantic parser using the sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include the RNN parameters $W^{(s)}$, $W^{(a)}$, $U_w$, the word embeddings $\phi^{(x)}$, and the action embeddings $\phi^{(y)}$.", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is $\sum_{i=1}^{n} \log P(Y_i|X_i)$ (9).", "The standard stochastic gradient descent algorithm is employed to update the parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using the semantic graph as an intermediate representation (see Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from the root, and then generate the action sequence in the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely, we can convert an action sequence to a logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same holds for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015).", "In Dong and Lapata (2016), entities are replaced with their types and unique IDs.", "In Jia and Liang (2016), entities are generated via an attention-based copying mechanism helped by a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict the action sequence by $Y^* = \arg\max_{Y} P(Y|X)$ (10), where Y represents an action sequence and P(Y|X) is computed using Formula (1).", "Beam search is used for decoding the best action sequence.", "The semantic graph and the logical form can be derived from Y* as described above.", "Incorporating Constraints in Decoding For decoding, we generate actions sequentially.", "It is obvious that the next action has a strong correlation with the partial semantic graph generated so far, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs the partial semantic graph using the actions generated so far; 2) the controller checks whether a newly generated action meets all structure/semantic constraints given the partial semantic graph.", "Figure 5: A demonstration of illegal action filtering using constraints.", "The graph in color is the semantic graph constructed so far.", "Structure Constraints.", "The structure constraints ensure that the action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (the third candidate next action in Figure 5 violates this constraint).", "This kind of constraint is domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure that the constructed graph follows the schema of the knowledge base.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints, where the argument types of a relation should follow the knowledge base schema.", "For example, in the GEO dataset, the relation next to's arg1 and arg2 should both be of type state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent; a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules (a minimal rule-based controller sketch is given after this content block).", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005), we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007), we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al. (2015b).", "Experimental Settings Following the experimental setup of Jia and Liang (2016): we use 200 hidden units and 100-dimensional word vectors for sentence encoding.", "The dimensions of the action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with a universal word vector.", "The beam size is set to 5.", "Our model is implemented in Theano (Bergstra et al., 2010), and the code and settings are released on GitHub: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on the different datasets are obtained in the same way as in Jia and Liang (2016).", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems use the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-to-action model without constraints - Seq2Act; the second one adds structure constraints in decoding - Seq2Act (+C1); the third one is the full model which adds both structure and semantic constraints - Seq2Act (+C1+C2).", "Table 1 (fragment; GEO / ATIS test accuracies): Previous work - Zettlemoyer and Collins (2005) [values missing]; Kwiatkowski et al. (2010) 88.9 / -; Kwiatkowski et al. (2011) 88.6 / 82.8; Liang et al. (2011)* (+lexicon) 91.1 / -; Poon (2013) - / 83.5; Zhao et al. (2015) 88.9 / 84.2; Rabinovich et al. (2017) 87.1 / 85.9; Seq2Seq models - Jia and Liang (2016) 85.0 / 76.3; Jia and Liang (2016) [remaining values missing].", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we require that C1 be met before C2 can be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Tables 1-2.", "From the overall results, we can see that: 1) By synthesizing the advantages of the semantic graph representation and the prediction ability of the Seq2Seq model, our method achieves state-of-the-art performance on the OVERNIGHT dataset, and gets competitive performance on the GEO and ATIS datasets.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 under the same settings, falling behind only Liang et al. (2011)*, which uses extra handcrafted lexicons, and Jia and Liang (2016)*, which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, falling behind only Rabinovich et al. (2017), which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets a state-of-the-art accuracy of 79.0, which even outperforms Jia and Liang (2016)* with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "This holds on all three datasets; on OVERNIGHT, for example, the Seq2Act model gets a test accuracy of 78.0, better than the 77.5 obtained by the best Seq2Seq baseline.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of the graph built from the generated action sequence.", "On all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a portion of the illegal actions is filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantically illegal actions using selectional preferences and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms: Replacing (Dong and Lapata, 2016), which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016).", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3.", "We can see that the Replacing mechanism outperforms Copying on all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs an additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of the linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: the action sequence is shorter on all three datasets, with 35.5%, 9.2% and 28.5% reductions in length, respectively.", "The main advantage of a shorter/compact encoding is that it reduces the influence of the long-distance dependency problem.", "Error Analysis We perform error analysis on the results and find that there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen and informal structure, where the entity word \"Iowa\" and the relation word \"borders\" appear ahead of the question words \"how many\".", "Table 5: Some examples for error analysis; each example includes the sentence for parsing, with the gold parse and the predicted parse from our model.", "Table 5 (fragment) - Unseen/informal structure example: Gold parse: answer(A, count(B, (const(C, stateid(iowa)), next to(C, B), state(B)), A)); Predicted parse: answer(A, count(B, state(B), A)).", "Under-mapping example - Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am; Gold parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))); Predicted parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))).", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
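To make the action encoding and the graph-to-action-sequence conversion described in the paper content above more concrete, here is a minimal Python sketch (not the authors' released Theano code). It walks a toy semantic graph for "Which states border Texas" depth-first from the return variable and emits actions in the spirit of Figure 2; the dictionary-based graph format, the node names, and the exact action spellings are assumptions made for illustration only.

```python
# Minimal sketch: emit a Sequence-to-Action style action sequence by a
# depth-first walk over a toy semantic graph (assumed data format).

# Toy graph for "Which states border Texas":
# a variable node A of type state, an entity node texas, and the edge next_to(A, texas).
graph = {
    "nodes": {
        "A": {"kind": "variable", "type": "state"},
        "texas": {"kind": "entity"},
    },
    "edges": [("next_to", "A", "texas")],  # (relation, arg1, arg2)
}

def graph_to_actions(graph, root="A"):
    """Visit nodes depth-first from the root and emit node, type and edge actions."""
    actions, visited = [], set()

    def visit(node_id):
        if node_id in visited:
            return
        visited.add(node_id)
        node = graph["nodes"][node_id]
        # node action
        if node["kind"] == "variable":
            actions.append(f"add_variable:{node_id}")
        else:
            actions.append(f"add_entity_node:{node_id}")
        # type action plus its argument action, directly after the main action
        if "type" in node:
            actions.append(f"add_type_node:{node['type']}")
            actions.append(f"arg:{node_id}")
        # edge actions: visit the other endpoint first, then add the edge
        for relation, arg1, arg2 in graph["edges"]:
            if arg1 == node_id:
                visit(arg2)
                actions.append(f"add_edge:{relation}")
                actions.append(f"arg1_node:{arg1}")
                actions.append(f"arg2_node:{arg2}")

    visit(root)
    return actions

if __name__ == "__main__":
    print(graph_to_actions(graph))
    # ['add_variable:A', 'add_type_node:state', 'arg:A', 'add_entity_node:texas',
    #  'add_edge:next_to', 'arg1_node:A', 'arg2_node:texas']
```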
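The decoder equations (3)-(8) can also be illustrated numerically. Below is a self-contained numpy sketch of a single attention-based decoding step: un-normalized scores, normalized attention weights, the context vector, and a distribution over candidate actions. The toy dimensions and random parameters are assumptions, and the LSTM state update of equation (8) is omitted for brevity.

```python
import numpy as np

# Assumed toy sizes: m input positions, encoder/decoder dimensions, action vocabulary.
rng = np.random.default_rng(0)
m, enc_dim, dec_dim, n_actions = 6, 8, 10, 5

B = rng.normal(size=(m, enc_dim))        # b_1..b_m: context-sensitive encodings
s_j = rng.normal(size=(dec_dim,))        # current decoder hidden state s_j
W_a = rng.normal(size=(dec_dim, enc_dim))
U = rng.normal(size=(n_actions, dec_dim + enc_dim))

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# e_ji = s_j^T W^(a) b_i  and  a_ji = softmax_i(e_ji)            (equations 4-5)
scores = B @ (W_a.T @ s_j)
attn = softmax(scores)

# c_j = sum_i a_ji b_i                                            (equation 6)
context = attn @ B

# P(y_j = w | x, y_{1:j-1}) proportional to exp(U_w [s_j, c_j])   (equation 7)
action_probs = softmax(U @ np.concatenate([s_j, context]))

print("attention over input positions:", np.round(attn, 3))
print("distribution over candidate actions:", np.round(action_probs, 3))
```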
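The constraint controller of Section 3.3 can be sketched as a small set of rule checks over the partial graph; during beam search, candidate actions whose checks fail (for example, an add edge whose endpoints coincide or whose argument types conflict with the schema) would simply be pruned from the action distribution before scoring. The class below is an assumed, simplified illustration rather than the authors' implementation; the schema dictionary, type names, and action format are placeholders.

```python
# Minimal sketch of rule-based structure/semantic constraint checking
# over a partial semantic graph (assumed action and schema formats).

SCHEMA = {"next_to": ("state", "state"), "loc": ("city", "state")}  # assumed KB schema

class Controller:
    def __init__(self):
        self.node_types = {}  # node id -> declared type (None if not yet typed)

    def apply(self, action, *args):
        """Update the partial graph for node and type actions emitted so far."""
        if action in ("add_variable", "add_entity_node"):
            self.node_types.setdefault(args[0], None)
        elif action == "add_type_node":        # add_type_node(type_name, node)
            type_name, node = args
            self.node_types[node] = type_name

    def edge_is_legal(self, relation, arg1, arg2):
        # Structure constraints: both endpoints must exist and must differ.
        if arg1 not in self.node_types or arg2 not in self.node_types:
            return False
        if arg1 == arg2:
            return False
        # Semantic constraints: argument types must follow the KB schema
        # (selectional preference), and declared types must not conflict.
        want1, want2 = SCHEMA[relation]
        for node, want in ((arg1, want1), (arg2, want2)):
            declared = self.node_types[node]
            if declared is not None and declared != want:
                return False
        return True

ctrl = Controller()
ctrl.apply("add_variable", "A")
ctrl.apply("add_type_node", "state", "A")
ctrl.apply("add_entity_node", "texas")

print(ctrl.edge_is_legal("next_to", "A", "texas"))  # True
print(ctrl.edge_is_legal("next_to", "A", "A"))      # False: endpoints must differ
print(ctrl.edge_is_legal("loc", "A", "texas"))      # False: arg1 of loc must be a city
```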
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-0
Task Semantic Parsing
Translate natural language sentences to meaning representations, e.g., logical forms. Sentence: Which city was Barack Obama born in? Logical form: () (Barack_Obama)
Translate natural language sentences to meaning representations, e.g., logical forms. Sentence: Which city was Barack Obama born in? Logical form: () (Barack_Obama)
[]
GEM-SciDuet-train-109#paper-1286#slide-1
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
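The Seq2Act (+C1) and (+C1+C2) gains discussed above come from filtering illegal actions against the partial semantic graph during decoding. A minimal Python sketch of such a controller is given below; the action tuple format, the toy schema entry for the GEO relation next_to, and all class and function names are assumptions made for illustration, not the authors' released Theano implementation.

```python
# Minimal sketch of a decoding-time controller that filters illegal actions.
# Structure constraint: an edge must connect two distinct, already-added nodes.
# Semantic constraints: selectional preference (argument types of a relation
# must match the KB schema) and type consistency (a node keeps a single type).

# Toy KB schema (assumption for illustration): relation -> (arg1 type, arg2 type)
SCHEMA = {"next_to": ("state", "state"), "loc": ("city", "state")}

class Controller:
    def __init__(self):
        self.node_types = {}   # node id -> type, once it is known
        self.nodes = set()

    def apply(self, action):
        """Record an accepted action in the partial graph."""
        kind, *args = action
        if kind in ("add_variable", "add_entity"):
            self.nodes.add(args[0])
        elif kind == "add_type":
            type_name, node = args
            self.node_types[node] = type_name

    def is_legal(self, action):
        kind, *args = action
        if kind == "add_edge":
            rel, a1, a2 = args
            # structure: both argument nodes exist and are different
            if a1 not in self.nodes or a2 not in self.nodes or a1 == a2:
                return False
            # semantics: selectional preference taken from the schema
            want1, want2 = SCHEMA.get(rel, (None, None))
            for node, want in ((a1, want1), (a2, want2)):
                have = self.node_types.get(node)
                if want is not None and have is not None and have != want:
                    return False
        if kind == "add_type":
            type_name, node = args
            # semantics: type conflict check
            if self.node_types.get(node, type_name) != type_name:
                return False
        return True

# Usage on the running example "Which states border Texas?"
ctrl = Controller()
for act in [("add_variable", "A"), ("add_type", "state", "A"),
            ("add_entity", "texas"), ("add_edge", "next_to", "A", "texas")]:
    assert ctrl.is_legal(act)
    ctrl.apply(act)
# An edge whose two arguments are the same node is rejected:
assert not ctrl.is_legal(("add_edge", "next_to", "A", "A"))
```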
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-1
Two Lines of Work in Semantic Parsing
Two Lines of Work in Semantic Parsing. Semantic Graph Based: uses semantic graphs to represent sentence meanings, so there is no need for lexicons and grammars; semantic parsing as semantic graph matching or staged semantic query graph generation [Bast and Haussmann, 2015]; hard to model the semantic graph construction process. Sequence-to-Sequence Based: semantic parsing as a sequence-to-sequence problem [Rabinovich et al., 2017]; hard to capture structure information; ignores the relatedness to the KB.
Two Lines of Work in Semantic Parsing. Semantic Graph Based: uses semantic graphs to represent sentence meanings, so there is no need for lexicons and grammars; semantic parsing as semantic graph matching or staged semantic query graph generation [Bast and Haussmann, 2015]; hard to model the semantic graph construction process. Sequence-to-Sequence Based: semantic parsing as a sequence-to-sequence problem [Rabinovich et al., 2017]; hard to capture structure information; ignores the relatedness to the KB.
[]
GEM-SciDuet-train-109#paper-1286#slide-2
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
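To make the abstract concrete, the paper's running example "Which states border Texas?" can be written out as data. The logical form is the one quoted in the paper content; the exact action labels below are an illustrative guess at the action encoding described later, not taken from the released corpus.

```python
# The same meaning in the representations discussed in the paper:
sentence = "Which states border Texas?"

# GEO-style logical form (as given in the paper's introduction)
logical_form = "answer(A, (state(A), next_to(A, stateid(texas))))"

# One plausible action sequence for generating the corresponding semantic
# graph: nodes first, then the typed edge with its two argument actions.
action_sequence = [
    "add_variable:A",
    "add_type:state", "arg:A",
    "add_entity:texas",
    "add_edge:next_to", "arg1_node:A", "arg2_node:texas",
]
print(len(action_sequence), "actions for:", sentence)
```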
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
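Equations (3)-(8) quoted in the paper content above define one step of the attention-based decoder. The NumPy sketch below walks through a single step (attention scores, context vector, output distribution over actions); the shapes, the random parameters, and the omission of the LSTM state update are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, hid, n_actions = 5, 8, 10               # source length, hidden size, action vocab

b = rng.normal(size=(m, hid))              # context-sensitive encoder vectors b_1..b_m
s_j = rng.normal(size=hid)                 # current decoder hidden state s_j
W_a = rng.normal(size=(hid, hid))          # attention parameter W^(a)
U = rng.normal(size=(n_actions, 2 * hid))  # output parameter U_w over [s_j, c_j]

# e_ji = s_j^T W^(a) b_i, then a_ji = softmax_i(e_ji)            (eqs. 4-5)
e = b @ (W_a.T @ s_j)                      # shape (m,)
a = np.exp(e - e.max()); a /= a.sum()

# c_j = sum_i a_ji * b_i                                          (eq. 6)
c_j = a @ b                                # shape (hid,)

# P(y_j = w | x, y_<j) proportional to exp(U_w [s_j, c_j])        (eq. 7)
logits = U @ np.concatenate([s_j, c_j])
p = np.exp(logits - logits.max()); p /= p.sum()

print("attention over source words:", np.round(a, 3))
print("most probable next action id:", int(p.argmax()))
```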
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-2
Seq2Act synthesizes their advantages
Use semantic graphs to represent sentence meanings: tight-coupling with knowledge bases. Leverage the powerful prediction ability of RNN models. Seq2Act: end-to-end semantic graph generation. Which states border Texas?
Use semantic graphs to represent sentence meanings: tight-coupling with knowledge bases. Leverage the powerful prediction ability of RNN models. Seq2Act: end-to-end semantic graph generation. Which states border Texas?
[]
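The training-data conversion described above (logical form to semantic graph, then a depth-first traversal from the root to emit actions) can be pictured with the following sketch; the graph encoding and traversal details are simplified assumptions rather than the authors' released conversion code.

```python
# Sketch: emit an action sequence from a tiny semantic graph by DFS from the
# root variable. Nodes carry a kind (variable/entity) and an optional type;
# edges are (relation, target node) pairs.

graph = {
    "A":     {"kind": "variable", "type": "state",
              "edges": [("next_to", "texas")]},
    "texas": {"kind": "entity", "type": None, "edges": []},
}

def graph_to_actions(graph, root):
    actions, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        info = graph[node]
        actions.append(f"add_{info['kind']}:{node}")
        if info["type"]:
            actions.extend([f"add_type:{info['type']}", f"arg:{node}"])
        for rel, tgt in info["edges"]:
            visit(tgt)                      # make sure the target node exists first
            actions.extend([f"add_edge:{rel}",
                            f"arg1_node:{node}", f"arg2_node:{tgt}"])

    visit(root)
    return actions

print(graph_to_actions(graph, "A"))
# ['add_variable:A', 'add_type:state', 'arg:A', 'add_entity:texas',
#  'add_edge:next_to', 'arg1_node:A', 'arg2_node:texas']
```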
GEM-SciDuet-train-109#paper-1286#slide-3
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
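Inference in this approach picks the highest-probability action sequence with beam search (beam size 5 in the experiments reported later in the paper content). Below is a generic beam-search sketch over action sequences; the score_next stand-in and the stop symbol are assumptions, standing in for the trained decoder plus constraint filtering.

```python
import math

def beam_search(score_next, start, max_len=20, beam_size=5, stop="<eos>"):
    """Generic beam search over action sequences.

    score_next(prefix) must return a list of (action, prob) pairs; here it is
    a stand-in for one step of the trained decoder (plus constraint filtering).
    """
    beams = [([start], 0.0)]                 # (sequence, log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq[-1] == stop:
                finished.append((seq, logp))
                continue
            for action, prob in score_next(seq):
                candidates.append((seq + [action], logp + math.log(prob)))
        if not candidates:
            break
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    finished.extend(beams)
    return max(finished, key=lambda x: x[1])[0]

# Toy decoder stand-in: always prefers to finish the Texas example.
STEPS = {"<s>": [("add_variable:A", 0.9)],
         "add_variable:A": [("add_edge:next_to", 0.8)],
         "add_edge:next_to": [("<eos>", 0.95)]}
print(beam_search(lambda seq: STEPS.get(seq[-1], [("<eos>", 1.0)]), "<s>"))
```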
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005), we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions about a flight database, with each question annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007), we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al. (2015b).", "Experimental Settings Following the experimental setup of Jia and Liang (2016): we use 200 hidden units and 100-dimensional word vectors for sentence encoding.", "The dimensions of the action embeddings are tuned on the validation set of each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace the word vectors of words occurring only once with a universal word vector.", "The beam size is set to 5.", "Our model is implemented in Theano (Bergstra et al., 2010), and the code and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on the different datasets are obtained in the same way as in Jia and Liang (2016).",
"Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems use the same training/test splits, we directly use the best performances reported in their original papers for a fair comparison.", "For our method, we train our model with three settings: the first is the basic sequence-to-action model without constraints - Seq2Act; the second adds structure constraints in decoding - Seq2Act (+C1); the third is the full model, which adds both structure and semantic constraints - Seq2Act (+C1+C2).", "[Table 1, GEO / ATIS test accuracies. Previous work: Zettlemoyer and Collins (2005); Kwiatkowski et al. (2010) 88.9 / -; Kwiatkowski et al. (2011) 88.6 / 82.8; Liang et al. (2011)* (+lexicon) 91.1 / -; Poon (2013) - / 83.5; Zhao et al. (2015) 88.9 / 84.2; Rabinovich et al. (2017) 87.1 / 85.9. Seq2Seq models: Jia and Liang (2016) 85.0 / 76.3.]", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we require that C1 is met before C2 can be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Tables 1-2.", "From the overall results, we can see that: 1) By synthesizing the advantages of semantic graph representation and the prediction ability of Seq2Seq models, our method achieves state-of-the-art performance on the OVERNIGHT dataset, and gets competitive performance on the GEO and ATIS datasets.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 under the same settings, falling behind only Liang et al. (2011)*, which uses extra handcrafted lexicons, and Jia and Liang (2016)*, which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, falling behind only Rabinovich et al. (2017), which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-the-art accuracy of 79.0, which even outperforms Jia and Liang (2016)* with extra augmented training data.", "2) Compared with the linearized logical form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets the Seq2Act model outperforms the Seq2Seq baselines; on OVERNIGHT, for example, Seq2Act gets a test accuracy of 78.0, better than the 77.5 of the best Seq2Seq baseline.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of the graph built from the generated action sequence.", "On all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because part of the illegal actions are filtered out during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints further filter semantically illegal actions using selectional preferences and type consistency.",
"Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms - Replacing (Dong and Lapata, 2016), which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016).", "To compare the two mechanisms, we train and test with our full model, and the results are shown in Table 3.", "We can see that the Replacing mechanism outperforms Copying on all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs an additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of the linearized logical forms used in previous Seq2Seq models and of the action sequences of our model on all three datasets.", "As we can see, the action sequence encoding is more compact than the linearized logical form encoding: action sequences are shorter on all three datasets, with 35.5%, 9.2% and 28.5% reductions in length respectively.", "The main advantage of a shorter, more compact encoding is that it reduces the influence of the long-distance dependency problem.",
"Error Analysis We perform error analysis on the results and find two main types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen and informal structure, where the entity word \"Iowa\" and the relation word \"borders\" appear ahead of the question words \"how many\".", "[Table 5, examples for error analysis; each example includes the sentence for parsing, with the gold parse and the predicted parse from our model. (1) Gold parse: answer(A, count(B, (const(C, stateid(iowa)), next_to(C, B), state(B)), A)); predicted parse: answer(A, count(B, state(B), A)). (2) Under-mapping; sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am; gold parse: (lambda x (and (flight x) (oneway x) (class_type x first:cl) (< (departure_time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))); predicted parse: the same form without (class_type x first:cl).]", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history into consideration, so some words are ignored during parsing.", "For example, in the second case in Table 5, \"first class\" is ignored during the decoding process.", "This problem can be further alleviated using the explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016).",
"Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015).", "Traditional methods are mostly based on the principle of compositional semantics: they first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015), CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013), DCS (Liang et al., 2011; Berant et al., 2013), etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graphs as the representation.", "Semantic parsing is then modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graphs by transforming syntactic trees.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al. (2014, 2016) use a Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency trees.", "Yih et al. (2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually designed, heuristic generation processes, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014, 2016) and structure mismatch (Chen et al., 2016), and are hard to apply to complex sentences (Yih et al., 2015).", "The other direction is to employ neural Seq2Seq models, which model semantic parsing as an end-to-end, sentence-to-logical-form machine translation problem.", "Dong and Lapata (2016), Jia and Liang (2016) and Xiao et al. (2016) transform word sequences into linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al. (2017) use type constraints to filter illegal tokens.", "Liang et al. (2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al. (2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a).", "In semantic parsing, our method is tightly coupled with knowledge bases, and constraints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition-based methods and may be useful in other parsing tasks, e.g., AMR parsing.",
"Conclusions This paper proposes Sequence-to-Action, a method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated during decoding to enhance semantic parsing.", "For future work, to address the lack of training data, we want to design a weakly supervised learning algorithm that uses denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assistance, as in Yih et al. (2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graphs are more natural and can be easily annotated without the need for expert knowledge." ] }
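The experimental settings in the paper text above (200 hidden units, 100-dimensional word vectors, uniform initialization in [-0.1, 0.1], 30 epochs of SGD starting at learning rate 0.1 and halved every 5 epochs after epoch 15, beam size 5) amount to a very small training schedule. The sketch below illustrates that schedule in Python; the released implementation is in Theano at the repository cited above, and the config names and the exact epoch boundaries at which each halving takes effect are assumptions made here for illustration, not taken from that code.

```python
# Minimal sketch of the training schedule described in the experimental settings.
# All names are illustrative; the exact epoch at which each halving applies is an
# assumption (here the first halving takes effect from epoch 16).

CONFIG = {
    "hidden_units": 200,        # encoder/decoder hidden size
    "word_dim": 100,            # word embedding dimension
    "init_range": (-0.1, 0.1),  # uniform parameter initialization
    "epochs": 30,
    "initial_lr": 0.1,
    "beam_size": 5,
}

def learning_rate(epoch, initial_lr=0.1, start_decay=15, decay_every=5):
    """Learning rate for a 1-based epoch: constant until `start_decay`,
    then halved once per `decay_every` epochs."""
    if epoch <= start_decay:
        return initial_lr
    halvings = (epoch - start_decay + decay_every - 1) // decay_every
    return initial_lr * (0.5 ** halvings)

if __name__ == "__main__":
    for epoch in range(1, CONFIG["epochs"] + 1):
        print(epoch, learning_rate(epoch, CONFIG["initial_lr"]))
```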
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-3
Seq2Act end to end semantic graph generation
type return A state next_to semantic graph Action 3: add node texas:st Which states border Texas? Sequence-to-Action Action 1: add node A translate Action 2: add type state Action 4: add edge next_to
type return A state next_to semantic graph Action 3: add node texas:st Which states border Texas? Sequence-to-Action Action 1: add node A translate Action 2: add type state Action 4: add edge next_to
[]
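The slide record above lists the four actions generated for "Which states border Texas?". To make the action semantics concrete, the following is a minimal sketch of applying such a sequence to build a semantic graph. The SemanticGraph class and the tupled argument encoding are illustrative assumptions: the paper emits separate argument actions (arg, arg1_node, arg2_node) after each main action, which are folded into tuples here for brevity.

```python
# Minimal sketch of applying a Seq2Act-style action sequence to build a semantic
# graph.  Action names follow the paper's action set; the class itself is an
# illustrative stand-in, not the authors' implementation.

class SemanticGraph:
    def __init__(self):
        self.nodes = {}   # node id -> kind ("variable" | "entity")
        self.types = {}   # node id -> type name
        self.edges = []   # (relation, arg1, arg2)

    def apply(self, action):
        op = action[0]
        if op == "add_variable":
            self.nodes[action[1]] = "variable"
        elif op == "add_entity":
            self.nodes[action[1]] = "entity"
        elif op == "add_type":        # ("add_type", type_name, constrained_node)
            self.types[action[2]] = action[1]
        elif op == "add_edge":        # ("add_edge", relation, arg1, arg2)
            self.edges.append((action[1], action[2], action[3]))
        else:
            raise ValueError(f"unknown action: {op}")
        return self

# Action sequence for "Which states border Texas?", as on the slide above.
actions = [
    ("add_variable", "A"),
    ("add_type", "state", "A"),
    ("add_entity", "texas:st"),
    ("add_edge", "next_to", "A", "texas:st"),
]

graph = SemanticGraph()
for a in actions:
    graph.apply(a)

print(graph.nodes)   # {'A': 'variable', 'texas:st': 'entity'}
print(graph.types)   # {'A': 'state'}
print(graph.edges)   # [('next_to', 'A', 'texas:st')]
```

Reversing the same bookkeeping gives the deterministic conversion between action sequences, semantic graphs, and logical forms that the paper uses to derive training targets.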
GEM-SciDuet-train-109#paper-1286#slide-4
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages of two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which is tightly coupled with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
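One detail of the RNN model mentioned in this abstract, described in the full paper text earlier in this dump, is that each action embedding is factored into a structure part (e.g. add_edge) and a semantic part (e.g. next_to), embedded independently and concatenated. Below is a minimal numpy sketch of that factored embedding; the vocabularies, dimensions, and function name are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the factored action embedding: structure part and semantic
# part are embedded separately and concatenated, so add_edge:next_to and
# add_edge:loc share a structure vector, while actions referring to the same
# node (e.g. ...:A) share a semantic vector.
import numpy as np

rng = np.random.default_rng(0)

STRUCT_VOCAB = ["add_variable", "add_entity", "add_type", "add_edge", "arg_node"]
SEM_VOCAB = ["A", "B", "state", "texas:st", "next_to", "loc"]

STRUCT_DIM, SEM_DIM = 32, 32
struct_emb = rng.uniform(-0.1, 0.1, size=(len(STRUCT_VOCAB), STRUCT_DIM))
sem_emb = rng.uniform(-0.1, 0.1, size=(len(SEM_VOCAB), SEM_DIM))

def embed_action(structure_part, semantic_part):
    """Concatenate the structure-part and semantic-part embeddings of an action."""
    s = struct_emb[STRUCT_VOCAB.index(structure_part)]
    m = sem_emb[SEM_VOCAB.index(semantic_part)]
    return np.concatenate([s, m])

vec = embed_action("add_edge", "next_to")
print(vec.shape)   # (64,)
```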
GEM-SciDuet-train-109#paper-1286#slide-4
Overview of Our Method
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st type return A state next_to KB Semantic
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st type return A state next_to KB Semantic
[]
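The slide in this record shows the knowledge base and a constraints component feeding the action generator. According to the paper, a controller checks two kinds of constraints during decoding: structure constraints (an edge needs two existing, distinct argument nodes so that the result stays a connected, valid graph) and semantic constraints (selectional preferences and type consistency taken from the KB schema, e.g. both arguments of next_to must be states in GEO). The sketch below is a minimal illustration of such a filter; the schema entries mirror the GEO examples in the paper, while the function names and the partial-graph representation are hypothetical.

```python
# Minimal sketch of decoding-time filtering of illegal add_edge actions using
# structure and semantic constraints.  Schema and helper names are illustrative.

RELATION_SCHEMA = {            # selectional preferences from the KB schema
    "next_to": ("state", "state"),
    "loc": ("city", "state"),
}

def violates_structure(edge_action, partial_nodes):
    """Structure constraint: an edge needs two existing, distinct argument nodes."""
    _, _, arg1, arg2 = edge_action
    return arg1 == arg2 or arg1 not in partial_nodes or arg2 not in partial_nodes

def violates_semantics(edge_action, node_types):
    """Semantic constraints: argument types must match the relation's schema
    and must not conflict with types already assigned to the nodes."""
    _, relation, arg1, arg2 = edge_action
    want1, want2 = RELATION_SCHEMA[relation]
    t1, t2 = node_types.get(arg1), node_types.get(arg2)
    return (t1 is not None and t1 != want1) or (t2 is not None and t2 != want2)

def filter_actions(candidates, partial_nodes, node_types):
    """Keep only candidate actions that pass both constraint checks."""
    kept = []
    for act in candidates:
        if act[0] != "add_edge":
            kept.append(act)
            continue
        if violates_structure(act, partial_nodes):
            continue
        if violates_semantics(act, node_types):
            continue
        kept.append(act)
    return kept

# Partial graph for "Which states border Texas?": variable A (state) and texas:st.
nodes = {"A", "texas:st"}
types = {"A": "state", "texas:st": "state"}
candidates = [
    ("add_edge", "next_to", "A", "texas:st"),   # legal
    ("add_edge", "next_to", "A", "A"),          # violates the structure constraint
    ("add_edge", "loc", "A", "texas:st"),       # violates selectional preference
]
print(filter_actions(candidates, nodes, types))
# [('add_edge', 'next_to', 'A', 'texas:st')]
```

In the paper this filtering is applied to every candidate action during beam decoding, so illegal expansions never enter the beam.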
GEM-SciDuet-train-109#paper-1286#slide-5
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages of two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which is tightly coupled with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
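At inference time the paper decodes the most probable action sequence with beam search (beam size 5 in the experimental settings) over the factorization P(Y|X) = prod_t P(y_t | y_<t, X). The sketch below shows a generic beam search of that shape; the score_next interface stands in for the RNN decoder step plus the constraint controller and is a hypothetical API, not the authors' code.

```python
# Minimal sketch of beam-search decoding over action sequences.  `score_next`
# is a hypothetical hook returning (action, log_prob) pairs for legal next actions.
import math

def beam_search(score_next, end_action, beam_size=5, max_len=50):
    """Return the highest-scoring action sequence ending in `end_action`."""
    beams = [([], 0.0)]                     # (action sequence, log probability)
    finished = []
    for _ in range(max_len):
        expansions = []
        for seq, logp in beams:
            for action, lp in score_next(seq):
                expansions.append((seq + [action], logp + lp))
        expansions.sort(key=lambda x: x[1], reverse=True)
        beams = []
        for seq, logp in expansions[:beam_size]:
            if seq[-1] == end_action:
                finished.append((seq, logp))
            else:
                beams.append((seq, logp))
        if not beams:
            break
    pool = finished + beams
    return max(pool, key=lambda x: x[1])[0] if pool else []

# Toy scorer that prefers a fixed short action plan, just to show the call shape.
def toy_scorer(prefix):
    plan = ["add_variable:A", "add_type:state", "<end>"]
    nxt = plan[len(prefix)] if len(prefix) < len(plan) else "<end>"
    return [(nxt, math.log(0.9)), ("<end>", math.log(0.1))]

print(beam_search(toy_scorer, "<end>", beam_size=5))
# ['add_variable:A', 'add_type:state', '<end>']
```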
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-5
Major components of Our Model
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st type return A state next_to KB Semantic
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st type return A state next_to KB Semantic
[]
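The record above describes decoding with a constraint controller that filters illegal candidate actions against the partial semantic graph: structure constraints (an edge needs two distinct argument nodes) and semantic constraints drawn from the knowledge-base schema (selectional preference, type consistency). As a hedged illustration only - the class, schema entries, and node names below are hypothetical and are not taken from the released Seq2Act code - a minimal Python sketch of such a controller could look like this:

# Toy KB schema (assumed): relation -> (expected arg1 type, expected arg2 type)
SCHEMA = {"next_to": ("state", "state")}

class Controller:
    """Tracks the partial semantic graph and rejects illegal candidate actions."""

    def __init__(self):
        self.types = {}  # node id -> type added so far (e.g. "A" -> "state")

    def add_type(self, node, node_type):
        self.types[node] = node_type

    def edge_is_legal(self, relation, arg1, arg2):
        # Structure constraint: an edge must connect two distinct argument nodes.
        if arg1 == arg2:
            return False
        # Semantic constraint: argument types must follow the KB schema,
        # whenever the node's type has already been fixed by an earlier action.
        want1, want2 = SCHEMA[relation]
        ok1 = self.types.get(arg1) in (None, want1)
        ok2 = self.types.get(arg2) in (None, want2)
        return ok1 and ok2

# Usage: filter candidate "add_edge:next_to" continuations during beam search.
ctrl = Controller()
ctrl.add_type("A", "state")
ctrl.add_type("B", "city")
print(ctrl.edge_is_legal("next_to", "A", "A"))  # False: duplicate arguments
print(ctrl.edge_is_legal("next_to", "A", "B"))  # False: B's type violates the schema
print(ctrl.edge_is_legal("next_to", "A", "C"))  # True: C's type is still open

In the full system, a check of this kind would be applied to every candidate action on the beam at each decoding step, before the model's scores are compared.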
GEM-SciDuet-train-109#paper-1286#slide-6
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-6
Major components of Our Model 1
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to KB Semantic
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to KB Semantic
[]
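The training procedure in the record above converts gold logical forms into action sequences through a semantic-graph intermediate, traversing the graph depth-first from the root variable. As a rough, hypothetical sketch (the graph encoding and action names are simplified single-level stand-ins, not the authors' data format, and the real conversion is a depth-first traversal over an arbitrary graph), the serialisation step for the running example might look like:

# Toy semantic graph for "Which states border Texas?" (assumed encoding).
graph = {
    "root": "A",
    "types": {"A": "state"},            # type nodes attached to variables
    "entities": {"B": "texas:st"},      # entity nodes with their KB identifiers
    "edges": [("next_to", "A", "B")],   # (relation, arg1, arg2)
}

def graph_to_actions(g):
    """Serialise a flat, single-edge toy semantic graph into an action sequence."""
    actions = ["add_variable:" + g["root"]]
    for node, node_type in g["types"].items():
        actions += ["add_type:" + node_type, "arg_node:" + node]
    for node, entity in g["entities"].items():
        actions.append("add_entity:" + entity)
    for relation, arg1, arg2 in g["edges"]:
        actions += ["add_edge:" + relation,
                    "arg1_node:" + arg1, "arg2_node:" + arg2]
    return actions

print(graph_to_actions(graph))
# ['add_variable:A', 'add_type:state', 'arg_node:A', 'add_entity:texas:st',
#  'add_edge:next_to', 'arg1_node:A', 'arg2_node:B']

Because the traversal order is fixed, the same procedure can be run in reverse to map a predicted action sequence back to a semantic graph and then to a logical form, as the record states.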
GEM-SciDuet-train-109#paper-1286#slide-7
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to address the lack of training data, we want to design a weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "In addition, we want to collect labeled data by designing an interactive UI for annotation assistance, as in Yih et al. (2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graphs are more natural and can be easily annotated without the need for expert knowledge." ] }
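A minimal Python sketch of the action-sequence encoding for the running example "Which states border Texas?" and of the chain-rule factorisation in Equation (1). The action names and the exact gold order are assumptions reconstructed from the action definitions above, and sequence_log_prob assumes per-step probability distributions produced by some decoder; none of this is taken from the paper's released code.

```python
from dataclasses import dataclass
from math import log

@dataclass(frozen=True)
class Action:
    structure: str   # structure part of the action, e.g. "add_edge"
    semantic: str    # semantic part of the action, e.g. "next_to"

    def __str__(self) -> str:
        return f"{self.structure}:{self.semantic}"

# Hypothetical gold sequence for "Which states border Texas?", reconstructed
# from the six action types described above.
gold_actions = [
    Action("add_variable", "A"),
    Action("add_type", "state"), Action("arg", "A"),
    Action("add_entity", "texas"),
    Action("add_edge", "next_to"), Action("arg1_node", "A"), Action("arg2_node", "texas"),
]

def sequence_log_prob(step_distributions, actions):
    """log P(Y|X) = sum_t log P(y_t | y_<t, X), i.e. Equation (1).

    step_distributions[t] is assumed to map action strings to probabilities
    produced by the decoder at step t, conditioned on y_<t and X.
    """
    return sum(log(step_distributions[t][str(a)]) for t, a in enumerate(actions))
```

Replaying such a sequence deterministically rebuilds the semantic graph, and hence the logical form answer(A, (state(A), next_to(A, stateid(texas)))), as described for the logical form/action sequence conversion above.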
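A numpy sketch of a single attention step of the decoder, following Equations (4)-(7) above. The matrix shapes and the names W_a and U are assumptions made for illustration; the paper's actual Theano implementation may organise these computations differently.

```python
import numpy as np

def attention_step(s_j, B, W_a, U):
    """One decoder step.

    s_j: decoder hidden state, shape (d,)
    B:   encoder states b_1..b_m stacked as an (m, 2d) matrix
    W_a: attention matrix, shape (d, 2d)
    U:   output matrix over actions, shape (n_actions, 3d)
    """
    e = B @ (W_a.T @ s_j)            # e_ji = s_j^T W_a b_i           (Eq. 4)
    a = np.exp(e - e.max())
    a = a / a.sum()                  # normalised attention scores     (Eq. 5)
    c = a @ B                        # context vector c_j              (Eq. 6)
    logits = U @ np.concatenate([s_j, c])
    p = np.exp(logits - logits.max())
    return p / p.sum(), a            # P(y_j = w | x, y_<j)            (Eq. 7)
```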
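A sketch of how the controller could filter candidate edge actions during decoding (Section 3.3), combining the structure constraint (an edge must connect two distinct, already-added nodes) with the semantic selectional-preference constraint from the knowledge-base schema. The GEO-style schema fragment and candidate list below are hypothetical examples.

```python
def legal_edge_actions(nodes, candidates, schema):
    """nodes: {node_name: type} built from the actions generated so far;
    candidates: iterable of (relation, arg1, arg2) edge proposals;
    schema: {relation: (arg1_type, arg2_type)} extracted from the KB."""
    legal = []
    for rel, a1, a2 in candidates:
        # structure constraint: connect two distinct, existing nodes
        if a1 not in nodes or a2 not in nodes or a1 == a2:
            continue
        # semantic constraint: selectional preference from the KB schema
        if schema.get(rel) != (nodes[a1], nodes[a2]):
            continue
        legal.append((rel, a1, a2))
    return legal

nodes = {"A": "state", "texas": "state"}
schema = {"next_to": ("state", "state"), "loc": ("city", "state")}
proposals = [("next_to", "A", "texas"), ("next_to", "A", "A"), ("loc", "A", "texas")]
print(legal_edge_actions(nodes, proposals, schema))  # [('next_to', 'A', 'texas')]
```

In beam search, such a filter would be applied to each candidate continuation before scores are renormalised; type-conflict checks could be added in the same style.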
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-7
Major components of Our Model 2
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to KB Semantic
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to KB Semantic
[]
GEM-SciDuet-train-109#paper-1286#slide-8
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages of two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and competitive performance on the GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-8
Major components of Our Model 3
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to KB Semantic
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to KB Semantic
[]
GEM-SciDuet-train-109#paper-1286#slide-9
1286
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
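The paper content above describes decoding with a controller that maintains the partial semantic graph and filters candidate actions that violate structure constraints (edges must join two different, already existing nodes so the result stays a connected acyclic graph) or semantic constraints (argument types and type consistency taken from the knowledge base schema). Below is a minimal Python sketch of that idea under stated assumptions: the action tuple format, the tiny GEO-style SCHEMA, and the score_fn hook are illustrative placeholders, not the authors' released Theano implementation.

from dataclasses import dataclass, field

# Relation -> (arg1 type, arg2 type); a tiny GEO-style schema (assumption).
SCHEMA = {"next_to": ("state", "state"), "loc": ("city", "state")}

@dataclass
class PartialGraph:
    nodes: dict = field(default_factory=dict)   # node id -> type name or None
    edges: list = field(default_factory=list)   # (relation, arg1, arg2)

    def structure_ok(self, action):
        # Structure constraint: an edge must join two different, existing nodes.
        if action[0] == "add_edge":
            _, rel, a1, a2 = action
            return a1 in self.nodes and a2 in self.nodes and a1 != a2
        return True

    def semantic_ok(self, action):
        # Semantic constraints: selectional preference from the schema, and
        # type consistency (a node cannot be both city and state).
        if action[0] == "add_edge":
            _, rel, a1, a2 = action
            t1, t2 = SCHEMA[rel]                # candidates assumed to use schema relations
            return self.nodes[a1] in (None, t1) and self.nodes[a2] in (None, t2)
        if action[0] == "add_type":
            _, type_name, node = action
            return self.nodes.get(node, type_name) in (None, type_name)
        return True

    def apply(self, action):
        kind = action[0]
        if kind == "add_variable":
            self.nodes[action[1]] = None
        elif kind == "add_entity":
            self.nodes[action[1]] = action[2]   # entity id, entity type
        elif kind == "add_type":
            self.nodes[action[2]] = action[1]   # constrain the node's type
        elif kind == "add_edge":
            self.edges.append(action[1:])

def decode(candidates, score_fn, max_len=30):
    # Greedy constrained decoding; the paper's beam search (beam size 5) keeps
    # the top-k legal continuations at each step instead of only the best one.
    graph, history = PartialGraph(), []
    for _ in range(max_len):
        legal = [a for a in candidates
                 if graph.structure_ok(a) and graph.semantic_ok(a)]
        if not legal:
            break
        best = max(legal, key=lambda a: score_fn(a, history))
        if best[0] == "stop":
            break
        graph.apply(best)
        history.append(best)
    return history, graph

In this sketch score_fn stands in for the RNN decoder's P(y_t | y_<t, X); the controller only removes illegal continuations, so the factorization of P(Y|X) over time steps is unchanged.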
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
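The training section of the paper content above states that labeled logical forms are converted to action sequences by first building a semantic graph and then emitting actions along a depth-first traversal from the root. The sketch below illustrates such a traversal for the running example; the graph encoding (dicts of typed nodes plus relation triples) and the helper names are assumptions for illustration, since the content only specifies that a DFS from the root is used.

def graph_to_actions(root, nodes, edges):
    # nodes: node id -> {"kind": "variable" | "entity", "type": type name or None}
    # edges: list of (relation, arg1, arg2) triples
    actions, visited, emitted = [], set(), set()

    def emit_node(n):
        info = nodes[n]
        if info["kind"] == "variable":
            actions.append(f"add_variable:{n}")
        else:
            actions.append(f"add_entity:{n}")
        if info.get("type"):
            actions.append(f"add_type:{info['type']}")
            actions.append(f"arg:{n}")            # which node the type constrains
        visited.add(n)

    def dfs(n):
        for rel, a1, a2 in edges:
            if n not in (a1, a2) or (rel, a1, a2) in emitted:
                continue
            other = a2 if a1 == n else a1
            if other not in visited:
                emit_node(other)
            actions.append(f"add_edge:{rel}")
            actions.append(f"arg1_node:{a1}")
            actions.append(f"arg2_node:{a2}")
            emitted.add((rel, a1, a2))
            dfs(other)

    emit_node(root)
    dfs(root)
    return actions

# "Which states border Texas?" -> the action sequence of the running example
print(graph_to_actions(
    root="A",
    nodes={"A": {"kind": "variable", "type": "state"},
           "texas": {"kind": "entity", "type": None}},
    edges=[("next_to", "A", "texas")],
))

Because the traversal order is fixed, the mapping is deterministic in both directions, which matches the statement above that action sequences and logical forms can be interconverted.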
GEM-SciDuet-train-109#paper-1286#slide-9
Action Set
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to Define atom actions involved in semantic graph construction Node: A (variable), texas:st (entity), state (type) Sentence: Which river runs through the most states? Add entity node traverse most E.g., texas:st type type Add type node river state E.g., state Action Sequence: Add edge add_operation most E.g., next_to add_variable A Operation action add_variable B E.g., argmax, argmin, count add_type state B add_edge traverse A, B Argument action end_operation most A, B return A For type node, edge and operation
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state next_to Define atom actions involved in semantic graph construction Node: A (variable), texas:st (entity), state (type) Sentence: Which river runs through the most states? Add entity node traverse most E.g., texas:st type type Add type node river state E.g., state Action Sequence: Add edge add_operation most E.g., next_to add_variable A Operation action add_variable B E.g., argmax, argmin, count add_type state B add_edge traverse A, B Argument action end_operation most A, B return A For type node, edge and operation
[]
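The "Action Set" slide record above lists the action sequence for the sentence "Which river runs through the most states?". A plausible in-memory layout for such a sequence is sketched below, with each action factored into a structure part, a semantic part, and argument nodes, mirroring the factored action embedding described in the paper content; the tuple format is an assumption for illustration, not the dataset's internal representation.

# Each action = (structure part, semantic part, argument nodes). The 'most'
# operation opens a scope that the matching end_operation closes.
ACTIONS = [
    ("add_operation", "most",     []),
    ("add_variable",  "A",        []),           # the river being asked for
    ("add_variable",  "B",        []),           # the states it runs through
    ("add_type",      "state",    ["B"]),
    ("add_edge",      "traverse", ["A", "B"]),
    ("end_operation", "most",     ["A", "B"]),   # the operation's arguments as shown on the slide
    ("return",        "A",        []),           # A is the answer node
    # The slide's graph also types A as a river; the position of that
    # add_type action is not visible in the extracted slide text.
]

# Structure and semantic parts recur across actions, so they can be embedded
# separately and concatenated into the final action embedding.
structure_vocab = sorted({a[0] for a in ACTIONS})
semantic_vocab  = sorted({a[1] for a in ACTIONS})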
GEM-SciDuet-train-109#paper-1286#slide-10
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.

{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
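The paper content above specifies the attention-based decoder through equations (3)-(8). The NumPy fragment below sketches one attention step (unnormalized scores, softmax attention, context vector, and the distribution over actions); the dimensions and randomly initialized parameters are placeholders rather than trained values, and the initialization of equation (3) and the LSTM state update of equation (8) are omitted.

import numpy as np

rng = np.random.default_rng(0)
m, hid, n_actions = 6, 8, 5                    # input length, hidden size, toy action vocabulary
B   = rng.normal(size=(m, 2 * hid))            # encoder outputs b_i = [h_i^F, h_i^B]
s_j = rng.normal(size=(2 * hid,))              # current decoder hidden state
W_a = rng.normal(size=(2 * hid, 2 * hid))      # attention weights W^(a)
U   = rng.normal(size=(n_actions, 4 * hid))    # output projection over [s_j, c_j]

e = B @ W_a.T @ s_j                            # e_ji = s_j^T W^(a) b_i      (eq. 4)
a = np.exp(e - e.max()); a /= a.sum()          # normalized attention a_ji   (eq. 5)
c_j = a @ B                                    # context vector c_j          (eq. 6)
logits = U @ np.concatenate([s_j, c_j])
p = np.exp(logits - logits.max()); p /= p.sum()  # P(y_j = w | x, y_{<j})    (eq. 7)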
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-10
Encoder Decoder Model
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state Typical encoder-decoder model (bi-LSTM with attention)
Sentence Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state Typical encoder-decoder model (bi-LSTM with attention)
[]
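The "Encoder Decoder Model" slide record above and the surrounding paper content describe a bidirectional RNN encoder whose context-sensitive embedding at each position is b_i = [h_i^F, h_i^B]. The sketch below substitutes a plain tanh recurrence for the LSTM cell used in the paper and uses small random placeholder weights, only to show how the forward and backward states are concatenated.

import numpy as np

rng = np.random.default_rng(1)
words = "which states border texas ?".split()
emb_dim, hid = 10, 8
E   = {w: rng.normal(size=emb_dim) for w in words}   # toy word embeddings
W_x = 0.1 * rng.normal(size=(hid, emb_dim))
W_h = 0.1 * rng.normal(size=(hid, hid))

def run(seq):
    # h_i = f(x_i, h_{i-1}); a tanh cell stands in for the paper's LSTM
    h, states = np.zeros(hid), []
    for w in seq:
        h = np.tanh(W_x @ E[w] + W_h @ h)
        states.append(h)
    return states

forward  = run(words)
backward = run(words[::-1])[::-1]
# context-sensitive embedding of position i: b_i = [h_i^F, h_i^B]
b = [np.concatenate([f, r]) for f, r in zip(forward, backward)]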
GEM-SciDuet-train-109#paper-1286#slide-11
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and gets competitive performance on the GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequence-to-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "Figure 2 caption: An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "The conditional probability P (Y |X) used in our model is decomposed as follows: $P(Y|X) = \prod_{t=1}^{|Y|} P(y_t \mid y_{<t}, X)$ (1), where $y_{<t} = y_1, ..., y_{t-1}$.", "To achieve the above goal, we need: 1) an action set which can encode the semantic graph generation process; 2) an encoder which encodes the natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In the following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of action denotes adding a variable node to the semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but it can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of action denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of action denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of action denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of action is represented as add edge:next to.", "Operation Action: This kind of action denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, etc.", "Because each operation has a scope, we define two actions for an operation: one is the operation start action, represented as start operation:most, and the other is the operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some of the above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after their main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For the add edge action, we use two argument actions: arg1 node and arg2 node, represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with the knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical
form (See Section 4.4 for more details).", "Figure 3 caption: Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping a sentence to an action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016), this paper employs the attention-based sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of context-sensitive vectors b 1 , ..., b m using a bidirectional RNN.", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m is generated by recurrently applying the recurrence $h_i = \mathrm{LSTM}(\phi^{(x)}(x_i), h_{i-1})$ (2).", "The recurrence takes the form of an LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as $b_i = [h^F_i, h^B_i]$.", "Decoder: This paper uses the classical attention-based decoder, which generates the action sequence y 1 , ..., y n one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: $s_1 = \tanh(W^{(s)}[h^F_m, h^B_1])$ (3); $e_{ji} = s_j^{T} W^{(a)} b_i$ (4); $a_{ji} = \frac{\exp(e_{ji})}{\sum_{i'=1}^{m}\exp(e_{ji'})}$ (5); $c_j = \sum_{i=1}^{m} a_{ji} b_i$ (6); $P(y_j = w \mid x, y_{1:j-1}) \propto \exp(U_w[s_j, c_j])$ (7); $s_{j+1} = \mathrm{LSTM}([\phi^{(y)}(y_j), c_j], s_j)$ (8), where the normalized attention score a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j, and e ji is the un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added; its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantics (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make the parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For instance, $\phi^{(y)}(\textit{add edge:next to}) = [\phi^{(y)}_{struct}(\textit{add edge}), \phi^{(y)}_{sem}(\textit{next to})]$.", "3 Constrained Semantic Parsing using Sequence-to-Action Model In this section, we describe how to build a neural semantic parser using the sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include the RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is $\sum_{i=1}^{n} \log P(Y_i \mid X_i)$ (9).", "A standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-11
Action Embedding
Action Action Embedding Embedding Structure part Semantic part
Action Action Embedding Embedding Structure part Semantic part
[]
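The "Action Embedding" slide above corresponds to the factored action embedding of Section 2.2: the structure part and the semantic part of an action are embedded separately and then concatenated. A minimal sketch follows, assuming tiny hand-made vocabularies and randomly initialized lookup tables; every name and dimension here is an illustrative assumption rather than the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
D_STRUCT, D_SEM = 50, 50

struct_vocab = {"add_variable": 0, "add_entity": 1, "add_type": 2, "add_edge": 3, "arg_node": 4}
sem_vocab    = {"A": 0, "state": 1, "texas": 2, "next_to": 3, "loc": 4}

phi_struct = rng.normal(size=(len(struct_vocab), D_STRUCT))  # structure-part lookup table
phi_sem    = rng.normal(size=(len(sem_vocab), D_SEM))        # semantic-part lookup table

def action_embedding(action):
    """Embed an action such as 'add_edge:next_to' as [phi_struct ; phi_sem]."""
    struct_part, sem_part = action.split(":")
    return np.concatenate([phi_struct[struct_vocab[struct_part]],
                           phi_sem[sem_vocab[sem_part]]])

# Actions sharing a structure part reuse the same structure-part vector:
v1 = action_embedding("add_edge:next_to")
v2 = action_embedding("add_edge:loc")
assert np.allclose(v1[:D_STRUCT], v2[:D_STRUCT])
```

The point of this factorization is parameter sharing: add_edge:next_to and add_edge:loc reuse one structure vector, while add_variable:A and arg_node:A reuse one semantic vector, which keeps the action-embedding table compact.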
GEM-SciDuet-train-109#paper-1286#slide-12
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach -Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-12
Structure and Semantic Constraints
Sentence: Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state Ensure action sequence will form a connected acyclic graph Ensure the constructed graph must follow the schema of knowledge bases Partial Semantic Graph: type Structure Semantic Arg Validity add_variable A Generated add_type state A Actions add_entity texas:st add_type city texas:st Action 1: violate type conflict Candidate add_edge loc A, texas:st Action 2: violate selectional preference constraint Next add_edge next_to A, A Action add_edge next_to A, texas:st Action 3: structure constraint
Sentence: Which states border Texas? RNN Model arg_node: A Constraints Generate add_entity: texas:st add_edge: next_to Action set type return A state Ensure action sequence will form a connected acyclic graph Ensure the constructed graph must follow the schema of knowledge bases Partial Semantic Graph: type Structure Semantic Arg Validity add_variable A Generated add_type state A Actions add_entity texas:st add_type city texas:st Action 1: violate type conflict Candidate add_edge loc A, texas:st Action 2: violate selectional preference constraint Next add_edge next_to A, A Action add_edge next_to A, texas:st Action 3: structure constraint
[]
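The "Structure and Semantic Constraints" slide above walks through the decoding controller of Section 3.3 on the running example. Below is a minimal sketch of such a controller, assuming a hand-written fragment of the GEO schema and simplified graph bookkeeping; the dictionaries, method names, and the way the entity's type is supplied are illustrative assumptions, not the authors' actual rule set.

```python
# Selectional-preference schema fragment: relation -> (arg1 type, arg2 type)
SCHEMA = {"next_to": ("state", "state"), "loc": ("city", "state")}

class Controller:
    def __init__(self):
        self.node_type = {}                     # node id -> type fixed so far

    def apply(self, action, *args):
        """Record an accepted action so the partial graph stays up to date."""
        if action == "add_type":
            type_name, node = args
            self.node_type[node] = type_name
        elif action in ("add_variable", "add_entity"):
            node = args[0]
            node_type = args[1] if len(args) > 1 else None
            self.node_type.setdefault(node, node_type)

    def allowed(self, action, *args):
        """Check a candidate action against structure and semantic constraints."""
        if action == "add_type":                                # type-conflict constraint
            type_name, node = args
            current = self.node_type.get(node)
            return current is None or current == type_name
        if action == "add_edge":
            rel, a1, a2 = args
            if a1 == a2:                                        # structure: two distinct nodes
                return False
            want1, want2 = SCHEMA[rel]                          # selectional preference
            return (self.node_type.get(a1) in (None, want1) and
                    self.node_type.get(a2) in (None, want2))
        return True

ctrl = Controller()
ctrl.apply("add_variable", "A")
ctrl.apply("add_type", "state", "A")
ctrl.apply("add_entity", "texas:st", "state")   # entity type assumed known from the KB

assert not ctrl.allowed("add_type", "city", "texas:st")      # violates type conflict
assert not ctrl.allowed("add_edge", "loc", "A", "texas:st")  # violates selectional preference
assert not ctrl.allowed("add_edge", "next_to", "A", "A")     # violates structure constraint
assert ctrl.allowed("add_edge", "next_to", "A", "texas:st")  # the valid continuation
```

During beam search, candidate actions for which allowed(...) returns False would simply be dropped at that step, which is how illegal partial graphs are filtered out.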
GEM-SciDuet-train-109#paper-1286#slide-13
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach -Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-13
Experiments
We generate the action sequences from logical forms automatically. what is the population of illinois ?
We generate the action sequences from logical forms automatically. what is the population of illinois ?
[]
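The record above (and the constraint discussion inside its paper content) describes a decoding-time controller that keeps a partial semantic graph and filters illegal actions with structure constraints (for example, an edge needs two distinct, already-introduced argument nodes) and semantic constraints (selectional preference from the knowledge-base schema, and type consistency per node). The sketch below is one hedged way such a controller could work; the tiny GEO-style schema and the bookkeeping details are assumptions, not the released implementation.

```python
# Assumed toy schema: relation -> (arg1 type, arg2 type), in the spirit of GEO.
SCHEMA = {"next_to": ("state", "state"), "loc": ("city", "state")}

class Controller:
    def __init__(self):
        self.node_types = {}  # node id -> declared type (semantic-constraint state)
        self.edges = []       # (relation, arg1, arg2) accepted so far

    def add_node(self, node, node_type=None):
        known = self.node_types.get(node)
        if known is not None and node_type is not None and known != node_type:
            raise ValueError(f"type conflict on node {node}")  # type-consistency constraint
        if known is None:
            self.node_types[node] = node_type

    def edge_is_legal(self, relation, arg1, arg2):
        if relation not in SCHEMA:
            return False
        # structure constraints: two distinct argument nodes that already exist in the graph
        if arg1 == arg2 or arg1 not in self.node_types or arg2 not in self.node_types:
            return False
        # semantic constraint: selectional preference taken from the KB schema
        for node, wanted in zip((arg1, arg2), SCHEMA[relation]):
            declared = self.node_types[node]
            if declared is not None and declared != wanted:
                return False
        return True

    def add_edge(self, relation, arg1, arg2):
        ok = self.edge_is_legal(relation, arg1, arg2)
        if ok:
            self.edges.append((relation, arg1, arg2))
        return ok

ctrl = Controller()
ctrl.add_node("A", "state")
ctrl.add_node("texas", "state")
print(ctrl.edge_is_legal("next_to", "A", "texas"))  # True: all checks pass
print(ctrl.edge_is_legal("next_to", "A", "A"))      # False: identical argument nodes
```

During beam search, candidate actions that fail a check like edge_is_legal would simply be pruned from the beam, which is how the described Seq2Act (+C1) and (+C1+C2) variants restrict decoding.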
GEM-SciDuet-train-109#paper-1286#slide-14
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-14
Baselines
Zettlemoyer and Collins, 2005
Zettlemoyer and Collins, 2005
[]
GEM-SciDuet-train-109#paper-1286#slide-15
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages of two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which is tightly coupled with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and competitive performance on the GEO and ATIS datasets.
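To make the action-sequence representation described in the abstract above concrete, here is a minimal Python sketch of how the running example "Which states border Texas?" could be encoded. Only the prefix of the sequence (add variable:A, add type:state, ...) is quoted in this record's paper content; the remaining actions, the `Action` tuple, and the folding of the paper's separate arg1/arg2 argument actions into a single `arguments` field are illustrative simplifications, not the exact encoding from the paper's Figure 2.

```python
from collections import namedtuple

# An action has a structural part (what to add) and a semantic part (which
# symbol), mirroring the structure/semantic split used for action embeddings.
Action = namedtuple("Action", ["structure", "semantic", "arguments"])

# Plausible action sequence for "Which states border Texas?"
# (the prefix matches the example quoted in the record; the rest is reconstructed)
which_states_border_texas = [
    Action("add_variable", "A", []),                # return variable node
    Action("add_type", "state", ["A"]),             # A must be of type state
    Action("add_entity", "texas", []),              # entity node from the KB
    Action("add_edge", "next_to", ["A", "texas"]),  # relation between the two nodes
]

# The logical form answer(A, (state(A), next_to(A, stateid(texas)))) can be
# recovered deterministically from such a sequence, and vice versa.
```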
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to address the lack of training data, we want to design a weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI to assist annotation, as in (Yih et al., 2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graphs are more natural and can be easily annotated without the need of expert knowledge." ] }
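The decoder equations (3)-(8) quoted in the paper content above are garbled by PDF extraction, so the following NumPy sketch restates one attention-based decoding step in code. It is a minimal illustration under stated assumptions, not the authors' Theano implementation: `lstm_cell` is a stand-in for the LSTM recurrence, the shapes and variable names are illustrative, and greedy selection is used here whereas the paper decodes with beam search (beam size 5).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_decoder_step(s_j, B, W_a, U, action_emb, lstm_cell):
    """One decoding step of the attention-based decoder (Eqs. 3-8 in the record).

    s_j        : current decoder hidden state, shape (d,)
    B          : encoder context-sensitive embeddings b_1..b_m, shape (m, 2d)
    W_a        : bilinear attention matrix, shape (d, 2d)
    U          : output projection over the action vocabulary, shape (|V|, 3d)
    action_emb : lookup function from an action id to its embedding vector
    lstm_cell  : function (input, prev_state) -> next_state (LSTM stand-in)
    """
    # e_ji = s_j^T W_a b_i  and  a_ji = softmax_i(e_ji)      (Eqs. 4-5)
    e_j = B @ (W_a.T @ s_j)           # shape (m,)
    a_j = softmax(e_j)

    # c_j = sum_i a_ji * b_i                                  (Eq. 6)
    c_j = a_j @ B                     # shape (2d,)

    # P(y_j | x, y_<j) proportional to exp(U [s_j; c_j])      (Eq. 7)
    p_j = softmax(U @ np.concatenate([s_j, c_j]))

    y_j = int(p_j.argmax())           # greedy choice; the paper uses beam search

    # s_{j+1} = LSTM([phi(y_j); c_j], s_j)                    (Eq. 8)
    s_next = lstm_cell(np.concatenate([action_emb(y_j), c_j]), s_j)
    return y_j, p_j, s_next
```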
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-15
Competitive performance on three datasets
ATIS [Rabinovich et al., [Rabinovich et al., Need to design SOTA without extra SOTA Our full model resources GEO [Liang et al., [Zhao et al.,
ATIS [Rabinovich et al., [Rabinovich et al., Need to design SOTA without extra SOTA Our full model resources GEO [Liang et al., [Zhao et al.,
[]
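For training, the record above explains that logical forms are converted to action sequences by first building a semantic graph and then walking it depth-first from the root. A rough Python sketch of the graph-to-actions step is given below; the graph encoding and function name are illustrative assumptions (the logical-form-to-graph conversion and operation actions such as count or argmax are not covered), not the released Seq2Act code.

```python
def graph_to_actions(root, graph):
    """Emit an action sequence by a depth-first walk of a semantic graph.

    graph maps a node id to (node_kind, type_name, [(relation, child_id), ...]),
    where node_kind is "variable" or "entity". Only plain nodes and edges are
    handled here; operations would need start/end operation actions as well.
    """
    actions, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        kind, type_name, children = graph[node]
        actions.append(("add_variable" if kind == "variable" else "add_entity",
                        node, []))
        if type_name is not None:
            actions.append(("add_type", type_name, [node]))
        for relation, child in children:
            visit(child)
            actions.append(("add_edge", relation, [node, child]))

    visit(root)
    return actions

# Example: a tiny graph for "Which states border Texas?"
GRAPH = {
    "A":     ("variable", "state", [("next_to", "texas")]),
    "texas": ("entity",   None,    []),
}
print(graph_to_actions("A", GRAPH))
```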
GEM-SciDuet-train-109#paper-1286#slide-16
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach, Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages of two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which is tightly coupled with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose an RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on the OVERNIGHT dataset and competitive performance on the GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design a weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assistance like (Yih et al., 2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graphs are more natural and can be easily annotated without the need of expert knowledge." ] }
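The constrained decoding discussed in this row's paper text can be pictured with a short sketch: a controller keeps the partial semantic graph built so far and drops any candidate action that violates a structure rule (an edge must connect two distinct, already-added nodes) or a semantic rule (relation arguments must match the knowledge base schema, and a node's type must stay consistent). The snippet below is a minimal illustration under those assumptions, not the authors' implementation; the Action and PartialGraph classes and the ARG_TYPES table are hypothetical stand-ins for the rule sets the paper extracts from knowledge base schemas.

from dataclasses import dataclass, field

ARG_TYPES = {"next_to": ("state", "state")}   # hypothetical GEO-style selectional preference

@dataclass
class Action:
    kind: str          # e.g. "add_edge", "add_type"
    label: str         # e.g. "next_to", "state"
    args: tuple = ()   # node identifiers the action refers to

@dataclass
class PartialGraph:
    nodes: set = field(default_factory=set)
    node_type: dict = field(default_factory=dict)   # node id -> type, once known

    def type_ok(self, node, expected):
        # an unknown type is compatible; a conflicting known type is not
        return self.node_type.get(node, expected) == expected

def violates_structure(action, graph):
    # structure constraint: an edge must connect two distinct, existing nodes
    if action.kind == "add_edge":
        a1, a2 = action.args
        return a1 == a2 or a1 not in graph.nodes or a2 not in graph.nodes
    return False

def violates_semantics(action, graph):
    # semantic constraint: relation arguments must follow the KB schema types
    if action.kind == "add_edge" and action.label in ARG_TYPES:
        t1, t2 = ARG_TYPES[action.label]
        a1, a2 = action.args
        return not (graph.type_ok(a1, t1) and graph.type_ok(a2, t2))
    return False

def legal_actions(candidates, graph):
    return [a for a in candidates
            if not violates_structure(a, graph) and not violates_semantics(a, graph)]

In beam search, applying a filter like legal_actions to the candidate set at every step is what removes illegal actions before their probabilities are renormalized.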
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-16
Seq2Act outperforms Seq2Seq
Seq2Seq Seq2Act SOTA without extra resources
Seq2Seq Seq2Act SOTA without extra resources
[]
GEM-SciDuet-train-109#paper-1286#slide-19
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design a weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assistance like (Yih et al., 2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graphs are more natural and can be easily annotated without the need of expert knowledge." ] }
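As a concrete illustration of the action encoding defined in this row's paper text, the running example "Which states border Texas?" (logical form answer(A, (state(A), next to(A, stateid(texas))))) can be produced by a short action sequence. The paper only spells out the first actions (add variable:A, add type:state, ...), so the full sequence below and the small replay helper are a reconstruction following the action definitions; the tuple encoding and names are illustrative, not the authors' format.

actions = [
    ("add_variable", "A"),                 # the return variable
    ("add_type", "state"), ("arg", "A"),   # A is constrained to be a state
    ("add_entity", "texas"),
    ("add_edge", "next_to"),
    ("arg1_node", "A"), ("arg2_node", "texas"),
]

def replay(actions):
    # Rebuild a tiny node/type/edge view of the semantic graph from the action sequence.
    nodes, types, edges = set(), {}, []
    pending_type, pending_edge = None, None
    for kind, label in actions:
        if kind in ("add_variable", "add_entity"):
            nodes.add(label)
        elif kind == "add_type":
            pending_type = label
        elif kind == "arg":
            types[label] = pending_type
        elif kind == "add_edge":
            pending_edge = {"rel": label}
        elif kind == "arg1_node":
            pending_edge["arg1"] = label
        elif kind == "arg2_node":
            pending_edge["arg2"] = label
            edges.append(pending_edge)
    return nodes, types, edges

# replay(actions) gives roughly:
# ({'A', 'texas'}, {'A': 'state'}, [{'rel': 'next_to', 'arg1': 'A', 'arg2': 'texas'}])

The paper's training-data preparation goes the other way: logical forms are first converted to semantic graphs, which are then serialized into such action sequences by a depth-first traversal from the root.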
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-19
Average Length of Logical Forms and Action Sequences
Average len of logical forms Average len of action sequences
Average len of logical forms Average len of action sequences
[]
GEM-SciDuet-train-109#paper-1286#slide-20
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach - Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets the Seq2Act model outperforms the Seq2Seq baselines; on OVERNIGHT, for example, it gets a test accuracy of 78.0, better than the 77.5 of the best Seq2Seq baseline.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of the graph built from the generated action sequence.", "On all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of the illegal actions is filtered out during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantically illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016), which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016).", "To compare the two mechanisms, we train and test with our full model; the results are shown in Table 3.", "We can see that the Replacing mechanism outperforms Copying on all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs an additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of the linearized logical forms used in previous Seq2Seq models and of the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequences are shorter on all three datasets, with 35.5%, 9.2% and 28.5% reductions in length respectively.", "The main advantage of a shorter, more compact encoding is that it reduces the influence of the long-distance dependency problem.", "Error Analysis We perform error analysis on the results and find that there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen and informal structure, where the entity word \"Iowa\" and the relation word \"borders\" appear ahead of the question words \"how many\".", "[Table 5: Some examples for error analysis; each example includes the sentence for parsing, with gold parse and predicted parse from our model. Unseen/informal structure -Sentence: Iowa borders how many states?; Gold parse: answer(A, count(B, (const(C, stateid(iowa)), next to(C, B), state(B)), A)); Predicted parse: answer(A, count(B, state(B), A)). Under-mapping -Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am; Gold parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))); Predicted parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))).]", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, which causes some words to be ignored during parsing.", "For example, in the second case in Table 5, \"first class\" is ignored during the decoding process.", "This problem could be further addressed using the explicit word coverage models employed in neural machine translation (Tu et al., 2016; Cohn et al., 2016).", "Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015).", "Traditional methods are mostly based on the principle of compositional semantics, which first triggers predicates using lexicons and then composes them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015), CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013), DCS (Liang et al., 2011; Berant et al., 2013), etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graphs as the representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graphs by transforming syntactic trees.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al. (2014, 2016) use a Freebase-based semantic graph representation and convert sentences to semantic graphs using CCG or dependency trees.", "Yih et al. (2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually designed, heuristic generation processes, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014, 2016) and structure mismatch (Chen et al., 2016), and are hard to apply to complex sentences (Yih et al., 2015).", "Another direction is to employ neural Seq2Seq models, which model semantic parsing as an end-to-end, sentence-to-logical-form machine translation problem.", "Dong and Lapata (2016), Jia and Liang (2016) and Xiao et al. (2016) transform word sequences into linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al. (2017) use type constraints to filter illegal tokens.", "Liang et al. (2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al. (2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a).", "In semantic parsing, our method is tightly coupled with knowledge bases, and constraints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition-based methods and may also be applicable to other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieves significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can easily be incorporated in decoding to enhance semantic parsing.", "For future work, to address the lack of training data, we want to design a weakly supervised learning algorithm that uses denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assistance, as in Yih et al. (2016), which uses semantic graphs to annotate the meaning of sentences, since semantic graphs are more natural and can easily be annotated without the need for expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-20
Error Analysis
Iowa borders how many states? (Formal Form: How many states Please show me first class flights from indianapolis to memphis one way leaving before 10am
Iowa borders how many states? (Formal Form: How many states Please show me first class flights from indianapolis to memphis one way leaving before 10am
[]
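The Seq2Act (+C1) and Seq2Act (+C1+C2) settings compared in the overall results above rely on a controller that filters illegal actions during decoding, based on the partial semantic graph built so far (C1: structure constraints such as an edge requiring two distinct existing nodes; C2: selectional preference and type consistency from the knowledge base schema). The Python sketch below illustrates that filtering logic under simplified, assumed data structures; the classes, dictionaries, and schema entries are hypothetical and not the released implementation.

# Illustrative sketch of constraint-based action filtering (C1: structure,
# C2: selectional preference / type consistency). Data structures are
# simplified assumptions, not the authors' released code.
from dataclasses import dataclass, field

@dataclass
class PartialGraph:
    node_types: dict = field(default_factory=dict)   # node id -> type (e.g. "state")
    edges: list = field(default_factory=list)        # (relation, arg1, arg2) triples

# assumed knowledge-base schema: relation -> (arg1 type, arg2 type)
SCHEMA = {"next_to": ("state", "state"), "loc": ("city", "state")}

def violates_structure(action, graph: PartialGraph) -> bool:
    """C1: an edge needs two existing, distinct argument nodes."""
    if action["op"] == "add_edge":
        a1, a2 = action["arg1"], action["arg2"]
        return a1 == a2 or a1 not in graph.node_types or a2 not in graph.node_types
    return False

def violates_semantics(action, graph: PartialGraph) -> bool:
    """C2: argument types must match the schema and stay consistent."""
    if action["op"] == "add_edge":
        want1, want2 = SCHEMA.get(action["relation"], (None, None))
        t1 = graph.node_types.get(action["arg1"])
        t2 = graph.node_types.get(action["arg2"])
        return (want1 is not None and t1 not in (None, want1)) or \
               (want2 is not None and t2 not in (None, want2))
    return False

def filter_actions(candidates, graph: PartialGraph):
    """Keep only candidate next actions that satisfy both constraint sets."""
    return [a for a in candidates
            if not violates_structure(a, graph) and not violates_semantics(a, graph)]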
GEM-SciDuet-train-109#paper-1286#slide-21
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach -Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-21
Conclusion
Sequence-to-Action: End-to-End Semantic Graph Generation Representation ability of semantic graphs Sequence prediction ability of RNN models Achieve competitive results on GEO, ATIS and OVERNIGHT
Sequence-to-Action: End-to-End Semantic Graph Generation Representation ability of semantic graphs Sequence prediction ability of RNN models Achieve competitive results on GEO, ATIS and OVERNIGHT
[]
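The Replacing entity-handling mechanism discussed in the detailed analysis above is a preprocessing step: entity mentions in the input sentence are identified and replaced by type-plus-ID placeholders before encoding. The following is a minimal sketch assuming a simple dictionary-based entity lexicon; the lexicon contents and placeholder format are illustrative assumptions, not taken from the paper.

# Minimal sketch of the "Replacing" entity-handling mechanism: identify
# entity mentions and replace them with type+ID placeholders in preprocessing.
# The lexicon and placeholder format are assumptions for illustration.
ENTITY_LEXICON = {"texas": "state", "iowa": "state", "indianapolis": "city"}

def replace_entities(sentence: str):
    """Return the preprocessed token list and a map from placeholder to entity."""
    tokens, mapping, counts = [], {}, {}
    for tok in sentence.lower().split():
        etype = ENTITY_LEXICON.get(tok)
        if etype is None:
            tokens.append(tok)
            continue
        counts[etype] = counts.get(etype, 0) + 1
        placeholder = f"{etype}{counts[etype]}"   # e.g. state1, city1
        mapping[placeholder] = tok
        tokens.append(placeholder)
    return tokens, mapping

# Example: replace_entities("which states border texas")
# -> (["which", "states", "border", "state1"], {"state1": "texas"})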
GEM-SciDuet-train-109#paper-1286#slide-22
1286
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach -Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language sentences to logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008; Kwiatkowski et al., 2013) .", "For example, the sentence \"Which states border Texas?\"", "will be mapped to answer (A, (state (A), next to (A, stateid ( texas )))).", "A semantic parser needs two functions, one for structure prediction and the other for semantic grounding.", "Traditional semantic parsers are usually based on compositional grammar, such as CCG Collins, 2005, 2007) , DCS (Liang et al., 2011) , etc.", "These parsers compose structure using manually designed grammars, use lexicons for semantic grounding, and exploit fea- tures for candidate logical forms ranking.", "Unfortunately, it is challenging to design grammars and learn accurate lexicons, especially in wideopen domains.", "Moreover, it is often hard to design effective features, and its learning process is not end-to-end.", "To resolve the above problems, two promising lines of work have been proposed: Semantic graph-based methods and Seq2Seq methods.", "Semantic graph-based methods (Reddy et al., 2014 (Reddy et al., , 2016 Bast and Haussmann, 2015; Yih et al., 2015) represent the meaning of a sentence as a semantic graph (i.e., a sub-graph of a knowledge base, see example in Figure 1 ) and treat semantic parsing as a semantic graph matching/generation process.", "Compared with logical forms, semantic graphs have a tight-coupling with knowledge bases (Yih et al., 2015) , and share many commonalities with syntactic structures (Reddy et al., 2014) .", "Therefore both the structure and semantic constraints from knowledge bases can be easily exploited during parsing (Yih et al., 2015) .", "The main challenge of semantic graph-based parsing is how to effectively construct the semantic graph of a sentence.", "Currently, semantic graphs are either constructed by matching with patterns (Bast and Haussmann, 2015) , transforming from dependency tree (Reddy et al., 2014 (Reddy et al., , 2016 , or via a staged heuristic search algorithm (Yih et al., 2015) .", "These methods are all based on manuallydesigned, heuristic construction processes, making them hard to handle open/complex situations.", "In recent years, RNN models have achieved success in sequence-to-sequence problems due to its strong ability on both representation learning and prediction, e.g., in machine translation .", "A 
lot of Seq2Seq models have also been employed for semantic parsing (Xiao et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2016) , where a sentence is parsed by translating it to linearized logical form using RNN models.", "There is no need for high-quality lexicons, manually-built grammars, and hand-crafted features.", "These models are trained end-to-end, and can leverage attention mechanism Luong et al., 2015) to learn soft alignments between sentences and logical forms.", "In this paper, we propose a new neural semantic parsing framework -Sequence-to-Action, which can simultaneously leverage the advantages of semantic graph representation and the strong prediction ability of Seq2Seq models.", "Specifically, we model semantic parsing as an end-to-end semantic graph generation process.", "For example in Figure 1 , our model will parse the sentence \"Which states border Texas\" by generating a sequence of actions [add variable:A, add type:state, ...].", "To achieve the above goal, we first design an action set which can encode the generation process of semantic graph (including node actions such as add variable, add entity, add type, edge actions such as add edge, and operation actions such as argmin, argmax, count, sum, etc.).", "And then we design a RNN model which can generate the action sequence for constructing the semantic graph of a sentence.", "Finally we further enhance parsing by incorporating both structure and semantic constraints during decoding.", "Compared with the manually-designed, heuristic generation algorithms used in traditional semantic graph-based methods, our sequence-toaction method generates semantic graphs using a RNN model, which is learned end-to-end from training data.", "Such a learnable, end-to-end generation makes our approach more effective and can fit to different situations.", "Compared with the previous Seq2Seq semantic parsing methods, our sequence-to-action model predicts a sequence of semantic graph generation actions, rather than linearized logical forms.", "We find that the action sequence encoding can better capture structure and semantic information, and is more compact.", "And the parsing can be enhanced by exploiting structure and semantic constraints.", "For example, in GEO dataset, the action add edge:next to must subject to the semantic constraint that its arguments must be of type state and state, and the structure constraint that the edge next to must connect two nodes to form a valid graph.", "We evaluate our approach on three standard datasets: GEO (Zelle and Mooney, 1996) , ATIS (He and Young, 2005) and OVERNIGHT (Wang et al., 2015b) .", "The results show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.", "The main contributions of this paper are summarized as follows: • We propose a new semantic parsing framework -Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process.", "This new framework can synthesize the advantages of semantic graph representation and the prediction ability of Seq2Seq models.", "• We design a sequence-to-action model, including an action set encoding for semantic graph generation and a Seq2Seq RNN model for action sequence prediction.", "We further enhance the parsing by exploiting structure and semantic constraints during decoding.", "Experiments validate the effectiveness of our method.", "2 Sequence-to-Action Model for End-to-End Semantic Graph Generation Given a sentence X = x 1 
, ..., x |X| , our sequenceto-action model generates a sequence of actions Y = y 1 , ..., y |Y | for constructing the correct semantic graph.", "Figure 2 shows an example.", "The conditional probability P (Y |X) used in our Figure 2 : An example of a sentence paired with its semantic graph, together with the action sequence for semantic graph generation.", "model is decomposed as follows: P (Y |X) = |Y | t=1 P (y t |y <t , X) (1) where y <t = y 1 , ..., y t−1 .", "To achieve the above goal, we need: 1) an action set which can encode semantic graph generation process; 2) an encoder which encodes natural language input X into a vector representation, and a decoder which generates y 1 , ..., y |Y | conditioned on the encoding vector.", "In following we describe them in detail.", "Actions for Semantic Graph Generation Generally, a semantic graph consists of nodes (including variables, entities, types) and edges (semantic relations), with some universal operations (e.g., argmax, argmin, count, sum, and not).", "To generate a semantic graph, we define six types of actions as follows: Add Variable Node: This kind of actions denotes adding a variable node to semantic graph.", "In most cases a variable node is a return node (e.g., which, what), but can also be an intermediate variable node.", "We represent this kind of action as add variable:A, where A is the identifier of the variable node.", "Add Entity Node: This kind of actions denotes adding an entity node (e.g., Texas, New York) and is represented as add entity node:texas.", "An entity node corresponds to an entity in knowledge bases.", "Add Type Node: This kind of actions denotes adding a type node (e.g., state, city).", "We represent them as add type node:state.", "Add Edge: This kind of actions denotes adding an edge between two nodes.", "An edge is a binary relation in knowledge bases.", "This kind of actions is represented as add edge:next to.", "Operation Action: This kind of actions denotes adding an operation.", "An operation can be argmax, argmin, count, sum, not, et al.", "Because each operation has a scope, we define two actions for an operation, one is operation start action, represented as start operation:most, and the other is operation end action, represented as end operation:most.", "The subgraph within the start and end operation actions is its scope.", "Argument Action: Some above actions need argument information.", "For example, which nodes the add edge:next to action should connect to.", "In this paper, we design argument actions for add type, add edge and operation actions, and the argument actions should be put directly after its main action.", "For add type actions, we put an argument action to indicate which node this type node should constrain.", "The argument can be a variable node or an entity node.", "An argument action for a type node is represented as arg:A.", "For add edge action, we use two argument actions: arg1 node and arg2 node, and they are represented as arg1 node:A and arg2 node:B.", "We design argument actions for different operations.", "For operation:sum, there are three arguments: arg-for, arg-in and arg-return.", "For operation:count, they are arg-for and arg-return.", "There are two arg-for arguments for operation:most.", "We can see that each action encodes both structure and semantic information, which makes it easy to capture more information for parsing and can be tightly coupled with knowledge base.", "Furthermore, we find that action sequence encoding is more compact than linearized logical 
form (See Section 4.4 for more details).", "Figure 3 : Our attention-based Sequence-to-Action RNN model, with a controller for incorporating constraints.", "Neural Sequence-to-Action Model Based on the above action encoding mechanism, this section describes our encoder-decoder model for mapping sentence to action sequence.", "Specifically, similar to the RNN model in Jia and Liang (2016) , this paper employs the attentionbased sequence-to-sequence RNN model.", "Figure 3 presents the overall structure.", "Encoder: The encoder converts the input sequence x 1 , ..., x m to a sequence of contextsensitive vectors b 1 , ..., b m using a bidirectional RNN .", "Firstly each word x i is mapped to its embedding vector, then these vectors are fed into a forward RNN and a backward RNN.", "The sequence of hidden states h 1 , ..., h m are generated by recurrently applying the recurrence: h i = LST M (φ (x) (x i ), h i−1 ).", "(2) The recurrence takes the form of LSTM (Hochreiter and Schmidhuber, 1997).", "Finally, for each input position i, we define its context-sensitive embedding as b i = [h F i , h B i ] .", "Decoder: This paper uses the classical attentionbased decoder , which generates action sequence y 1 , ..., y n , one action at a time.", "At each time step j, it writes y j based on the current hidden state s j , then updates the hidden state to s j+1 based on s j and y j .", "The decoder is formally defined by the following equations: s 1 = tanh(W (s) [h F m , h B 1 ]) (3) e ji = s T j W (a) b i (4) a ji = exp(e ji ) m i =1 exp(e ji ) (5) c j = m i=1 a ji b i (6) P (y j = w|x, y 1:j−1 ) ∝ exp(U w [s j , c j ]) (7) s j+1 = LST M ([φ (y) (y j ), c j ], s j ) (8) where the normalized attention scores a ji defines the probability distribution over input words, indicating the attention probability on input word i at time j; e ji is un-normalized attention score.", "To incorporate constraints during decoding, an extra controller component is added and its details will be described in Section 3.3.", "Action Embedding.", "The above decoder needs the embedding of each action.", "As described above, each action has two parts, one for structure (e.g., add edge), and the other for semantic (e.g., next to).", "As a result, actions may share the same structure or semantic part, e.g., add edge:next to and add edge:loc have the same structure part, and add node:A and arg node:A have the same semantic part.", "To make parameters more compact, we first embed the structure part and the semantic part independently, then concatenate them to get the final embedding.", "For in- 3 Constrained Semantic Parsing using Sequence-to-Action Model stance, φ (y) (add edge:next to ) = [ φ (y) strut ( add edge ), φ In this section, we describe how to build a neural semantic parser using sequence-to-action model.", "We first describe the training and the inference of our model, and then introduce how to incorporate structure and semantic constraints during decoding.", "Training Parameter Estimation.", "The parameters of our model include RNN parameters W (s) , W (a) , U w , word embeddings φ (x) , and action embeddings φ (y) .", "We estimate these parameters from training data.", "Given a training example with a sentence X and its action sequence Y , we maximize the likelihood of the generated sequence of actions given X.", "The objective function is: n i=1 log P (Y i |X i ) (9) Standard stochastic gradient descent algorithm is employed to update parameters.", "Logical Form to Action Sequence.", "Currently, most datasets of 
semantic parsing are labeled with logical forms.", "In order to train our model, we convert logical forms to action sequences using semantic graph as an intermediate representation (See Figure 4 for an overview).", "Concretely, we transform logical forms into semantic graphs using a depth-first-search algorithm from root, and then generate the action sequence using the same order.", "Specifically, entities, variables and types are nodes; relations are edges.", "Conversely we can convert action sequence to logical form similarly.", "Based on the above algorithm, action sequences can be transformed into logical forms in a deterministic way, and the same for logical forms to action sequences.", "Mechanisms for Handling Entities.", "Entities play an important role in semantic parsing (Yih et al., 2015) .", "In Dong and Lapata (2016) , entities are replaced with their types and unique IDs.", "In Jia and Liang (2016) , entities are generated via attention-based copying mechanism helped with a lexicon.", "This paper implements both mechanisms and compares them in experiments.", "Inference Given a new sentence X, we predict action sequence by: Y * = argmax Y P (Y |X) (10) where Y represents action sequence, and P (Y |X) is computed using Formula (1).", "Beam search is used for best action sequence decoding.", "Semantic graph and logical form can be derived from Y * as described in above.", "Incorporating Constraints in Decoding For decoding, we generate action sequentially.", "It is obviously that the next action has a strong correlation with the partial semantic graph generated to current, and illegal actions can be filtered using structure and semantic constraints.", "Specifically, we incorporate constraints in decoding using a controller.", "This procedure has two steps: 1) the controller constructs partial semantic graph using the actions generated to current; 2) the controller checks whether a new generated action can meet Figure 5 : A demonstration of illegal action filtering using constraints.", "The graph in color is the constructed semantic graph to current.", "all structure/semantic constraints using the partial semantic graph.", "Structure Constraints.", "The structure constraints ensure action sequence will form a connected acyclic graph.", "For example, there must be two argument nodes for an edge, and the two argument nodes should be different (The third candidate next action in Figure 5 violates this constraint).", "This kind of constraints are domain-independent.", "The controller encodes structure constraints as a set of rules.", "Semantic Constraints.", "The semantic constraints ensure the constructed graph must follow the schema of knowledge bases.", "Specifically, we model two types of semantic constraints.", "One is selectional preference constraints where the argument types of a relation should follow knowledge base schemas.", "For example, in GEO dataset, relation next to's arg1 and arg2 should both be a state.", "The second is type conflict constraints, i.e., an entity/variable node's type must be consistent, i.e., a node cannot be both of type city and state.", "Semantic constraints are domain-specific and are automatically extracted from knowledge base schemas.", "The controller encodes semantic constraints as a set of rules.", "Experiments In this section, we assess the performance of our method and compare it with previous methods.", "Datasets We conduct experiments on three standard datasets: GEO, ATIS and OVERNIGHT.", "GEO contains natural language questions about US 
geography paired with corresponding Prolog database queries.", "Following Zettlemoyer and Collins (2005) , we use the standard 600/280 instance splits for training/test.", "ATIS contains natural language questions of a flight database, with each question is annotated with a lambda calculus query.", "Following Zettlemoyer and Collins (2007) , we use the standard 4473/448 instance splits for training/test.", "OVERNIGHT contains natural language paraphrases paired with logical forms across eight domains.", "We evaluate on the standard train/test splits as Wang et al.", "(2015b) .", "Experimental Settings Following the experimental setup of Jia and Liang (2016) : we use 200 hidden units and 100dimensional word vectors for sentence encoding.", "The dimensions of action embedding are tuned on validation datasets for each corpus.", "We initialize all parameters by uniformly sampling within the interval [-0.1, 0.1].", "We train our model for a total of 30 epochs with an initial learning rate of 0.1, and halve the learning rate every 5 epochs after epoch 15.", "We replace word vectors for words occurring only once with an universal word vector.", "The beam size is set as 5.", "Our model is implemented in Theano (Bergstra et al., 2010) , and the codes and settings are released on Github: https://github.com/dongpobeyond/Seq2Act.", "We evaluate different systems using the standard accuracy metric, and the accuracies on different datasets are obtained as same as Jia and Liang (2016) .", "Overall Results We compare our method with state-of-the-art systems on all three datasets.", "Because all systems using the same training/test splits, we directly use the reported best performances from their original papers for fair comparison.", "For our method, we train our model with three settings: the first one is the basic sequence-toaction model without constraints -Seq2Act; the second one adds structure constraints in decoding -Seq2Act (+C1); the third one is the full model which adds both structure and semantic GEO ATIS Previous Work Zettlemoyer and Collins (2005) Kwiatkowksi et al.", "(2010) 88.9 - Kwiatkowski et al.", "(2011) 88.6 82.8 Liang et al.", "(2011)* (+lexicon) 91.1 -Poon (2013) -83.5 Zhao et al.", "(2015) 88.9 84.2 Rabinovich et al.", "(2017) 87.1 85.9 Seq2Seq Models Jia and Liang (2016) 85.0 76.3 Jia and Liang (2016) constraints -Seq2Act (+C1+C2).", "Semantic constraints (C2) are stricter than structure constraints (C1).", "Therefore we set that C1 should be first met for C2 to be met.", "So in our experiments we add constraints incrementally.", "The overall results are shown in Table 1 -2.", "From the overall results, we can see that: 1) By synthetizing the advantages of semantic graph representation and the prediction ability of Seq2Seq model, our method achieves stateof-the-art performance on OVERNIGHT dataset, and gets competitive performance on GEO and ATIS dataset.", "In fact, on GEO our full model (Seq2Act+C1+C2) also gets the best test accuracy of 88.9 if under the same settings, which only falls behind Liang et al.", "(2011) * which uses extra handcrafted lexicons and Jia and Liang (2016) * which uses extra augmented training data.", "On ATIS our full model gets the second best test accuracy of 85.5, which only falls behind Rabinovich et al.", "(2017) which uses a supervised attention strategy.", "On OVERNIGHT, our full model gets state-of-theart accuracy of 79.0, which even outperforms Jia and Liang (2016) * with extra augmented training data.", "2) Compared with the linearized logical 
form representation used in previous Seq2Seq baselines, our action sequence encoding is more effective for semantic parsing.", "On all three datasets, (2016) OVERNGIHT, the Seq2Act model gets a test accuracy of 78.0, better than the best Seq2Seq baseline gets 77.5.", "We argue that this is because our action sequence encoding is more compact and can capture more information.", "3) Structure constraints can enhance semantic parsing by ensuring the validity of graph using the generated action sequence.", "In all three datasets, Seq2Act (+C1) outperforms the basic Seq2Act model.", "This is because a part of illegal actions will be filtered during decoding.", "4) By leveraging knowledge base schemas during decoding, semantic constraints are effective for semantic parsing.", "Compared to Seq2Act and Seq2Act (+C1), the Seq2Act (+C1+C2) gets the best performance on all three datasets.", "This is because semantic constraints can further filter semantic illegal actions using selectional preference and consistency between types.", "Detailed Analysis Effect of Entity Handling Mechanisms.", "This paper implements two entity handling mechanisms -Replacing (Dong and Lapata, 2016) which identifies entities and then replaces them with their types and IDs, and attention-based Copying (Jia and Liang, 2016) .", "To compare the above two mechanisms, we train and test with our full model and the results are shown in Table 3 .", "We can see that, Replacing mechanism outperforms Copying in all three datasets.", "This is because Replacing is done in preprocessing, while attention-based Copying is done during parsing and needs additional copy mechanism.", "Linearized Logical Form vs. Action Sequence.", "Table 4 shows the average length of linearized logical forms used in previous Seq2Seq models and the action sequences of our model on all three datasets.", "As we can see, action sequence encoding is more compact than linearized logical form encoding: action sequence is shorter on all three datasets, 35.5%, 9.2% and 28.5% reduction in length respectively.", "The main advantage of a shorter/compact encoding is that it will reduce the influence of long distance dependency problem.", "Error Analysis We perform error analysis on results and find there are mainly two types of errors.", "Unseen/Informal Sentence Structure.", "Some test sentences have unseen syntactic structures.", "For example, the first case in Table 5 has an unseen Gold Parse: answer(A, count (B, (const (C, stateid(iowa) ), next to(C, B), state (B)), A)) Predicted Parse: answer (A, count(B, state(B), A)) Under-Mapping Sentence: Please show me first class flights from indianapolis to memphis one way leaving before 10am Gold Parse: (lambda x (and (flight x) (oneway x) (class type x first:cl) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Predicted Parse: (lambda x (and (flight x) (oneway x) (< (departure time x) 1000:ti) (from x indianapolis:ci) (to x memphis:ci))) Table 5 : Some examples for error analysis.", "Each example includes the sentence for parsing, with gold parse and predicted parse from our model.", "and informal structure, where entity word \"Iowa\" and relation word \"borders\" appear ahead of the question words \"how many\".", "For this problem, we can employ sentence rewriting or paraphrasing techniques (Chen et al., 2016; Dong et al., 2017) to transform unseen sentence structures into normal ones.", "Under-Mapping.", "As Dong and Lapata (2016) discussed, the attention model does not take the alignment history 
into consideration, makes some words are ignored during parsing.", "For example in the second case in Table 5 , \"first class\" is ignored during the decoding process.", "This problem can be further solved using explicit word coverage models used in neural machine translation (Tu et al., 2016; Cohn et al., 2016) Related Work Semantic parsing has received significant attention for a long time (Kate and Mooney, 2006; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012; Berant and Liang, 2014; Quirk et al., 2015; Artzi et al., 2015; .", "Traditional methods are mostly based on the principle of compositional semantics, which first trigger predicates using lexicons and then compose them using grammars.", "The prominent grammars include SCFG (Wong and Mooney, 2007; Li et al., 2015) , CCG (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011; Cai and Yates, 2013) , DCS (Liang et al., 2011; Berant et al., 2013) , etc.", "As discussed above, the main drawback of grammar-based methods is that they rely on high-quality lexicons, manually-built grammars, and hand-crafted features.", "In recent years, one promising direction of semantic parsing is to use semantic graph as representation.", "Thus semantic parsing is modeled as a semantic graph generation process.", "Ge and Mooney (2009) build semantic graph by trans-forming syntactic tree.", "Bast and Haussmann (2015) identify the structure of a semantic query using three pre-defined patterns.", "Reddy et al.", "(2014 Reddy et al.", "( , 2016 use Freebase-based semantic graph representation, and convert sentences to semantic graphs using CCG or dependency tree.", "Yih et al.", "(2015) generate semantic graphs using a staged heuristic search algorithm.", "These methods are all based on manually-designed, heuristic generation process, which may suffer from syntactic parse errors (Ge and Mooney, 2009; Reddy et al., 2014 Reddy et al., , 2016 , structure mismatch (Chen et al., 2016) , and are hard to deal with complex sentences (Yih et al., 2015) .", "One other direction is to employ neural Seq2Seq models, which models semantic parsing as an end-to-end, sentence to logical form machine translation problem.", "Dong and Lapata (2016) , Jia and Liang (2016) and Xiao et al.", "(2016) transform word sequence to linearized logical forms.", "One main drawback of these methods is that it is hard to capture and exploit structure and semantic constraints using linearized logical forms.", "Dong and Lapata (2016) propose a Seq2Tree model to capture the hierarchical structure of logical forms.", "It has been shown that structure and semantic constraints are effective for enhancing semantic parsing.", "Krishnamurthy et al.", "(2017) use type constraints to filter illegal tokens.", "Liang et al.", "(2017) adopt a Lisp interpreter with pre-defined functions to produce valid tokens.", "Iyyer et al.", "(2017) adopt type constraints to generate valid actions.", "Inspired by these approaches, we also incorporate both structure and semantic constraints in our neural sequence-to-action model.", "Transition-based approaches are important in both dependency parsing (Nivre, 2008; Henderson et al., 2013) and AMR parsing (Wang et al., 2015a) .", "In semantic parsing, our method has a tight-coupling with knowledge bases, and con-straints can be exploited for more accurate decoding.", "We believe this can also be used to enhance previous transition based methods and may also be used in other parsing tasks, e.g., AMR parsing.", "Conclusions This paper proposes Sequence-to-Action, a 
method which models semantic parsing as an end-to-end semantic graph generation process.", "By leveraging the advantages of semantic graph representation and exploiting the representation learning and prediction ability of Seq2Seq models, our method achieved significant performance improvements on three datasets.", "Furthermore, structure and semantic constraints can be easily incorporated in decoding to enhance semantic parsing.", "For future work, to solve the problem of the lack of training data, we want to design weakly supervised learning algorithm using denotations (QA pairs) as supervision.", "Furthermore, we want to collect labeled data by designing an interactive UI for annotation assist like (Yih et al., 2016) , which uses semantic graphs to annotate the meaning of sentences, since semantic graph is more natural and can be easily annotated without the need of expert knowledge." ] }
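A minimal sketch of the constraint-based decoding controller described above (filtering illegal next actions against the partially built semantic graph). The action and graph data structures, the schema format, and every function name here are assumptions made for illustration; this is not the authors' released Theano implementation.

```python
# Illustrative controller for constraint-based decoding in a Seq2Act-style parser.
# Structure constraints: an edge needs two existing, distinct argument nodes.
# Semantic constraints: argument types must follow the knowledge-base schema.
from dataclasses import dataclass, field

@dataclass
class PartialGraph:
    node_types: dict = field(default_factory=dict)  # node id -> type, e.g. {"B": "state"}
    edges: set = field(default_factory=set)         # (relation, arg1, arg2) triples

@dataclass
class AddEdgeAction:
    relation: str  # e.g. "next_to"
    arg1: str      # node id of the first argument
    arg2: str      # node id of the second argument

def violates_structure(action, graph):
    """Reject edges whose argument nodes are missing or identical."""
    if action.arg1 == action.arg2:
        return True
    return action.arg1 not in graph.node_types or action.arg2 not in graph.node_types

def violates_semantics(action, graph, schema):
    """Reject edges whose argument types do not match the KB schema (selectional preference)."""
    expected = schema.get(action.relation)
    if expected is None:
        return True  # relation unknown to this knowledge base
    expected_arg1, expected_arg2 = expected
    return (graph.node_types[action.arg1] != expected_arg1
            or graph.node_types[action.arg2] != expected_arg2)

def filter_candidates(candidates, graph, schema):
    """Keep only candidate next actions that pass both constraint checks."""
    return [a for a in candidates
            if not violates_structure(a, graph)
            and not violates_semantics(a, graph, schema)]

# Tiny example: only the first candidate survives (the second repeats an argument node).
graph = PartialGraph(node_types={"B": "state", "C": "state"})
schema = {"next_to": ("state", "state")}
candidates = [AddEdgeAction("next_to", "C", "B"), AddEdgeAction("next_to", "C", "C")]
print(filter_candidates(candidates, graph, schema))
```

During beam search, only actions returned by a filter like this would be allowed to extend a hypothesis, which is the role the Seq2Act (+C1) and (+C1+C2) settings assign to the controller.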
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "6" ], "paper_header_content": [ "Introduction", "Actions for Semantic Graph Generation", "Neural Sequence-to-Action Model", "Training", "Inference", "Incorporating Constraints in Decoding", "Experiments", "Datasets", "Experimental Settings", "Overall Results", "Detailed Analysis", "Error Analysis", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-109#paper-1286#slide-22
Future work
Weakly supervised learning algorithm for Seq2Act So our method can be applied to (q, a) pair datasets such as Apply Seq2Act model to other parsing tasks (e.g., AMR parsing)
Weakly supervised learning algorithm for Seq2Act So our method can be applied to (q, a) pair datasets such as Apply Seq2Act model to other parsing tasks (e.g., AMR parsing)
[]
GEM-SciDuet-train-110#paper-1290#slide-0
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
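A framework-agnostic sketch of the Transformer detail stressed above: each sub-layer follows the strict order normalize → transform → dropout → residual-add, and the final encoder output is normalized once more to prevent magnitudes from blowing up after repeated residual additions. The callable-passing interface and function names are assumptions for illustration; this is not the Tensor2Tensor code.

```python
# Pre-norm sub-layer wrapper: normalize -> transform -> dropout -> residual-add.
# `layer_norm`, `dropout`, and the per-layer transforms are stand-ins for whatever
# framework is actually used (e.g. attention or feed-forward modules).

def sublayer(x, transform, layer_norm, dropout):
    y = layer_norm(x)   # 1. normalize
    y = transform(y)    # 2. transform (self-attention, cross-attention, or feed-forward)
    y = dropout(y)      # 3. dropout
    return x + y        # 4. residual-add

def encoder_stack(x, layers, layer_norm, dropout):
    """layers: list of (self_attention_fn, feed_forward_fn) pairs."""
    for self_attn, feed_forward in layers:
        x = sublayer(x, self_attn, layer_norm, dropout)
        x = sublayer(x, feed_forward, layer_norm, dropout)
    # Extra normalization of the final encoder output, as noted above.
    return layer_norm(x)
```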
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-0
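A rough sketch of the adaptive gradient clipping used to stabilize RNMT+ training, as described above: keep a moving average and moving standard deviation of the log gradient norm and discard any training step whose norm exceeds four standard deviations above the moving average. The EMA decay, the warmup threshold, and the choice to update the statistics only on accepted steps are assumptions; the paper does not give implementation details.

```python
# Anomaly-based step rejection for gradient-explosion protection.
import math

class GradNormTracker:
    def __init__(self, decay=0.99, num_std=4.0, warmup_steps=100):
        self.decay = decay                 # assumed EMA decay, not specified in the paper
        self.num_std = num_std             # "four standard deviations of the moving average"
        self.warmup_steps = warmup_steps   # assumed: collect statistics before rejecting
        self.steps = 0
        self.mean = 0.0                    # moving average of log(grad norm)
        self.var = 0.0                     # moving variance of log(grad norm)

    def should_skip_step(self, grad_norm):
        log_norm = math.log(grad_norm + 1e-12)
        self.steps += 1
        if self.steps <= self.warmup_steps:
            skip = False
        else:
            skip = log_norm > self.mean + self.num_std * math.sqrt(self.var)
        if not skip:  # design choice here: anomalous steps do not update the statistics
            diff = log_norm - self.mean
            self.mean += (1.0 - self.decay) * diff
            self.var = self.decay * (self.var + (1.0 - self.decay) * diff * diff)
        return skip

# Usage inside a training loop (schematic):
#   if tracker.should_skip_step(global_grad_norm):
#       continue  # drop this batch's update entirely
#   optimizer.apply_gradients(grads_and_vars)
```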
This is NOT an architecture search paper
The Best of Both Worlds
The Best of Both Worlds
[]
GEM-SciDuet-train-110#paper-1290#slide-1
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
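The RNMT+ training recipe described above specifies a learning-rate schedule with a linear warmup, a constant plateau at n times the base rate, and an exponential decay that is held at 5e-5 once the decay window ends. A minimal Python sketch of one consistent reading of that schedule follows; the exact grouping of the fraction and exponent in equation (1), the helper name, and the default base-rate argument are assumptions reconstructed from the surrounding prose, not a verbatim transcription of the paper's formula.

```python
def rnmt_plus_lr(t, n, p, s, e, base=1e-4):
    # Sketch of the schedule described above (one consistent reading of Eq. 1):
    #   t: current step, n: number of concurrent replicas,
    #   p: warmup steps, s: decay start step, e: decay end step.
    warmup = 1.0 + t * (n - 1) / (n * p)              # linear ramp from 1 toward n
    plateau = float(n)                                # constant plateau at n
    decay = n * (2.0 * n) ** ((s - n * t) / (e - s))  # decays exponentially once n*t passes s
    lr = base * min(warmup, plateau, decay)
    return max(lr, 0.5 * base)                        # held at 5e-5 after the decay ends
```

With n = 1 replica this reduces to a constant 1e-4 until step s, followed by an exponential decay that reaches 5e-5 at step e and stays there.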
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-1
A Brief History of NMT Models
Bahdanau et al. Gehring et al. The Best of Both Worlds
Bahdanau et al. Gehring et al. The Best of Both Worlds
[]
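The same training description also sketches an adaptive gradient-clipping rule: track a moving average and a moving standard deviation of the log gradient norm, and discard any training step whose norm lies more than four standard deviations above that average. Below is a minimal, hedged sketch of the rule; the EMA decay constant, the warm-up count, and the class and method names are illustrative assumptions rather than values taken from the paper.

```python
import math

class GradNormAnomalyFilter:
    """Sketch of the adaptive gradient clipping described above: keep moving
    statistics of log(grad_norm) and skip any step whose log-norm exceeds the
    moving mean by more than four moving standard deviations."""

    def __init__(self, decay=0.99, warmup_steps=100):
        self.decay = decay            # EMA decay (assumed value, not from the paper)
        self.warmup_steps = warmup_steps
        self.count = 0
        self.mean = 0.0               # moving average of log grad-norm
        self.var = 0.0                # moving variance of log grad-norm

    def should_skip(self, grad_norm):
        x = math.log(grad_norm)
        self.count += 1
        if self.count > self.warmup_steps and x > self.mean + 4.0 * math.sqrt(self.var):
            return True               # anomaly: abort this step, leave the stats unchanged
        delta = x - self.mean         # otherwise fold the value into the moving statistics
        self.mean += (1.0 - self.decay) * delta
        self.var = self.decay * (self.var + (1.0 - self.decay) * delta * delta)
        return False
```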
GEM-SciDuet-train-110#paper-1290#slide-2
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
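The evaluation protocol described above (evaluate the dev set every 30 minutes, pick the best window of 21 consecutive evaluations by average dev BLEU, and report the mean and standard deviation of the corresponding test scores) is simple to express in code. The sketch below assumes parallel lists of dev and test scores for the same sequence of checkpoints; the function name and the parallel-list interface are illustrative assumptions.

```python
from statistics import mean, stdev

def best_window_scores(dev_bleu, test_bleu, window=21):
    # dev_bleu / test_bleu: per-evaluation scores for the same checkpoints.
    # Slide a window of `window` consecutive evaluations, choose the window with
    # the highest average dev BLEU, and report mean/std of the test BLEU inside it.
    starts = range(len(dev_bleu) - window + 1)
    best = max(starts, key=lambda i: mean(dev_bleu[i:i + window]))
    chosen = test_bleu[best:best + window]
    return mean(chosen), stdev(chosen)
```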
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-2
The Best of Both Worlds I
Each new approach is: accompanied by a set of modeling and training techniques. Tease apart architectures and their accompanying techniques. Identify key modeling and training techniques. Apply them on RNN based Seq2Seq RNMT+ RNMT+ outperforms all previous three approaches.
Each new approach is: accompanied by a set of modeling and training techniques. Tease apart architectures and their accompanying techniques. Identify key modeling and training techniques. Apply them on RNN based Seq2Seq RNMT+ RNMT+ outperforms all previous three approaches.
[]
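Uniform label smoothing with uncertainty 0.1 is reported above to help both RNMT+ and the Transformer. A minimal sketch follows, using the Szegedy et al. formulation in which the uniform mass is spread over the whole vocabulary; whether the gold token is excluded from that uniform share varies across implementations and is an assumption here, as is the function name.

```python
import numpy as np

def smoothed_targets(labels, vocab_size, uncertainty=0.1):
    # Mix the one-hot targets with a uniform distribution over the vocabulary:
    # each gold token keeps (1 - uncertainty) of the mass plus its uniform share.
    targets = np.full((len(labels), vocab_size), uncertainty / vocab_size)
    targets[np.arange(len(labels)), labels] += 1.0 - uncertainty
    return targets

# Example: smoothed_targets([2, 0], vocab_size=4) gives rows summing to 1.0,
# with 0.925 on the gold token and 0.025 on every other token.
```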
GEM-SciDuet-train-110#paper-1290#slide-3
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
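The regularization recipe in the paper text above mentions uniform label smoothing with an uncertainty of 0.1. Below is a minimal NumPy sketch of a label-smoothed cross-entropy of that kind; it illustrates the general technique rather than the authors' implementation, and the function name, argument names, and the choice to spread the smoothing mass over the full vocabulary (rather than excluding the gold token) are assumptions made here.

```python
import numpy as np

# Minimal sketch of uniform label smoothing (uncertainty u = 0.1), assuming the
# smoothing mass is spread over the whole vocabulary; names are placeholders.
def label_smoothed_xent(logits, target_ids, uncertainty=0.1):
    """logits: [batch, vocab] scores; target_ids: [batch] gold token ids."""
    batch, vocab = logits.shape
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Smoothed target distribution: (1 - u) on the gold token, u spread uniformly.
    smooth = np.full((batch, vocab), uncertainty / vocab)
    smooth[np.arange(batch), target_ids] += 1.0 - uncertainty
    return float(-(smooth * log_probs).sum(axis=1).mean())
```

With the uncertainty set to 0.0 this reduces to ordinary cross-entropy, which makes the label-smoothing ablation discussed in the paper text easy to reproduce in such a sketch.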
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-3
The Best of Both Worlds II
Also, each new approach has: a fundamental architecture (signature wiring of neural network). Analyse properties of each architecture. Devise new hybrid architectures Hybrids Hybrids obtain further improvements over all the others. The Best of Both Worlds
Also, each new approach has: a fundamental architecture (signature wiring of neural network). Analyse properties of each architecture. Devise new hybrid architectures Hybrids Hybrids obtain further improvements over all the others. The Best of Both Worlds
[]
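Equation (1) of the embedded paper text is garbled by extraction, but the surrounding prose fully specifies the shape of the RNMT+ learning-rate schedule: a linear warmup, a constant plateau, an exponential decay between a start and an end step, and a constant 5e-5 afterwards. A hedged sketch of that piecewise behaviour is shown below; the function and parameter names are placeholders, and the exact warmup length and peak value should be taken from the original formula rather than from this illustration.

```python
# Piecewise learning-rate schedule sketched from the prose description above:
# linear warmup, constant plateau, exponential decay, then a 5e-5 floor.
# Parameter names and the warmup parameterisation are assumptions.
def rnmt_plus_lr(step, n_replicas, warmup_steps, decay_start, decay_end,
                 base_lr=1e-4, floor_lr=5e-5):
    peak_lr = base_lr * n_replicas            # plateau value reached after warmup
    if step < warmup_steps:
        frac = step / max(1, warmup_steps)    # linear ramp from base_lr to peak_lr
        return base_lr + (peak_lr - base_lr) * frac
    if step < decay_start:
        return peak_lr                        # constant until the decay starts
    if step < decay_end:
        frac = (step - decay_start) / (decay_end - decay_start)
        return peak_lr * (floor_lr / peak_lr) ** frac   # exponential decay
    return floor_lr                           # constant after the decay ends
```

Plotting this function over the planned number of training steps is a quick way to sanity-check a configuration before launching a run.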
GEM-SciDuet-train-110#paper-1290#slide-4
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-4
Building Blocks
RNN Based NMT - RNMT Convolutional NMT - ConvS2S Conditional Transformation Based NMT - Transformer
RNN Based NMT - RNMT Convolutional NMT - ConvS2S Conditional Transformation Based NMT - Transformer
[]
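The training-stability trick described in the paper text above (tracking a moving average and moving standard deviation of the log gradient norm, and discarding any step whose log norm exceeds the average by four standard deviations) can be sketched as a small stateful helper. This is a hedged reconstruction: the exponential-decay constant and the decision to exclude anomalous steps from the running statistics are assumptions, not details given in the text.

```python
import math

# Moving statistics of log(grad_norm); a step is flagged as anomalous (and
# should be discarded) if its log norm exceeds the moving mean by 4 moving
# standard deviations. Decay constant and update policy are assumptions.
class GradNormAnomalyDetector:
    def __init__(self, decay=0.99):
        self.decay = decay
        self.mean = None   # moving average of log grad norm
        self.var = 0.0     # moving variance of log grad norm

    def should_skip(self, grad_norm):
        log_norm = math.log(max(grad_norm, 1e-12))
        if self.mean is None:          # first observation initialises the mean
            self.mean = log_norm
            return False
        anomalous = (self.var > 0.0 and
                     log_norm > self.mean + 4.0 * math.sqrt(self.var))
        if not anomalous:
            # Fold only non-anomalous steps into the running statistics.
            delta = log_norm - self.mean
            self.mean += (1.0 - self.decay) * delta
            self.var = self.decay * (self.var + (1.0 - self.decay) * delta * delta)
        return anomalous
```

A training loop would call should_skip(global_grad_norm) after computing gradients and simply skip the parameter update when it returns True.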
GEM-SciDuet-train-110#paper-1290#slide-5
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-5
GNMT Wu et al
The Best of Both Worlds P 7 *Figure from Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation Wu et al. 2016
The Best of Both Worlds P 7 *Figure from Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation Wu et al. 2016
[]
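The paper text excerpted in the record above describes the RNMT+ learning-rate schedule only as extraction-garbled inline math ("lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s"). The sketch below is one reading of that schedule, reconstructed from the surrounding prose (linear warmup, constant plateau, exponential decay between the decay start step s and end step e, then a 5e-5 floor). The function name and the clamping style are illustrative choices of mine, not part of this dataset or of any released implementation.

```python
def rnmt_plus_learning_rate(t, n, p, s, e):
    """Sketch of the schedule described in the excerpt:
    lr = 1e-4 * min(1 + t*(n-1)/(n*p), n, n*(2n)**((s - n*t)/(e - s))),
    held at 5e-5 once the exponential decay has finished.

    t: current training step        n: number of concurrent model replicas
    p: number of warmup steps       s, e: decay start / end steps
    """
    warmup = 1.0 + t * (n - 1) / (n * p)                # linear ramp from 1 up to n
    decay = n * (2.0 * n) ** ((s - n * t) / (e - s))    # exponential decay once n*t > s
    # The floor corresponds to "keep it at 5e-5 after the decay ends".
    return max(1e-4 * min(warmup, float(n), decay), 5e-5)

# Illustrative call with arbitrary (not paper-reported) settings:
# rnmt_plus_learning_rate(t=1000, n=32, p=500, s=600_000, e=1_200_000)
```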
GEM-SciDuet-train-110#paper-1290#slide-6
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-6
ConvS2S Gehring et al
More interpretable than RNN Parallel decoder outputs during training Need to stack more to increase the receptive field P 8 *Figure from Convolutional Sequence to Sequence Learning Gehring et al. 2017
More interpretable than RNN Parallel decoder outputs during training Need to stack more to increase the receptive field P 8 *Figure from Convolutional Sequence to Sequence Learning Gehring et al. 2017
[]
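The same excerpted text describes RNMT+'s adaptive gradient clipping only in prose: track a moving average and moving standard deviation of the log of the gradient norm, and discard a training step whose norm exceeds four standard deviations of the moving average. Below is a minimal sketch of one possible reading of that rule; the class name, the exponential-moving-average decay of 0.99, and the choice to update the statistics only on accepted steps are assumptions of mine and are not stated in the excerpt.

```python
import math

class GradNormStepFilter:
    """Tracks a moving mean/std of log(grad_norm) and flags anomalous steps."""

    def __init__(self, decay=0.99, num_sigmas=4.0):
        self.decay = decay            # EMA decay (assumed; not given in the excerpt)
        self.num_sigmas = num_sigmas  # "four standard deviations of the moving average"
        self.mean = None
        self.var = 0.0

    def should_discard(self, grad_norm):
        log_norm = math.log(max(grad_norm, 1e-12))
        if self.mean is None:         # first observation initializes the statistics
            self.mean = log_norm
            return False
        anomalous = log_norm > self.mean + self.num_sigmas * math.sqrt(self.var)
        if not anomalous:             # assumed: anomalous steps do not update the EMA
            delta = log_norm - self.mean
            self.mean += (1.0 - self.decay) * delta
            self.var = self.decay * (self.var + (1.0 - self.decay) * delta * delta)
        return anomalous
```

In a training loop this would gate the optimizer update: if `should_discard(global_norm)` returns True, the step is skipped entirely, matching the excerpt's "we discard a training step completely" behavior.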
GEM-SciDuet-train-110#paper-1290#slide-7
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-7
Transformer Vaswani et al
Gradients everywhere - faster optimization Parallel encoding both training/inference Cons: Combines many advances at once Fragile P 9 *Figure from Attention is All You Need Vaswani et al. 2017
Gradients everywhere - faster optimization Parallel encoding both training/inference Cons: Combines many advances at once Fragile P 9 *Figure from Attention is All You Need Vaswani et al. 2017
[]
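The same paper content also mentions uniform label smoothing with uncertainty 0.1. One common way to realize that description is to mix the gold-token negative log-likelihood with a cross-entropy against the uniform distribution over the vocabulary; the sketch below follows that formulation, which may differ in detail (e.g. whether the gold token is excluded from the uniform mass) from the paper's exact implementation.

```python
import torch.nn.functional as F

def label_smoothed_loss(logits, targets, epsilon=0.1):
    """Uniform label smoothing (uncertainty = 0.1) as described above:
    (1 - epsilon) weight on the gold-token NLL plus epsilon weight on a
    uniform target distribution. logits: [batch, vocab]; targets: [batch]."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # cross-entropy against the uniform distribution
    return ((1.0 - epsilon) * nll + epsilon * uniform).mean()
```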
GEM-SciDuet-train-110#paper-1290#slide-8
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-8
The Best of Both Worlds I RNMT
Bi-directional encoder 6 x LSTM Uni-directional decoder 8 x LSTM Layer normalized LSTM cell The Best of Both Worlds P 10
Bi-directional encoder 6 x LSTM Uni-directional decoder 8 x LSTM Layer normalized LSTM cell The Best of Both Worlds P 10
[]
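The paper content in this record also describes adaptive gradient clipping: track a moving average and moving standard deviation of the log of the gradient norm, and discard any training step whose log-norm exceeds the moving average by four standard deviations. A small sketch of that idea; the exponential-moving-average decay, the warm-start handling, and the choice to update statistics only on accepted steps are all illustrative assumptions rather than details given in the text.

```python
import math

class GradNormAnomalyFilter:
    """Step-skipping heuristic sketched from the description above:
    flag a step when log(grad norm) exceeds its moving average by
    num_std moving standard deviations."""

    def __init__(self, decay=0.99, num_std=4.0):
        self.decay = decay
        self.num_std = num_std
        self.mean = None   # EMA of log grad norm
        self.var = 0.0     # EMA estimate of its variance

    def should_skip(self, grad_norm):
        x = math.log(grad_norm + 1e-12)
        if self.mean is None:              # first observation: just initialize
            self.mean = x
            return False
        std = math.sqrt(self.var)
        skip = std > 0.0 and (x - self.mean) > self.num_std * std
        if not skip:                       # update statistics on accepted steps only
            delta = x - self.mean
            self.mean += (1.0 - self.decay) * delta
            self.var = self.decay * (self.var + (1.0 - self.decay) * delta * delta)
        return skip
```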
GEM-SciDuet-train-110#paper-1290#slide-9
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-9
Model Comparison I BLEU Scores
The Best of Both Worlds P 11
The Best of Both Worlds P 11
[]
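The adaptive gradient clipping quoted in the paper content above (track a moving average and moving standard deviation of the log gradient norm; discard a training step whose norm exceeds four standard deviations of the moving average) translates naturally into a small stateful check. A minimal sketch follows; the exponential-moving-average decay of 0.99, the warm-up period before the check is enforced, and the choice to update the statistics only with non-anomalous norms are assumptions not stated in the excerpt.

```python
import math


class AdaptiveGradClipper:
    """Flag training steps whose gradient norm looks anomalous.

    Keeps a moving mean and moving standard deviation of log(grad_norm)
    and flags a step when the current log-norm exceeds the moving mean
    by more than `num_std` standard deviations, which the excerpt treats
    as a sign of an imminent gradient explosion.
    """

    def __init__(self, decay=0.99, num_std=4.0, warmup_steps=100):
        self.decay = decay
        self.num_std = num_std
        self.warmup_steps = warmup_steps
        self.step = 0
        self.mean = 0.0   # moving average of log grad norm
        self.var = 0.0    # moving variance of log grad norm

    def should_skip(self, grad_norm):
        self.step += 1
        log_norm = math.log(max(grad_norm, 1e-12))
        if self.step == 1:
            self.mean = log_norm
            return False
        anomalous = log_norm > self.mean + self.num_std * math.sqrt(self.var)
        skip = anomalous and self.step > self.warmup_steps
        if not anomalous:
            # Update the moving statistics only with non-anomalous norms.
            delta = log_norm - self.mean
            self.mean += (1.0 - self.decay) * delta
            self.var = self.decay * (self.var + (1.0 - self.decay) * delta * delta)
        return skip
```

In a training loop, the caller would compute the global gradient norm after the backward pass and skip the parameter update whenever `should_skip` returns True.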
GEM-SciDuet-train-110#paper-1290#slide-10
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-10
Model Comparison II Speed and Size
The Best of Both Worlds P 12
The Best of Both Worlds P 12
[]
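Uniform label smoothing with uncertainty 0.1, which the quoted ablations credit with roughly +0.7 BLEU for RNMT+ and +0.2 BLEU for Transformer Big, amounts to replacing each one-hot target with a slightly flattened distribution. The NumPy sketch below is illustrative only; whether the smoothing mass is spread over all tokens or only the incorrect ones varies by implementation, and here it excludes the true token.

```python
import numpy as np


def smoothed_targets(labels, vocab_size, uncertainty=0.1):
    """Uniform label smoothing: keep 1 - uncertainty on the true token and
    spread the remaining mass evenly over the other vocab_size - 1 tokens."""
    off_value = uncertainty / (vocab_size - 1)
    targets = np.full((len(labels), vocab_size), off_value, dtype=np.float32)
    targets[np.arange(len(labels)), labels] = 1.0 - uncertainty
    return targets


def smoothed_cross_entropy(logits, labels, uncertainty=0.1):
    """Token-level cross entropy against the smoothed target distribution."""
    logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    targets = smoothed_targets(labels, logits.shape[-1], uncertainty)
    return float(-(targets * log_probs).sum(axis=-1).mean())
```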
GEM-SciDuet-train-110#paper-1290#slide-11
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
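The merge step of the multi-column encoder described in this record's paper content ("a simple concatenation of individual column outputs", fed to an RNMT+ decoder) can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name, shapes, and column sizes are assumptions.

```python
# Illustrative sketch (not the paper's code): merging a multi-column encoder's
# outputs by simple concatenation, as described in the text above.
import numpy as np

def merge_multi_column(column_outputs):
    """Concatenate per-column encoder outputs along the feature axis.

    column_outputs: list of arrays, each of shape [time, hidden_size_i].
    Returns an array of shape [time, sum(hidden_size_i)] for the decoder
    to attend over.
    """
    return np.concatenate(column_outputs, axis=-1)

# Hypothetical example: a 10-step source sentence encoded by two independent
# columns (e.g. a Transformer column and an RNMT+ column), 512 units each.
transformer_col = np.random.randn(10, 512)
rnmt_col = np.random.randn(10, 512)
merged = merge_multi_column([transformer_col, rnmt_col])
assert merged.shape == (10, 1024)
```

The simplicity of the merge is deliberate: per the text, it lets the RNMT+ decoder itself decide how to combine information arriving from the two channels.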
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-11
Stability Ablations
Evaluate importance of four key techniques: Critical to stabilize training (especially with multi-head attention) * Indicates an unstable training run Synchronous training Significant quality drop for RNMT+ Successful only with a tailored learning-rate schedule The Best of Both Worlds P 13
Evaluate importance of four key techniques: Critical to stabilize training (especially with multi-head attention) * Indicates an unstable training run Synchronous training Significant quality drop for RNMT+ Successful only with a tailored learning-rate schedule The Best of Both Worlds P 13
[]
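The ablations in this record credit uniform label smoothing with roughly 0.7 BLEU for RNMT+ (the paper content in the following records states an uncertainty of 0.1). A minimal sketch of that smoothing, assuming the common formulation that spreads the leftover mass over the non-gold tokens; the exact normalization is an assumption, not stated in the text.

```python
# Minimal sketch of uniform label smoothing (uncertainty = 0.1), one of the
# four ablated techniques; illustrative only, not the authors' implementation.
import numpy as np

def smoothed_targets(target_ids, vocab_size, uncertainty=0.1):
    """Replace one-hot targets with a smoothed distribution: (1 - uncertainty)
    on the gold token, uncertainty spread uniformly over the remaining tokens
    (one of two common normalizations; the paper's exact choice is assumed)."""
    batch = len(target_ids)
    dist = np.full((batch, vocab_size), uncertainty / (vocab_size - 1))
    dist[np.arange(batch), target_ids] = 1.0 - uncertainty
    return dist

def smoothed_cross_entropy(log_probs, target_ids, uncertainty=0.1):
    """Cross-entropy against the smoothed distribution.
    log_probs: [batch, vocab] log-probabilities from the softmax."""
    vocab_size = log_probs.shape[-1]
    q = smoothed_targets(target_ids, vocab_size, uncertainty)
    return -np.sum(q * log_probs, axis=-1).mean()
```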
GEM-SciDuet-train-110#paper-1290#slide-12
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
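The learning-rate schedule in this record's paper content is garbled by extraction; read as Eq. (1) of the paper it appears to be lr = 10^{-4} · min(1 + t(n−1)/(np), n, n·(2n)^{(s−nt)/(e−s)}), with the rate held at 5·10^{-5} once the decay ends. Below is a minimal sketch under that reading; the reconstruction and the example hyper-parameters are assumptions.

```python
# Sketch of the RNMT+ learning-rate schedule as it reads in the text above
# (warm up linearly, hold constant, decay exponentially, then hold at 5e-5).
# t: current step, n: number of synchronous replicas, p: warmup steps,
# s: decay start step, e: decay end step.
def rnmt_plus_lr(t, n, p, s, e, base=1e-4, floor=5e-5):
    warmup = 1.0 + t * (n - 1) / (n * p)              # linear ramp-up
    decay = n * (2 * n) ** ((s - n * t) / (e - s))    # exponential decay
    lr = base * min(warmup, n, decay)
    return max(lr, floor)  # text: keep lr at 5e-5 after the decay ends

# Hypothetical settings (32 replicas; warmup/decay steps are illustrative only).
for step in (0, 16000, 20000, 37500, 50000):
    print(step, rnmt_plus_lr(step, n=32, p=500, s=600_000, e=1_200_000))
```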
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-12
The Best of Both Worlds II Hybrids
Strengths of each architecture: Highly expressive - continuous state space representation. Full receptive field - powerful feature extractor. Combining individual architecture strengths: Capture complementary information - Best of Both Worlds. Trainability - important concern with hybrids Connections between different types of layers need to be carefully designed. The Best of Both Worlds P 14
Strengths of each architecture: Highly expressive - continuous state space representation. Full receptive field - powerful feature extractor. Combining individual architecture strengths: Capture complementary information - Best of Both Worlds. Trainability - important concern with hybrids Connections between different types of layers need to be carefully designed. The Best of Both Worlds P 14
[]
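The slide above flags trainability as the main concern when connecting different layer types; the paper content in these records handles it by fine-tuning Transformer layers stacked on a pre-trained, frozen RNMT+ encoder (the cascaded encoder). A hedged PyTorch sketch of that wiring, assuming the frozen encoder returns (outputs, state) and already emits d_model-sized states; class and argument names are illustrative, not the paper's.

```python
# Hedged sketch of the cascaded-encoder idea: fine-tuned Transformer layers on
# top of a frozen, pre-trained recurrent encoder. Sizes are illustrative.
import torch
import torch.nn as nn

class CascadedEncoder(nn.Module):
    def __init__(self, pretrained_rnn_encoder: nn.Module,
                 d_model: int = 1024, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        self.rnn_encoder = pretrained_rnn_encoder
        # Freeze the pre-trained recurrent encoder, as described in the text.
        for p in self.rnn_encoder.parameters():
            p.requires_grad_(False)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.transformer_stack = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, src_embeddings):
        # Assumption: the frozen encoder returns (outputs, state) with outputs
        # of shape [batch, time, d_model] (e.g. an nn.LSTM).
        with torch.no_grad():
            rnn_states, _ = self.rnn_encoder(src_embeddings)
        # Transformer layers enrich the stateful representations (fine-tuned).
        return self.transformer_stack(rnn_states)
```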
GEM-SciDuet-train-110#paper-1290#slide-13
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 .", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β 1 = 0.9, β 2 = 0.999, = 10 −6 and vary the learning rate according to this schedule: lr = 10 −4 · min 1 + t · (n − 1) np , n, n · (2n) s−nt e−s (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10 −5 after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-13
Encoder Decoder Hybrids
Decoder - conditional LM Encoder - build feature representations Designed to contrast the roles. The Best of Both Worlds P 15
Decoder - conditional LM Encoder - build feature representations Designed to contrast the roles. The Best of Both Worlds P 15
[]
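A short structural sketch may help make the two hybrid encoders quoted in the record above concrete: the cascaded encoder fine-tunes transformer layers on top of a frozen, pre-trained RNMT+ (recurrent) encoder, and the multi-column encoder merges the outputs of independent encoder columns by simple concatenation. The sketch below is illustrative only — it uses PyTorch, assumes the pre-trained recurrent encoder returns a (batch, time, d_model) tensor, and picks layer counts and sizes arbitrarily; none of these details come from the paper or its released code.

import torch
import torch.nn as nn

class CascadedEncoder(nn.Module):
    # "Vertical" hybrid: transformer layers fine-tuned on top of a frozen,
    # pre-trained recurrent encoder, as described in the record above.
    def __init__(self, pretrained_rnn_encoder, d_model=512, num_layers=2, num_heads=8):
        super().__init__()
        self.rnn_encoder = pretrained_rnn_encoder
        for param in self.rnn_encoder.parameters():
            param.requires_grad = False  # keep the pre-trained encoder frozen
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, src_embeddings):
        states = self.rnn_encoder(src_embeddings)  # stateful (batch, time, d_model) features
        return self.transformer(states)            # enrich them with global self-attention

class MultiColumnEncoder(nn.Module):
    # "Horizontal" hybrid: independent encoder columns whose outputs are merged
    # by simple concatenation, as described for the best multi-column encoder.
    def __init__(self, columns):
        super().__init__()
        self.columns = nn.ModuleList(columns)

    def forward(self, src_embeddings):
        outputs = [column(src_embeddings) for column in self.columns]
        return torch.cat(outputs, dim=-1)

Either encoder would then feed a stateful (RNMT+-style) decoder, which the record above reports as the stronger choice for conditional generation.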
GEM-SciDuet-train-110#paper-1290#slide-14
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10^{-5}.", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^{-6} and vary the learning rate according to this schedule: lr = 10^{-4} · min(1 + t·(n − 1)/(np), n, n·(2n)^{(s − nt)/(e − s)}) (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10^{-5} after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-14
Encoder Layer Hybrids
Enrich stateful representations with global self-attention Pre-trained components to improve trainability Layer normalization at layer boundaries Cascaded Hybrid - vertical combination Multi-Column Hybrid - horizontal combination The Best of Both Worlds P 16
Enrich stateful representations with global self-attention Pre-trained components to improve trainability Layer normalization at layer boundaries Cascaded Hybrid - vertical combination Multi-Column Hybrid - horizontal combination The Best of Both Worlds P 16
[]
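The RNMT+ training recipe in the paper_content field above — the warm-up/constant/decay learning-rate schedule of Eq. (1) and the adaptive gradient clipping driven by the log of the gradient norm — can also be sketched in a few lines. This is a minimal illustration written only against the formula and prose quoted above, not the authors' code; the base rate of 1e-4 and floor of 5e-5 follow the text, while the moving-statistics decay and the warm-up count for the anomaly check are assumed values.

import math

def rnmt_plus_learning_rate(t, n, p, s, e, base=1e-4, floor=5e-5):
    # Eq. (1): lr = base * min(warm-up term, constant plateau, exponential decay),
    # held at `floor` once the decay has pushed it that low.
    # t: current step, n: number of replicas, p: warm-up steps,
    # s: decay start step, e: decay end step.
    warmup = 1.0 + t * (n - 1) / (n * p)
    plateau = float(n)
    decay = n * (2.0 * n) ** ((s - n * t) / (e - s))  # decay term as printed in Eq. (1)
    return max(base * min(warmup, plateau, decay), floor)

class GradNormAnomalyFilter:
    # Adaptive gradient clipping as described above: keep a moving average and a
    # moving standard deviation of log(grad_norm), and discard any step whose
    # gradient norm exceeds four standard deviations of the moving average.
    def __init__(self, decay=0.99, warmup_steps=100):
        self.decay = decay                 # decay of the moving statistics (assumed)
        self.warmup_steps = warmup_steps   # steps before the check is enforced (assumed)
        self.mean = 0.0
        self.var = 0.0
        self.count = 0

    def should_discard(self, grad_norm):
        x = math.log(grad_norm)
        if self.count >= self.warmup_steps and x > self.mean + 4.0 * math.sqrt(self.var):
            return True                    # likely an imminent gradient explosion
        if self.count == 0:
            self.mean = x
        else:
            delta = x - self.mean
            self.mean += (1.0 - self.decay) * delta
            self.var = self.decay * (self.var + (1.0 - self.decay) * delta * delta)
        self.count += 1
        return False

In a training loop the schedule would be queried every step for the optimizer's learning rate, and should_discard(global_grad_norm) would be checked after the backward pass, skipping the parameter update whenever it returns True.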
GEM-SciDuet-train-110#paper-1290#slide-15
1290
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrasebased approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm.", "In the first architectures that surpassed * Equal contribution.", "the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015) .", "The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g.", "Baidu (Zhou et al., 2016) , Google (Wu et al., 2016) , and Systran (Crego et al., 2016) .", "Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices.", "such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017) .", "Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017 ).", "The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.", "Most recently, the Transformer model (Vaswani et al., 2017) , which is based solely on a selfattention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.", "In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert.", "This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence.", "This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques.", "In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper.", "Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.", "In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely 
ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models.", "In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup.", "We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer.", "In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.", "Our contributions are three-fold: We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT).", "In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality.", "In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.", "Background In this section, we briefly discuss the commmonly used NMT architectures.", "RNN-based NMT Models -RNMT RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network.", "The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.", "The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) , and stacked decoders with unidirectional RNNs.", "Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) or GRU units (Cho et al., 2014) , and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.", "In Google-NMT (GNMT) (Wu et al., 2016) , the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers.", "The decoder is equipped with a single attention network and 8 uni-directional LSTM layers.", "Both the encoder and the decoder use residual skip connections between consecutive layers.", "In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture.", "Convolutional NMT Models -ConvS2S In the most successful convolutional sequence-tosequence model (Gehring et al., 2017) , both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by a gated linear units (GLU) (Dauphin et al., 2016) .", "Each decoder layer computes a separate dotproduct attention by using the current decoder layer output and the final encoder layer outputs.", "Positional embeddings are used to provide explicit positional information to the model.", "Following the practice in (Gehring et al., 2017) , we scale the gradients of the encoder layers to stabilize training.", "We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence.", "We follow the public ConvS2S codebase 1 in our experiments.", "Conditional Transformation-based NMT Models -Transformer The Transformer model (Vaswani et al., 2017) is motivated 
by two major design choices that aim to address deficiencies in the former two model families: (1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.", "(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of selfattention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer.", "The Transformer model still follows the encoder-decoder paradigm.", "Encoder transformer layers are built with two sub-modules: (1) a selfattention network and (2) a feed-forward network.", "Decoder transformer layers have an additional cross-attention layer sandwiched between the selfattention and feed-forward layers to attend to the encoder outputs.", "There are two details which we found very important to the model's performance: (1) Each sublayer in the transformer (i.e.", "self-attention, crossattention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout→ residual-add.", "(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.", "In this paper, we follow the latest version of the 1 https://github.com/facebookresearch/fairseq-py Transformer model in the Tensor2Tensor 2 codebase.", "A Theory-Based Characterization of NMT Architectures From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 3 .", "Possessing an infinite Markovian structure (and thus an infinite receptive fields) equips them to model sequential data (Elman, 1990) , especially natural language (Grefenstette et al., 2015) effectively.", "In practice, RNNs are notoriously hard to train (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001) , confirming the well known dilemma of trainability versus expressivity.", "Convolutional layers are adept at capturing local context and local correlations by design.", "A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow.", "In practice, this weakness is mitigated by stacking more convolutional layers (e.g.", "15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.", "The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989) , and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.", "On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g.", "sinusoidal positional encodings).", "Above theoretical characterizations will drive our explorations in the following sections.", "Experiment Setup We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively.", "Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as 
\"wordpieces\") using the approach described in (Schuster and Nakajima, 2012) .", "At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated.", "On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention.", "The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.", "We use a shared vocabulary of 32K sub-word units for each source-target language pair.", "No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets.", "We report all our results on newstest 2014, which serves as the test set.", "A combination of newstest 2012 and newstest 2013 is used for validation.", "To evaluate the models, we compute the BLEU metric on tokenized, true-case output.", "4 For each training run, we evaluate the model every 30 minutes on the dev set.", "Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations.", "We report the mean test score and standard deviation over the selected window.", "This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.", "To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments.", "We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015; Chen et al., 2017) to focus on evaluating the performance of individual models.", "RNMT+ Model Architecture of RNMT+ The newly proposed RNMT+ model architecture is shown in Figure 1 .", "Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model.", "There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT.", "For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer.", "The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model.", "Residual connections are added to the third layer and above for both the encoder and decoder.", "Inspired by the Transformer model, pergate layer normalization (Ba et al., 2016) is applied within each LSTM cell.", "Our empirical results show that layer normalization greatly stabilizes training.", "No non-linearity is applied to the LSTM output.", "A projection layer is added to the encoder final output.", "5 Multi-head additive attention is used instead of the single-head attention in the GNMT model.", "Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context.", "In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax by concatenating it with the layer input.", "This is important for both the quality of the models with multi-head attention and the stability of the training process.", "Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training.", "We compensate for the resulting longer per-step time 
with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.", "We apply the following regularization techniques during training.", "• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input.", "Attention dropout is also applied.", "• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015) .", "Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention.", "Similar to the observations in (Chorowski and Jaitly, 2016) , we found it beneficial to use a larger beam size (e.g.", "16, 20, etc.)", "during decoding when models are trained with label smoothing.", "• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10^{-5}.", "Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.", "We use the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^{-6} and vary the learning rate according to this schedule: lr = 10^{-4} · min(1 + t·(n − 1)/(np), n, n·(2n)^{(s − nt)/(e − s)}) (1) Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay.", "Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5 · 10^{-5} after the decay ends.", "This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017) .", "In contrast to the asynchronous training used for GNMT (Dean et al., 2012) , we train RNMT+ models with synchronous training .", "Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.", "To further stabilize training, we also use adaptive gradient clipping.", "We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion.", "More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.", "Model Analysis and Comparison In this section, we compare the results of RNMT+ with ConvS2S and Transformer.", "All models were trained with synchronous training.", "RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.", "For RNMT+, we use sentence-level crossentropy loss.", "Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).", "For ConvS2S and Transformer models, we use token-level cross-entropy loss.", "Each training batch contained 65536 source tokens and 65536 target tokens.", "For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) Table 2 shows our results on the WMT'14 En→De task.", "The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model 
improves by over 3 BLEU points.", "RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49.", "In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En-Fr task.", "Table 3 summarizes training performance and model statistics.", "The Transformer Base model 6 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.", "7 The BLEU scores for Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3.", "2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.", "3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.", "We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.", "4) In (Vaswani et al., 2017) , reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.", "Model Test Ablation Experiments In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models.", "We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance.", "We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently.", "By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance?", "(2) From Table 4 we draw the following conclusions about the four techniques: • Label Smoothing We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.", "• Multi-head Attention Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.", "• Layer Normalization Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used.", "Removing layer normalization results in unstable training runs for both models.", "Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case.", "To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.", "Hybrid NMT Models In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family.", "These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.", "Assessing Individual Encoders and Decoders In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and 
interpret the representations from the encoder and, at the same time, track the current target history.", "Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.", "We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations.", "We start by combining the encoder and decoder from different model families.", "Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.", "From Table 5 , it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful de-coder is beneficial for conditional language generation.", "Assessing Encoder Combinations Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information.", "Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.", "We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5 .", "We study two mixing schemes in the encoder (see Fig.", "2 ): (1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention.", "The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013; Devlin, 2017) .", "Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder.", "Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity.", "As shown in Table 6 , the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task.", "This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.", "(2) Multi-Column Encoder: As illustrated in Fig.", "2b , a multi-column encoder merges the outputs of several independent encoders into a single combined representation.", "Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination.", "A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation.", "Our best multi-column encoder performs a simple concatenation of individual column outputs.", "The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6.", "As shown in Table 6 , the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.", "28.84 ± 0.06 Table 6 : Results for hybrids with cascaded encoder and multi-column encoder.", "Conclusion In this work we explored the efficacy of several architectural and training techniques 
proposed in recent studies on seq2seq models for NMT.", "We demonstrated that many of these techniques are broadly applicable to multiple model architectures.", "Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks.", "We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts.", "We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+.", "We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.", "Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua?", "Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes?", "How transferable are the representations learned by the different architectures to other tasks?", "And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?" ] }
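A minimal illustrative sketch, not taken from the paper or its released code: one way the multi-column encoder combination described in the record above could be realized, merging the per-position outputs of two independent encoder "columns" by concatenation. The linear projection back to the model dimension, the function names, and the shapes are assumptions added for the example.

```python
import numpy as np

def multi_column_merge(transformer_out, rnmt_out, w_proj):
    """Merge two encoder columns computed over the same source sentence.

    transformer_out, rnmt_out: (src_len, d_model) outputs of two independent encoders.
    w_proj: (2 * d_model, d_model) projection so a single decoder can attend over the
            combined representation (the projection is an assumption; the paper only
            states that the best variant concatenates the column outputs).
    """
    combined = np.concatenate([transformer_out, rnmt_out], axis=-1)  # (src_len, 2*d_model)
    return combined @ w_proj                                         # (src_len, d_model)

# Toy usage with random "encoder outputs".
src_len, d_model = 7, 16
rng = np.random.default_rng(0)
merged = multi_column_merge(rng.normal(size=(src_len, d_model)),
                            rng.normal(size=(src_len, d_model)),
                            rng.normal(size=(2 * d_model, d_model)) / np.sqrt(2 * d_model))
print(merged.shape)  # (7, 16)
```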
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "2.4", "3", "4.1", "4.2", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Background", "RNN-based NMT Models -RNMT", "Convolutional NMT Models -ConvS2S", "Conditional Transformation-based NMT Models -Transformer", "A Theory-Based Characterization of NMT Architectures", "Experiment Setup", "Model Architecture of RNMT+", "Model Analysis and Comparison", "Ablation Experiments", "Hybrid NMT Models", "Assessing Individual Encoders and Decoders", "Assessing Encoder Combinations", "Conclusion" ] }
GEM-SciDuet-train-110#paper-1290#slide-15
Lessons Learnt
Need to separate other improvements from the architecture itself: Your good ol' architecture may shine with new modelling and training techniques Stronger baselines (Denkowski and Neubig, 2017) Dull Teachers - Smart Students A model with a sufficiently advanced lr-schedule is indistinguishable from magic. Hybrids have the potential, more than duct taping. Game is on for the next generation of NMT architectures
Need to separate other improvements from the architecture itself: Your good ol' architecture may shine with new modelling and training techniques Stronger baselines (Denkowski and Neubig, 2017) Dull Teachers - Smart Students A model with a sufficiently advanced lr-schedule is indistinguishable from magic. Hybrids have the potential, more than duct taping. Game is on for the next generation of NMT architectures
[]
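A brief illustrative sketch of the label-smoothing loss ablated in the preceding record; this is an assumption added for clarity, not the authors' code, and the smoothing value 0.1 is a common default rather than one quoted from the paper.

```python
import numpy as np

def label_smoothed_loss(log_probs, target, eps=0.1):
    """Cross-entropy against a smoothed target distribution.

    log_probs: (vocab,) log-probabilities predicted for one output position.
    target: index of the gold token.
    eps: probability mass moved from the gold token to all other tokens.
    """
    vocab = log_probs.shape[0]
    smoothed = np.full(vocab, eps / (vocab - 1))
    smoothed[target] = 1.0 - eps
    return float(-np.sum(smoothed * log_probs))

logits = np.array([2.0, 0.5, -1.0, 0.0])
log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
print(label_smoothed_loss(log_probs, target=0))
```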
GEM-SciDuet-train-111#paper-1297#slide-0
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Algorithm 1 Learn active learning policy via imitation learning. Input: large labeled data D, max episodes T, budget B, sample size K, the coin parameter β. Output: the learned policy. 1: M ← ∅ (the aggregated dataset) 2: initialise π_1 with a random policy 3: for τ = 1, ..., T do 4: D^lab, D^unl, D^evl ← dataPartition(D) 5: φ_1 ← trainModel(D^lab) 6: c ← coinToss(β) 7: for t ∈ 1, ..., B do 8: D^pool_rnd ← sampleUniform(D^unl, K) 9: s_t ← (D^lab, D^pool_rnd, φ_t) 10: a_t ← arg min_{x′ ∈ D^pool_rnd} loss(m^{x′}_{φ_t}, D^evl) 11: if c is head then (the expert) 12: x_t ← a_t 13: else (the policy) 14: x_t ← arg max_{x′ ∈ D^pool_rnd} π_τ(x′; s_t) 15: end if 16: D^lab ← D^lab + {(x_t, y_t)} 17: D^unl ← D^unl − {x_t} 18: M ← M + {(s_t, a_t)} 19: φ_{t+1} ← retrainModel(φ_t, D^lab) ...", "Algorithm 2 (policy transfer), surviving fragment: φ ← retrainModel(φ, D^lab) 10: end for 11: return D^lab and φ", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and cross-lingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the following: • Random sampling: The query datapoint is chosen randomly.", "• Diversity sampling: The query datapoint is arg min_x Σ_{x′ ∈ D^lab} Jaccard(x, x′), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "• Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg max_x − Σ_y p(y|x, D^lab) log p(y|x, D^lab), where p(y|x, D^lab) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationales (Sharma et al., 2015), which not only considers uncertainty but also checks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg max_x − Σ_{i=1}^{|x|} Σ_{y_i} p(y_i|x, D^lab) log p(y_i|x, D^lab), which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008).", "• PAL: A reinforcement learning based approach (Fang et al., 2017), which makes use of a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017), which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one-shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy-gradient based method which jointly learns the data representation, the selection heuristic, and the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Q-network, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks to supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms have been proposed for sequential prediction tasks, including SEARN (Daumé et al., 2009), AggreVaTe (Ross and Bagnell, 2014), DaD (Venkatraman et al., 2015), LOLS, and Deeply AggreVaTeD (Sun et al., 2017).", "Our work is closely related to DAGGER (Ross et al., 2011), which is guaranteed to find a good policy by addressing the dependent nature of the states encountered along a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
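A minimal sketch of the two ideas at the heart of Algorithm 1 in the record above: the one-step roll-out expert of Eq. (2), and the coin-toss mixture between the expert and the learned policy. This is an illustration under stated assumptions, not the released ALIL code; retrain_model, eval_loss, and policy_score are placeholder callables supplied by the caller.

```python
import random

def expert_action(candidates, labels, labeled_data, eval_set, retrain_model, eval_loss):
    """One-step roll-out expert: retrain the underlying model with each candidate
    added (using its oracle label) and keep the candidate giving the lowest
    evaluation loss."""
    best_x, best_loss = None, float("inf")
    for x in candidates:
        model = retrain_model(labeled_data + [(x, labels[x])])
        loss = eval_loss(model, eval_set)
        if loss < best_loss:
            best_x, best_loss = x, loss
    return best_x

def choose_query(candidates, state, expert_choice, policy_score, beta):
    """DAGGER-style mixture: with probability beta follow the expert's choice,
    otherwise take the learned policy's highest-scoring candidate."""
    if random.random() < beta:
        return expert_choice
    return max(candidates, key=lambda x: policy_score(x, state))
```

In the full algorithm the expert's action is recorded for every visited state regardless of which policy made the query, and the policy network is then retrained on the aggregated state-action pairs.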
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-0
Introduction
Raw unlabeled data points x1, x2, ... Classifier Oracle/Expert: Provides labels for queries At any time during the AL process, we have a current guess for the classifier AL Strategy: Query the point closest to the decision boundary Not clear whether heuristics lead to optimal querying behavior Not clear which hard-coded heuristic is good for a task at hand Can we learn the best active learning strategy?
Raw unlabeled data points x1, x2, ... Classifier Oracle/Expert: Provides labels for queries At any time during the AL process, we have a current guess for the classifier AL Strategy: Query the point closest to the decision boundary Not clear whether heuristics lead to optimal querying behavior Not clear which hard-coded heuristic is good for a task at hand Can we learn the best active learning strategy?
[]
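An illustrative sketch of the heuristic baselines described in the record above (assumed implementations added for clarity, not code from the paper): predictive-entropy uncertainty sampling for classification and total token entropy (TTE) for sequence labelling.

```python
import numpy as np

def predictive_entropy(class_probs):
    """class_probs: (num_classes,) posterior p(y | x) from the current classifier."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def total_token_entropy(token_marginals):
    """token_marginals: (seq_len, num_labels) per-token marginals p(y_i | x),
    e.g. from a CRF; TTE sums the entropy over all tokens of the sentence."""
    p = np.clip(token_marginals, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

# Usage: query the pool item with the largest score.
pool = [np.array([0.55, 0.45]), np.array([0.95, 0.05])]
print(int(np.argmax([predictive_entropy(p) for p in pool])))  # 0: the document the model is most uncertain about
```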
GEM-SciDuet-train-111#paper-1297#slide-1
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Algorithm 1 Learn active learning policy via imitation learning. Input: large labeled data D, max episodes T, budget B, sample size K, the coin parameter β. Output: the learned policy. 1: M ← ∅ (the aggregated dataset) 2: initialise π_1 with a random policy 3: for τ = 1, ..., T do 4: D^lab, D^unl, D^evl ← dataPartition(D) 5: φ_1 ← trainModel(D^lab) 6: c ← coinToss(β) 7: for t ∈ 1, ..., B do 8: D^pool_rnd ← sampleUniform(D^unl, K) 9: s_t ← (D^lab, D^pool_rnd, φ_t) 10: a_t ← arg min_{x′ ∈ D^pool_rnd} loss(m^{x′}_{φ_t}, D^evl) 11: if c is head then (the expert) 12: x_t ← a_t 13: else (the policy) 14: x_t ← arg max_{x′ ∈ D^pool_rnd} π_τ(x′; s_t) 15: end if 16: D^lab ← D^lab + {(x_t, y_t)} 17: D^unl ← D^unl − {x_t} 18: M ← M + {(s_t, a_t)} 19: φ_{t+1} ← retrainModel(φ_t, D^lab) ...", "Algorithm 2 (policy transfer), surviving fragment: φ ← retrainModel(φ, D^lab) 10: end for 11: return D^lab and φ", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and cross-lingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the following: • Random sampling: The query datapoint is chosen randomly.", "• Diversity sampling: The query datapoint is arg min_x Σ_{x′ ∈ D^lab} Jaccard(x, x′), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "• Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg max_x − Σ_y p(y|x, D^lab) log p(y|x, D^lab), where p(y|x, D^lab) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationales (Sharma et al., 2015), which not only considers uncertainty but also checks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg max_x − Σ_{i=1}^{|x|} Σ_{y_i} p(y_i|x, D^lab) log p(y_i|x, D^lab), which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008).", "• PAL: A reinforcement learning based approach (Fang et al., 2017), which makes use of a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017), which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "Woodward and Finn (2017) extended one-shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "Bachman et al. (2017) introduced a policy-gradient-based method which jointly learns the data representation, the selection heuristic, and the model prediction function.", "Fang et al. (2017) designed an active learning algorithm based on a deep Q-network, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks to supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms have been proposed for sequential prediction tasks, including SEARN (Daumé et al., 2009), AggreVaTe (Ross and Bagnell, 2014), DaD (Venkatraman et al., 2015), LOLS, and Deeply AggreVaTeD (Sun et al., 2017).", "Our work is closely related to DAGGER (Ross et al., 2011), which is guaranteed to find a good policy by addressing the dependent nature of the states encountered along a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to selecting the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
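The experiments above compare the learned policy against heuristic baselines: uncertainty sampling by predictive entropy and diversity sampling by Jaccard similarity over unigram features. A minimal sketch of those two baselines is given below; the function names and toy inputs are illustrative assumptions, not code from the released ALIL repository, and since the diversity formula in the extracted text is garbled, the variant shown (pick the pool document least similar to its nearest labelled document) is one plausible reading.

```python
import numpy as np

def predictive_entropy(prob_rows):
    """Uncertainty score: entropy of the model's class posterior for each datapoint."""
    p = np.clip(prob_rows, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def jaccard(a, b):
    """Jaccard coefficient between two sets of unigram features."""
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def uncertainty_query(pool_probs):
    """Pick the pool index with the highest predictive entropy."""
    return int(np.argmax(predictive_entropy(pool_probs)))

def diversity_query(pool_tokens, labelled_tokens):
    """Pick the pool document least similar to its closest labelled document."""
    similarity = [max((jaccard(x, l) for l in labelled_tokens), default=0.0)
                  for x in pool_tokens]
    return int(np.argmin(similarity))

if __name__ == "__main__":
    probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3]])  # toy class posteriors
    print(uncertainty_query(probs))                            # index 1 is most uncertain
    pool = [{"great", "sound", "quality"}, {"broken", "refund"}]
    labelled = [{"great", "price"}]
    print(diversity_query(pool, labelled))                     # index 1 shares no words
```

In the paper's NER setting the uncertainty idea is applied token-wise (total token entropy); the sketch only covers the document-level classification case.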
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-1
Agent-based Active Learning
Need to train an AL agent to tell what data to select next, given: the previously selected data; the pool of unlabeled data available; the underlying classifier learned so far. [Diagram: raw unlabeled data points x1, x2, ...; Classifier; Oracle/Expert provides labels for queries]
Need to train an AL agent to tell what data to select next, given: the previously selected data; the pool of unlabeled data available; the underlying classifier learned so far. [Diagram: raw unlabeled data points x1, x2, ...; Classifier; Oracle/Expert provides labels for queries]
[]
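The slide record above depicts the agent-in-the-loop view of active learning: an agent repeatedly picks a point from the unlabeled pool, an oracle reveals its label, and the classifier is retrained. The sketch below simulates one such pool-based episode with a per-step reward equal to the negative evaluation loss, as in the paper's decision-process formulation; the scikit-learn classifier, the class-balanced seed, and the `query_strategy` callback signature are assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def run_episode(X, y, X_dev, y_dev, query_strategy, budget=20, seed=0):
    """Simulate one pool-based AL episode: query, reveal label, retrain, collect reward."""
    rng = np.random.default_rng(seed)
    # Seed with one example per class so the classifier can be fit from the first step.
    labelled = [int(np.where(y == c)[0][0]) for c in np.unique(y)]
    pool = [i for i in range(len(X)) if i not in labelled]
    model = LogisticRegression(max_iter=200).fit(X[labelled], y[labelled])
    rewards = []
    for _ in range(budget):
        idx = query_strategy(model, X, pool, labelled, rng)   # the agent's action
        labelled.append(idx)                                  # the oracle reveals y[idx]
        pool.remove(idx)
        model = LogisticRegression(max_iter=200).fit(X[labelled], y[labelled])
        # Per-step reward: negative loss of the retrained model on the evaluation set.
        rewards.append(-log_loss(y_dev, model.predict_proba(X_dev), labels=np.unique(y)))
    return model, rewards

def random_strategy(model, X, pool, labelled, rng):
    """Baseline agent: pick a pool index uniformly at random."""
    return int(rng.choice(pool))

if __name__ == "__main__":
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    _, rewards = run_episode(X[:200], y[:200], X[200:], y[200:], random_strategy)
    print(round(rewards[-1], 3))
```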
GEM-SciDuet-train-111#paper-1297#slide-2
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
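The abstract above claims that the learned query strategy transfers to new domains and languages. At test time the trained policy simply scores the target pool and queries the highest-scoring point at each step (the paper's cold-start transfer, Algorithm 2). A hedged sketch of that transfer loop follows; the `policy_score` callable, the placeholder scorer, and the logistic-regression learner are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def transfer_policy(policy_score, X_pool, oracle, budget=30):
    """Cold-start AL on a target pool, driven by a pre-trained query-scoring policy."""
    labelled_idx, labelled_y, pool = [], [], list(range(len(X_pool)))
    clf = None
    for _ in range(budget):
        scores = [policy_score(clf, X_pool, i, labelled_idx) for i in pool]
        pick = pool[int(np.argmax(scores))]       # query the highest-preference point
        labelled_idx.append(pick)
        labelled_y.append(oracle(pick))           # the human annotator reveals the label
        pool.remove(pick)
        if len(set(labelled_y)) > 1:              # fit once both classes have been seen
            clf = LogisticRegression(max_iter=200).fit(X_pool[labelled_idx], labelled_y)
    return clf, labelled_idx

if __name__ == "__main__":
    X, y = make_classification(n_samples=200, n_features=6, random_state=2)
    # Placeholder scorer standing in for the learned policy network.
    dummy_score = lambda clf, X_, i, lab: float(np.linalg.norm(X_[i]))
    clf, chosen = transfer_policy(dummy_score, X, oracle=lambda i: int(y[i]))
    print(len(chosen), None if clf is None else round(clf.score(X, y), 3))
```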
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "Woodward and Finn (2017) extended one-shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "Bachman et al. (2017) introduced a policy-gradient-based method which jointly learns the data representation, the selection heuristic, and the model prediction function.", "Fang et al. (2017) designed an active learning algorithm based on a deep Q-network, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks to supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms have been proposed for sequential prediction tasks, including SEARN (Daumé et al., 2009), AggreVaTe (Ross and Bagnell, 2014), DaD (Venkatraman et al., 2015), LOLS, and Deeply AggreVaTeD (Sun et al., 2017).", "Our work is closely related to DAGGER (Ross et al., 2011), which is guaranteed to find a good policy by addressing the dependent nature of the states encountered along a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to selecting the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
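Section 3 of the paper text above defines the algorithmic expert as a one-step roll-out: for each candidate in a small random subset of the pool, retrain the underlying model with that candidate added (its true label is available to the simulator but hidden from the learner) and return the candidate whose retrained model has the lowest loss on the evaluation set (Eq. 2). A minimal sketch of that computation is below, using a logistic-regression stand-in for the underlying model; the function name, the data split, and the fixed candidate indices are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def expert_action(X, y, labelled, candidates, X_dev, y_dev):
    """One-step roll-out: for each candidate, retrain with its (hidden) true label added
    and return the candidate whose retrained model has the lowest evaluation loss."""
    best_idx, best_loss = None, np.inf
    for i in candidates:                          # candidates: random subset of the pool, size K
        trial = labelled + [i]
        clf = LogisticRegression(max_iter=200).fit(X[trial], y[trial])
        loss = log_loss(y_dev, clf.predict_proba(X_dev), labels=np.unique(y))
        if loss < best_loss:
            best_idx, best_loss = i, loss
    return best_idx

if __name__ == "__main__":
    X, y = make_classification(n_samples=120, n_features=8, random_state=1)
    X_tr, y_tr, X_dev, y_dev = X[:100], y[:100], X[100:], y[100:]
    labelled = [int(np.where(y_tr == c)[0][0]) for c in np.unique(y_tr)]  # one seed per class
    candidates = [10, 20, 30, 40, 50]                                     # K = 5 sampled points
    print(expert_action(X_tr, y_tr, labelled, candidates, X_dev, y_dev))
```

Searching over only K sampled candidates (rather than the whole pool) is what keeps this expert cheap enough to run inside every step of policy training.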
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-2
AL Query Strategy by an Agent
[Diagram: raw unlabeled data points x1, x2, ...; the tutoring AL agent and learning student (classifier); Oracle/Expert provides labels for queries]
[Diagram: raw unlabeled data points x1, x2, ...; the tutoring AL agent and learning student (classifier); Oracle/Expert provides labels for queries]
[]
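Putting the pieces together, Algorithm 1 of the paper collects trajectories with a DAGGER-style mixture of the expert and the current policy (a coin with parameter beta tossed before each roll-out), aggregates the resulting state-action pairs, and retrains the policy on the aggregate. The sketch below mirrors that control flow under strong simplifications that are assumptions of this illustration, not the paper's setup: two hand-crafted state-action features and a linear ranker in place of the feedforward policy network.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def features(clf, X, i, labelled):
    """Toy state-action features: model uncertainty on the candidate and its distance
    to the closest already-labelled point (stand-ins for learned representations)."""
    unc = entropy(clf.predict_proba(X[i:i + 1])[0])
    dist = float(np.min(np.linalg.norm(X[labelled] - X[i], axis=1)))
    return np.array([unc, dist])

def expert_pick(X, y, labelled, cands, X_dev, y_dev):
    """One-step roll-out expert, as in the earlier sketch."""
    losses = [log_loss(y_dev,
                       LogisticRegression(max_iter=200)
                       .fit(X[labelled + [i]], y[labelled + [i]])
                       .predict_proba(X_dev),
                       labels=np.unique(y)) for i in cands]
    return cands[int(np.argmin(losses))]

def beta_exponential(tau):
    return 0.9 ** tau   # the paper also tries fixed 0.5, linear, and inverse-sigmoid schedules

def train_policy(X, y, X_dev, y_dev, episodes=8, budget=10, K=5, seed=0):
    rng = np.random.default_rng(seed)
    mem_feats, mem_labels = [], []               # aggregated state-action pairs (DAGGER-style)
    policy = None
    for tau in range(episodes):
        labelled = [int(np.where(y == c)[0][0]) for c in np.unique(y)]
        pool = [i for i in range(len(X)) if i not in labelled]
        clf = LogisticRegression(max_iter=200).fit(X[labelled], y[labelled])
        follow_expert = rng.random() < beta_exponential(tau)    # coin toss before the roll-out
        for _ in range(budget):
            cands = [int(c) for c in rng.choice(pool, size=K, replace=False)]
            feats = np.stack([features(clf, X, i, labelled) for i in cands])
            a_star = expert_pick(X, y, labelled, cands, X_dev, y_dev)
            for j, i in enumerate(cands):                       # aggregate expert supervision
                mem_feats.append(feats[j]); mem_labels.append(int(i == a_star))
            if follow_expert or policy is None:
                pick = a_star                                   # follow the expert
            else:
                pick = cands[int(np.argmax(policy.decision_function(feats)))]
            labelled.append(pick); pool.remove(pick)
            clf = LogisticRegression(max_iter=200).fit(X[labelled], y[labelled])
        policy = LogisticRegression(max_iter=500).fit(np.array(mem_feats), np.array(mem_labels))
    return policy

if __name__ == "__main__":
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    policy = train_policy(X[:220], y[:220], X[220:], y[220:])
    print(policy.coef_)   # learned preference over the two toy features
```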
GEM-SciDuet-train-111#paper-1297#slide-4
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
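The abstract above notes that the AL strategy is learned with a feedforward network mapping situations to query datapoints. Concretely, the scorer assigns a preference to every candidate in the pool, the scores are normalised with a softmax, and the log-probability of the expert's pick is maximised when retraining the policy on the aggregated state-action pairs. The small numpy sketch below illustrates this objective; the two-layer architecture, hidden size, and feature dimensions are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_policy(d_in, d_hidden=32):
    """A two-layer feedforward scorer: state-action features -> scalar preference score."""
    return {"W1": rng.normal(0.0, 0.1, (d_in, d_hidden)), "b1": np.zeros(d_hidden),
            "W2": rng.normal(0.0, 0.1, (d_hidden, 1)),    "b2": np.zeros(1)}

def score(params, feats):
    h = np.maximum(0.0, feats @ params["W1"] + params["b1"])   # ReLU hidden layer
    return (h @ params["W2"] + params["b2"]).ravel()           # one score per candidate

def nll_expert_choice(params, feats, expert_idx):
    """Negative log-probability of the expert's pick under a softmax over the candidates."""
    s = score(params, feats)
    s = s - s.max()                                            # numerical stability
    log_probs = s - np.log(np.exp(s).sum())
    return float(-log_probs[expert_idx])

if __name__ == "__main__":
    feats = rng.normal(size=(5, 8))      # K = 5 candidates with 8-dimensional features
    params = init_policy(d_in=8)
    print(round(nll_expert_choice(params, feats, expert_idx=2), 3))
```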
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
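The "Algorithmic Expert" passage in the record above (Eq. 2) picks the query whose one-step roll-out gives the lowest loss on the evaluation set. The snippet below is a minimal sketch of that selection step, not the authors' released code: the scikit-learn-style model, the log-loss, and the size-K random candidate subset are illustrative assumptions.

```python
# Hedged sketch of the one-step roll-out expert action (Eq. 2 in the text above).
# Assumptions: a scikit-learn-style classifier, log-loss as loss(), and a small
# random candidate subset of the unlabelled pool (size K), labels known in simulation.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss


def expert_action(model, X_lab, y_lab, X_cand, y_cand, X_evl, y_evl):
    """Return the index (into X_cand) of the candidate whose addition to the
    labelled set yields the lowest evaluation loss after re-training."""
    best_idx, best_loss = None, np.inf
    for i in range(len(X_cand)):
        X_aug = np.vstack([X_lab, X_cand[i:i + 1]])
        y_aug = np.concatenate([y_lab, y_cand[i:i + 1]])
        m = clone(model).fit(X_aug, y_aug)            # one-step roll-out re-training
        loss = log_loss(y_evl, m.predict_proba(X_evl), labels=m.classes_)
        if loss < best_loss:
            best_idx, best_loss = i, loss
    return best_idx


# Toy usage with random data (assumes both classes occur in the labelled seed).
rng = np.random.RandomState(0)
X = rng.randn(60, 5)
y = (X[:, 0] > 0).astype(int)
X_lab, y_lab = X[:10], y[:10]
X_cand, y_cand = X[10:15], y[10:15]                   # K = 5 random candidates
X_evl, y_evl = X[15:], y[15:]
print(expert_action(LogisticRegression(), X_lab, y_lab, X_cand, y_cand, X_evl, y_evl))
```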
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-4
Training Agents Policy
IDEA: Let's train the agent based on AL simulation for a rich-data task and then transfer it to the AL problem of interest. This is Meta-Learning: Learning to Actively Learn. Synthesize many AL problems. Use Imitation/Reinforcement Learning algorithms.
IDEA: Let's train the agent based on AL simulation for a rich-data task and then transfer it to the AL problem of interest. This is Meta-Learning: Learning to Actively Learn. Synthesize many AL problems. Use Imitation/Reinforcement Learning algorithms.
[]
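The "Policy Network" and "Re-training the Policy Network" passages in the record above describe a two-hidden-layer feedforward scorer trained so that, under a softmax over the candidate pool, the expert's chosen datapoint gets the highest probability. Below is a hedged PyTorch sketch of that scoring and update step; the feature dimensions and the concatenation of candidate and state summaries are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch: score each candidate with a 2-hidden-layer MLP and train with a
# softmax over the pool so that the expert-chosen candidate is most probable.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cand_feats, state_feat):
        # cand_feats: (pool_size, d_c); state_feat: (d_s,) summarising D_lab / D_unl
        state = state_feat.expand(cand_feats.size(0), -1)
        x = torch.cat([cand_feats, state], dim=-1)
        return self.mlp(x).squeeze(-1)                # one preference score per candidate


def imitation_step(policy, optimizer, cand_feats, state_feat, expert_idx):
    """One SGD step on -log P(expert action | pool), i.e. cross-entropy over scores."""
    scores = policy(cand_feats, state_feat)
    loss = nn.functional.cross_entropy(scores.unsqueeze(0),
                                       torch.tensor([expert_idx]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage with random features (dimensions are made up for illustration).
d_c, d_s, pool = 20, 10, 5
policy = PolicyNet(feat_dim=d_c + d_s)
opt = torch.optim.SGD(policy.parameters(), lr=0.01)
print(imitation_step(policy, opt, torch.randn(pool, d_c), torch.randn(d_s), expert_idx=2))
```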
GEM-SciDuet-train-111#paper-1297#slide-6
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
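The "Dynamically changing β" passage above lists three decay schedules for the expert/policy mixing coefficient. The sketch below simply restates those formulas in code, with tau counting DAGGER iterations.

```python
# The three mixing-coefficient schedules described above, as plain functions of
# the DAGGER iteration index tau (tau = 1, 2, ...).
import math

def beta_linear(tau):
    return max(0.5, 1.0 - 0.01 * tau)

def beta_exponential(tau):
    return 0.9 ** tau

def beta_inverse_sigmoid(tau):
    return 5.0 / (5.0 + math.exp(tau / 5.0))

for tau in (1, 10, 50, 100):
    print(tau, beta_linear(tau), beta_exponential(tau), beta_inverse_sigmoid(tau))
```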
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-6
Imitation Learning
The algorithmic oracle gives the correct action in each world state. Train the agent (policy network) to prefer the correct action compared to incorrect ones (i.e. classification). The collected state-action pairs are not i.i.d., hence problematic for classifier learning. Data Aggregation (DAGGER): once in a while, use the predicted action by the policy network during training. This is to make sure the policy sees bad states and the correct action to recover from them at training time.
The algorithmic oracle gives the correct action in each world state. Train the agent (policy network) to prefer the correct action compared to incorrect ones (i.e. classification). The collected state-action pairs are not i.i.d., hence problematic for classifier learning. Data Aggregation (DAGGER): once in a while, use the predicted action by the policy network during training. This is to make sure the policy sees bad states and the correct action to recover from them at training time.
[]
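The slide above summarises the DAGGER-style data collection: states are visited either by the expert or by the current policy (one coin toss per episode), but every visited state is paired with the expert's action. The sketch below is schematic; expert_action, policy_action, and apply_action are placeholder callables standing in for the components described in the paper, not real APIs.

```python
# Schematic DAGGER-style episode roll-out (cf. Algorithm 1 in the record above).
# All callables are placeholders: expert_action/policy_action pick a candidate
# for the current AL state, and apply_action moves the chosen query into D_lab.
import random

def collect_episode(state, expert_action, policy_action, apply_action,
                    budget, beta, aggregated):
    """Visit `budget` states; pair each with the expert's action; follow the
    expert with probability beta (one coin toss per episode), else the policy."""
    use_expert = random.random() < beta               # the coin toss
    for _ in range(budget):
        a_expert = expert_action(state)               # correct action for this state
        aggregated.append((state, a_expert))          # always store the expert label
        a_taken = a_expert if use_expert else policy_action(state)
        state = apply_action(state, a_taken)          # move to the next AL state
    return aggregated

# Toy usage with trivial placeholders (the state is just a counter here).
data = collect_episode(state=0,
                       expert_action=lambda s: s % 3,
                       policy_action=lambda s: 0,
                       apply_action=lambda s, a: s + 1,
                       budget=4, beta=0.5, aggregated=[])
print(data)
```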
GEM-SciDuet-train-111#paper-1297#slide-7
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-7
Algorithmic Oracle
It computes the correct action in each world state: re-train the underlying model once for every possible query/action and mark the one leading to the most accurate prediction on the evaluation set, i.e. argmax over (xi, yi) in Pool of Accuracy(Retrain(current model, xi, yi), evaluation set). Too slow for typical large pools of data. IDEA: randomly sample a subset of the pool and maximize over it. Leads to efficient training and effective learned policies.
It computes the correct action in each world state: re-train the underlying model once for every possible query/action and mark the one leading to the most accurate prediction on the evaluation set, i.e. argmax over (xi, yi) in Pool of Accuracy(Retrain(current model, xi, yi), evaluation set). Too slow for typical large pools of data. IDEA: randomly sample a subset of the pool and maximize over it. Leads to efficient training and effective learned policies.
[]
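To make the DAGGER-style procedure of Algorithm 1 concrete, here is a compact sketch of the policy-training loop. It is a sketch under stated assumptions rather than a faithful reimplementation: partition, retrain_model, retrain_policy, the expert callable (e.g. the one-step roll-out sketched earlier), and the policy scoring interface policy(x, state) are all hypothetical stand-ins for the components the paper describes.

```python
import random

def train_al_policy(D, T, B, K, beta, expert, policy,
                    partition, retrain_model, retrain_policy):
    """DAGGER-style imitation learning of the AL policy (cf. Algorithm 1).
    expert(state, eval_set) returns the roll-out action; policy(x, state)
    returns a preference score for candidate x in the given state."""
    M = []                                       # aggregated (state, expert action) pairs
    for tau in range(T):                         # simulated AL episodes
        labeled, unlabeled, eval_set = partition(D)
        model = retrain_model(None, labeled)
        follow_expert = random.random() < beta   # coin toss: expert vs. current policy
        for t in range(B):
            cands = random.sample(unlabeled, min(K, len(unlabeled)))
            state = (list(labeled), cands, model)
            a_star = expert(state, eval_set)     # expert supervision for this state
            x = a_star if follow_expert else max(cands, key=lambda c: policy(c, state))
            labeled.append(x)
            unlabeled.remove(x)
            M.append((state, a_star))            # dataset aggregation (DAGGER)
            model = retrain_model(model, labeled)
        policy = retrain_policy(policy, M)       # refit the policy network on M
    return policy
```

In the reported experiments T = 100 episodes, B = 100 (text classification) or 200 (NER) queries per episode, K = 5, and the mixing coefficient beta is fixed at 0.5, with the coin tossed once before each full trajectory roll-out.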
GEM-SciDuet-train-111#paper-1297#slide-8
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
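The mixing-coefficient schedules compared in the analysis above are simple functions of the DAGGER iteration τ. Below is a minimal sketch of the four variants using the formulas quoted in the text; the function name and the printed comparison are illustrative additions, not part of the ALIL release.

```python
import math

def beta_schedule(tau: int, kind: str = "fixed") -> float:
    """Mixing coefficient beta_tau in [0, 1]: probability of following the
    algorithmic expert (vs. the learned policy) when collecting iteration tau."""
    if kind == "fixed":          # constant schedule used in the main experiments
        return 0.5
    if kind == "linear":         # beta_tau = max(0.5, 1 - 0.01 * tau)
        return max(0.5, 1.0 - 0.01 * tau)
    if kind == "exponential":    # beta_tau = 0.9 ** tau
        return 0.9 ** tau
    if kind == "inv_sigmoid":    # beta_tau = 5 / (5 + exp(tau / 5))
        return 5.0 / (5.0 + math.exp(tau / 5.0))
    raise ValueError(f"unknown schedule: {kind}")

# Emphasis shifts from exploration (expert) towards exploitation (policy).
for tau in (0, 10, 50, 100):
    print(tau, [round(beta_schedule(tau, k), 3)
                for k in ("fixed", "linear", "exponential", "inv_sigmoid")])
```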
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-8
Experiments: Task 1, text classification
Sentiment Classification: positive/negative sentiment of a review; train the AL policy on one product and apply it to reviews of another. Authorship Profiling: gender of the author of a tweet; train the AL policy on one language and apply it to another. Direct transfer: initialize the classifier on the source data, without any active learning on the target. Cold-start: train the classifier from random initialization, then continue training with the AL agent. Warm-start: start from the classifier pre-trained on the source data, then continue training with the AL agent.
Sentiment Classification: positive/negative sentiment of a review; train the AL policy on one product and apply it to reviews of another. Authorship Profiling: gender of the author of a tweet; train the AL policy on one language and apply it to another. Direct transfer: initialize the classifier on the source data, without any active learning on the target. Cold-start: train the classifier from random initialization, then continue training with the AL agent. Warm-start: start from the classifier pre-trained on the source data, then continue training with the AL agent.
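The three transfer settings listed above differ only in how the target-side classifier is initialised before any querying. The sketch below illustrates them on a toy logistic-regression model with synthetic source and target data; random selection stands in for the learned AL policy, and all names and numbers are illustrative rather than taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, epochs=200, lr=0.1):
    """Tiny logistic regression trained with gradient descent; passing `w`
    warm-starts training from an existing parameter vector."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(int) == y).mean())

# Toy source and target domains (the target is a shifted copy of the source).
Xs = rng.normal(size=(200, 5)); ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)
Xt = rng.normal(size=(200, 5)) + 0.3; yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(int)
Xt_pool, yt_pool, Xt_test, yt_test = Xt[:150], yt[:150], Xt[150:], yt[150:]

w_src = train_logreg(Xs, ys)

# Direct transfer: apply the source classifier to the target test set, no AL.
print("direct ", accuracy(w_src, Xt_test, yt_test))

# Simulate an AL budget of B labelled target points (picked at random here,
# in place of the learned policy, which is beyond this sketch).
B = 30
idx = rng.choice(len(Xt_pool), size=B, replace=False)

# Cold start: train from scratch on the B queried points.
print("cold   ", accuracy(train_logreg(Xt_pool[idx], yt_pool[idx]), Xt_test, yt_test))

# Warm start: continue training from the source parameters on the same points.
print("warm   ", accuracy(train_logreg(Xt_pool[idx], yt_pool[idx], w=w_src), Xt_test, yt_test))
```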
[]
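Applying a trained policy to a new language (Algorithm 2 and the NER transfer setup described earlier) amounts to one cold-start episode in which the frozen policy greedily selects B points from the target train pool and the model is retrained after each query. A hedged sketch with callable placeholders standing in for the policy scorer, the oracle, and CRF retraining; only the control flow follows the paper.

```python
def transfer_policy(policy_score, oracle, retrain, budget, pool):
    """One cold-start AL episode on the target language: the frozen policy
    greedily picks `budget` points; no further policy updates are made."""
    labelled, model = [], retrain([])
    for _ in range(budget):
        best = max(pool, key=lambda x: policy_score(model, labelled, x))
        labelled.append((best, oracle(best)))
        pool = [x for x in pool if x != best]
        model = retrain(labelled)
    return model, labelled

# Toy usage: "sentences" are integers, the oracle labels parity, the "model"
# just remembers its labelled set, and the policy prefers larger inputs.
model, chosen = transfer_policy(
    policy_score=lambda m, lab, x: x,
    oracle=lambda x: x % 2,
    retrain=lambda lab: dict(lab),
    budget=3,
    pool=list(range(10)))
print(chosen)
```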
GEM-SciDuet-train-111#paper-1297#slide-9
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
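The "AL as a decision process" view from the abstract, formalised in Section 2 of the body below, boils down to an episode loop in which each query is an action and the per-step reward is the negative loss of the retrained model on a held-out evaluation set. A minimal sketch with generic callables; the toy model and loss at the end are placeholders (the evaluation set is closed over by `eval_loss`) used only to make the snippet runnable.

```python
import random

def al_episode(select, oracle, retrain, eval_loss, d_lab, d_unl, budget):
    """One pool-based AL episode as a decision process: at each step an action
    selects a pool point, the oracle reveals its label, the model is retrained,
    and the reward is the negative loss on the evaluation set."""
    model, rewards = retrain(d_lab), []
    for _ in range(budget):
        x = select(model, d_lab, d_unl)        # action a_t
        y = oracle(x)                          # label revealed by the oracle
        d_lab = d_lab + [(x, y)]               # grow the labelled set
        d_unl = [u for u in d_unl if u != x]   # shrink the unlabelled pool
        model = retrain(d_lab)                 # update the model parameters
        rewards.append(-eval_loss(model))      # R(s_t, a_t, s_{t+1})
    return sum(rewards)                        # "anytime" sum-of-rewards objective

# Toy usage: the "model" is the mean of observed labels, the loss is the
# squared distance of that mean to 0.5, and selection is random.
random.seed(0)
pool = list(range(20))
ret = al_episode(
    select=lambda m, lab, unl: random.choice(unl),
    oracle=lambda x: x % 2,
    retrain=lambda lab: (sum(y for _, y in lab) / len(lab)) if lab else 0.0,
    eval_loss=lambda m: (m - 0.5) ** 2,
    d_lab=[(0, 0)], d_unl=pool[1:], budget=5)
print(ret)
```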
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
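The algorithmic expert of eqn (2) in the content above performs a one-step look-ahead: for each candidate in a random size-K subset of the pool it retrains the model with that candidate's (known, simulated) label and picks the candidate whose retrained model has the lowest evaluation loss. A hedged sketch with generic retrain and loss callables standing in for the CRF or CNN models used in the paper; the toy threshold model at the end is invented for illustration.

```python
import random

def expert_action(d_lab, d_unl, d_evl, retrain, eval_loss, k=5, seed=None):
    """One-step look-ahead expert: argmin over a random K-subset of the pool of
    the evaluation loss of the model retrained with that candidate added."""
    rng = random.Random(seed)
    candidates = rng.sample(d_unl, min(k, len(d_unl)))
    best_x, best_loss = None, float("inf")
    for x, y in candidates:                     # y is known in simulation
        model = retrain(d_lab + [(x, y)])       # roll out one step
        l = eval_loss(model, d_evl)
        if l < best_loss:
            best_x, best_loss = x, l
    return best_x

# Toy usage: 1-D inputs, "model" = mean of labelled inputs used as a threshold.
data = [(v, int(v > 0.5)) for v in [0.1, 0.2, 0.45, 0.55, 0.8, 0.9]]
retrain = lambda lab: sum(x for x, _ in lab) / len(lab)
eval_loss = lambda m, evl: sum((int(x > m) - y) ** 2 for x, y in evl)
print(expert_action(d_lab=data[:2], d_unl=data[2:], d_evl=data,
                    retrain=retrain, eval_loss=eval_loss, k=3, seed=1))
```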
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-9
Experiments: Baseline methods
PAL (Fang et al., 2017): a deep reinforcement learning based approach that uses a deep Q-network to make selection decisions for stream-based AL
PAL (Fang et al., 2017): a deep reinforcement learning based approach that uses a deep Q-network to make selection decisions for stream-based AL
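The heuristic baselines referenced alongside PAL on this slide and in Section 4 reduce to simple scoring rules over the pool: predictive entropy for classification, total token entropy (TTE) for sequence labelling, and a Jaccard-based diversity score. A sketch of the three scorers; the aggregation over the labelled set in the diversity score (a max) is an assumption, and the probability inputs are presumed to come from whatever underlying classifier or CRF is in use.

```python
import math

def predictive_entropy(class_probs):
    """Uncertainty sampling for classification: the entropy H(y | x)."""
    return -sum(p * math.log(p) for p in class_probs if p > 0)

def total_token_entropy(token_marginals):
    """TTE for sequence labelling: sum of per-token label entropies."""
    return sum(predictive_entropy(p) for p in token_marginals)

def diversity_score(candidate_tokens, labelled_docs_tokens):
    """Diversity sampling: the candidate minimising this score (its largest
    Jaccard overlap with any labelled document) is the most diverse choice."""
    cand = set(candidate_tokens)
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(jaccard(cand, set(d)) for d in labelled_docs_tokens)

# Example: rank two pool sentences by TTE, and score one for diversity.
print(total_token_entropy([[0.5, 0.5], [0.9, 0.1]]))
print(diversity_score("the film was great".split(),
                      ["the movie was great".split(), "terrible plot".split()]))
```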
[]
GEM-SciDuet-train-111#paper-1297#slide-11
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
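The underlying text classifier described in Section 4.1 is a small convolutional model: 50 ReLU filters of width 3 over fixed multilingual word embeddings, averaged into a 50-dimensional document vector and passed to a softmax. A forward-pass sketch in NumPy; the random embedding matrix, the 40-dimensional embedding size, and the vocabulary size are placeholders for the fixed CCA-trained embeddings used in the paper, and training code is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, EMB_DIM, N_FILTERS, WIDTH, N_CLASSES = 1000, 40, 50, 3, 2
embeddings = rng.normal(size=(VOCAB, EMB_DIM))          # fixed, not trained
conv_w = rng.normal(0, 0.1, (N_FILTERS, WIDTH * EMB_DIM))
conv_b = np.zeros(N_FILTERS)
out_w = rng.normal(0, 0.1, (N_FILTERS, N_CLASSES))
out_b = np.zeros(N_CLASSES)

def doc_representation(token_ids):
    """h(x): width-3 ReLU convolution over word embeddings, averaged over
    positions, giving a 50-dimensional document vector."""
    E = embeddings[token_ids]                            # (len, EMB_DIM)
    windows = [E[i:i + WIDTH].reshape(-1)                # concatenate 3 embeddings
               for i in range(len(token_ids) - WIDTH + 1)]
    H = np.maximum(0, np.stack(windows) @ conv_w.T + conv_b)
    return H.mean(axis=0)                                # (N_FILTERS,)

def predict_proba(token_ids):
    logits = doc_representation(token_ids) @ out_w + out_b
    p = np.exp(logits - logits.max())
    return p / p.sum()

print(predict_proba(rng.integers(0, VOCAB, size=12)))    # toy 12-token document
```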
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-11
Analysis: Insight on the selected data
We use MRR (mean reciprocal rank) and accuracy (acc) to show the agreement between the query data points returned by our AL agent and those selected by other strategies.
We use MRR (mean reciprocal rank) and accuracy (acc) to show the agreement between the query data points returned by our AL agent and those selected by other strategies.
[]
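The MRR and accuracy figures referenced in the slide content above (and in Table 3 of the accompanying paper text) can be computed from the per-step query logs with a few lines of code. The sketch below is illustrative only: the log format and the helper name `agreement_metrics` are assumptions, not part of the released ALIL code.

```python
def agreement_metrics(steps):
    """Hypothetical log format: `steps` is a list of (picked, scores) pairs,
    one per AL query, where `picked` is the id the learned policy selected and
    `scores` maps every candidate id in that step's pool to a heuristic score
    (higher = preferred by the heuristic)."""
    reciprocal_ranks, agreements = [], 0
    for picked, scores in steps:
        # Rank the step's candidate pool under the heuristic, best first.
        ranking = sorted(scores, key=scores.get, reverse=True)
        reciprocal_ranks.append(1.0 / (ranking.index(picked) + 1))
        agreements += int(picked == ranking[0])
    mrr = sum(reciprocal_ranks) / len(reciprocal_ranks)
    acc = agreements / len(steps)
    return mrr, acc
```

A high acc means the policy's picks coincide with the heuristic's top choice at each step; MRR is softer, also crediting picks that the heuristic merely ranks near the top.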
GEM-SciDuet-train-111#paper-1297#slide-12
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems varies. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
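The underlying text classifier described in the paper content above (50 width-3 convolutional filters with ReLU over the word embeddings, averaged into a 50-dimensional document vector, then a softmax) could be rendered roughly as follows. This is an assumed PyTorch sketch, not the authors' implementation; the embedding dimension and padding are placeholders, and in the paper the embeddings are pretrained multilingual vectors kept fixed during training.

```python
import torch
import torch.nn as nn

class ConvTextClassifier(nn.Module):
    """Rough sketch of the described classifier; emb_dim and padding are assumptions."""
    def __init__(self, vocab_size, emb_dim=40, num_filters=50, width=3, num_classes=2):
        super().__init__()
        # The paper uses fixed pretrained multilingual embeddings; a plain
        # trainable embedding layer stands in here.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=width, padding=1)
        self.out = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len) of word ids
        e = self.emb(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(e)).mean(dim=2)       # averaged 50-d document vector
        return torch.log_softmax(self.out(h), dim=-1)  # class log-probabilities
```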
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-12
Analysis: Sensitivity to K (size of the unlabeled subset)
K: size of the candidate subset sampled from the original unlabelled set
K: size of the candidate subset sampled from the original unlabelled set
[]
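Alongside the sensitivity-to-K study summarised in the slide above, the analysis portion of the paper text compares schedules for the DAGGER mixing coefficient beta. The three schedules can be written as small functions; the formulas are taken directly from the text, but the function names are invented here for illustration.

```python
import math

def beta_linear(tau):           # beta_tau = max(0.5, 1 - 0.01 * tau)
    return max(0.5, 1.0 - 0.01 * tau)

def beta_exponential(tau):      # beta_tau = 0.9 ** tau
    return 0.9 ** tau

def beta_inverse_sigmoid(tau):  # beta_tau = 5 / (5 + exp(tau / 5))
    return 5.0 / (5.0 + math.exp(tau / 5.0))
```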
GEM-SciDuet-train-111#paper-1297#slide-14
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems varies. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to the most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-14
Related work
Meta learning eg learning to learn without gradient descent by gradient descent (Chen et al 2016) Stream-based AL as MDP; learning the policy with reinforcement learning (Fang et al, 2017) suffers from the credit assignment problem (Bachman et al 2017) Imitation Learning: Learning from expert demonstrations eg (Schaal
Meta learning eg learning to learn without gradient descent by gradient descent (Chen et al 2016) Stream-based AL as MDP; learning the policy with reinforcement learning (Fang et al, 2017) suffers from the credit assignment problem (Bachman et al 2017) Imitation Learning: Learning from expert demonstrations eg (Schaal
[]
GEM-SciDuet-train-111#paper-1297#slide-15
1297
Learning How to Actively Learn: A Deep Imitation Learning Approach
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant.", "Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost.", "It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained.", "For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap.", "Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator.", "Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit char-acteristics inherent to a given problem.", "The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries.", "However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries.", "This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem.", "Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores.", "In this work, we formulate learning AL strategies as an imitation learning problem.", "In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data.", "Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011) , we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states.", "We then use a deep feedforward network to learn the AL policy to associate states to actions.", "Unlike the RL approach, our method can get observations and actions directly from the expert's trajectory.", "Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies.", "We evaluate our method on text classification and named entity recognition.", "The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model 
with fewer labelling queries.", "An open source implementation of our model is available at: https://github.com/Grayming/ ALIL.", "Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g.", "a human annotator.", "The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most.", "The main challenge in AL is how to identify and select the most beneficial unlabelled data points.", "Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010) .", "However, there is no one AL heuristic which performs best for all problems.", "The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics.", "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) .", "The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden.", "This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy.", "Once the AL strategy is learned on simulations, it is then applied to real AL scenarios.", "The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", "We are interested to train a model m φ φ φ which maps an input x x x ∈ X to its label y y y ∈ Y x x x , where Y x x x is the set of labels for the input x x x and φ φ φ is the parameter vector of the underling model.", "For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g.", "in the IBO format.", "Let D = {(x x x, y y y)} be a support set of labeled data, which is randomly partitioned into labeled D lab , unlabelled D unl , and evaluation D evl datasets.", "Repeated random partitioning creates multiple instances of the AL problem.", "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint x x x t ∈ D unl t .", "As the result of this action, the followings happen: • The automatic oracle reveals the label y y y t ; • The labeled and unlabelled datasets are up-dated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φ φ φ; and • The AL algorithm receives a reward −loss(m φ φ φ , D evl ), which is the negative loss of the current trained model on the evaluation set, defined as loss(m φ φ φ , D evl ) := (x x x,y y y)∈D evl loss(m φ φ φ (x x x), y y y) where loss(y y y , y y y) is the loss incurred due to predicting y y y instead of the ground truth y y y.", "More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, P r(s s s t+1 |s s s t , a t ), R) where S is the state space, A is the set of actions, P r(s s s t+1 |s s s t , a t ) is the transition function, and R is the reward function.", "The state s s s t ∈ S at time t consists of the labeled D lab t and unlabelled D unl t datasets paired with the parameters of 
the currently trained model φ t .", "An action a t ∈ A corresponds to the selection of a query datapoint, and the reward function R(s s s t , a t , s s s t+1 ) := −loss(m φ φ φt , D evl ).", "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit.", "The optimal policy is found by maximising the following objective over the parameterised policies: E (D lab ,D unl ,D evl )∼D Eπ θ θ θ B t=1 R(s s st, at, s s st+1) (1) where π θ θ θ is the policy network parameterised by θ θ θ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a.", "an episode.", "Following (Bachman et al., 2017) , we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e.", "the model should perform well after each label query.", "Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1.", "Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman et al., 2017) e.g., using policy gradient methods.", "These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e.", "discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode.", "This exacerbates the difficulty of finding a good AL policy.", "We formulate learning for the AL policy as an imitation learning problem.", "At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert.", "The AL agent uses the sequence of states observed in an episode paired with the expert's sequence of actions to update its policy.", "This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches.", "In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1.", "Algorithmic Expert.", "At a given AL state s s s t , our algorithmic expert computes an action by evaluating the current pool of unlabeled data.", "More concretely, for each x x x ∈ D pool rnd and its correct label y y y , the underlying model m φ φ φt is re-trained to get m x x x φ φ φt , where D pool rnd ⊂ D unl t is a small subset of the current large pool of unlabeled data.", "The expert action is then computed as: arg min x x x ∈D pool rnd loss(m x x x φ φ φt (x x x), D evl ).", "(2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action.", "Searching for the optimal action would be O(|D unl | B ), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out.", "We will see in the experiments that our method efficiently learns effective AL policies.", "Policy Network.", "Our policy network is a feedforward network with two fully-connected hidden layers.", "It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score.", "The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and 
the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional rep-resentation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset.", "Imitation Learning Algorithm.", "A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert's behaviour given training data of the encountered states (input) and actions (output) performed by the expert.", "The policy network's prediction affects future inputs during the execution of the policy.", "This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions.", "We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011) , an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1).", "In round τ of DAG-GER, the learned policy networkπ τ is applied to the AL problem to collect a sequence of states which are paired with the expert actions.", "The collected pair of states and actions are aggregated to the dataset of such pairs M , collected from the previous iterations of the algorithm.", "The policy network is then re-trained on the aggregated set, resulting inπ τ +1 for the next iteration of the algorithm.", "The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network.", "To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture ofπ τ and the expert policyπ * τ , i.e.", "π τ = β τπ * + (1 − β τ )π τ where β τ ∈ [0, 1] is a mixing coefficient.", "This amounts to tossing a coin with parameter β τ in each iteration of the algorithm to decide one of these two policies for data collection.", "Re-training the Policy Network.", "To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized.", "More specifically, let M := {(s s s i , a a a i )} I i=1 be the collected states paired with their expert's prescribed actions.", "Let D pool i be the set of unlabelled datapoints in the pool within the state, and a a a i denote the datapoint selected by the expert in the set.", "Our training objective is I i=1 log P r(a a a i |D pool i ) where P r(a a a i |D pool i ) := expπ(a a a i ; s s s i ) x x x∈D pool i expπ(x x x; s s s i ) .", "The above can be interpreted as the probability of a a a i being the best action among all possible actions in the state.", "Following (Mnih et al., 2015) , we randomly sample multiple 1 mini-batches from the replay memory M, in addition to the current round's stat-action pair, in order to retrain the policy network.", "For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm.", "Transferring the Policy.", "We now apply the policy learned on the source task to AL in the target task.", "We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics.", "Algorithm 2 illustrates the policy transfer.", "The pool-based AL scenario in Algorithm 2 is 
cold-start; however, extending to incorporate initially available labeled data is straightforward.", "Experiments We conduct experiments on text classification and named entity recognition (NER).", "The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language.", "We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly.", "Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T , budget B, sample size K, the coin parameter β Output: The learned policy 1: M ← ∅ the aggregated dataset 2: initialiseπ1 with a random policy 3: for τ =1, .", ".", ".", ", T do 4: D lab , D unl , D evl ← dataPartition(D) 5: φ φ φ1 ← trainModel(D lab ) 6: c ← coinToss(β) 7: for t ∈ 1, .", ".", ".", ", B do 8: D pool rnd ← sampleUniform(D unl , K) 9: s s st ← (D lab , D pool rnd , φ φ φt) 10: a a at ← arg min x x x ∈D pool rnd loss(m x x x φ φ φ t , D evl ) 11: if c is head then the expert 12: x x xt ← a a at 13: else the policy 14: x φ ← retrainModel(φ φ φ, D lab ) 10: end for 11: return D lab and φ φ φ • Diversity sampling: The query datapoint is arg minx x x x x x ∈D lab Jaccard(x x x, x x x ), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure.", "x xt ← arg max x x x ∈D pool rndπ τ (x x x ; s s st) 15: end if 16: D lab ← D lab + {(x x xt, y y yt)} 17: D unl ← D unl − {x x xt} 18: M ← M + {(s s st, a a at)} 19: φ φ φt+1 ← retrainModel(φ φ φt, D • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxx x x − y p(y|x x x, D lab ) log p(y|x x x, D lab ) where p(y y y|x x x, D lab ) comes from the underlying model.", "We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015) , which not only considers uncertainty but also looks whether the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents.", "For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxx x x − |x x x| i=1 y i p(yi|x x x, D lab ) log p(yi|x x x, D lab ) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008) .", "• PAL: A reinforcement learning based approach (Fang et al., 2017) , which makes use a deep Q-network to make the selection decision for stream-based active learning.", "Text Classification Datasets and Setup.", "The first task is sentiment classification, in which product reviews express either positive or negative sentiment.", "The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics.", "The second task is Authorship Profiling, in which we aim to predict the gender of the text author.", "The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017) , which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt).", "For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics.", "The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, 
including English, Spanish and Portuguese.", "We fix these word embeddings during training of both the policy and the underlying classification model.", "For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning.", "We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient β τ = 0.5.", "For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set.", "We show the test accuracy w.r.t.", "the number of labelled documents selected in the AL process.", "As the underlying model m φ φ φ , we use a fast and efficient text classifier based on convolutional neural networks.", "More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document x x x, where the width of the filters is 3.", "The filter outputs are averaged to produce a 50-dimensional document representation h h h(x x x), which is then fed into a softmax to predict the class.", "Results.", "Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios 2 .", "Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks.", "ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints.", "Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification.", "We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process.", "PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics.", "Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks.", "We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy.", "We further investigate combining the transfer of the policy network with the transfer of the underlying classifier.", "That is, we first train a classi- fier on all of the annotated data from the source domain/language.", "Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings.", "We start the AL process starting from the transferred classifier, referred to as the warmstart AL.", "We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios.", "The results are shown in Table 2 .", "We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2.", "As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start.", "The difference between the results are statistically significant, with a p-value of .001, according to McNemar test 3 (Dietterich, 1998) .", "musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2 : Classifiers performance under three different transfer settings.", "Named Entity Recognition Data and setup We use 
NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl).", "The original annotation is based on IOB1, which we convert to the IO labelling scheme.", "Following Fang et al.", "(2017) , we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy.", "The underlying model m φ φ φ is a conditional random field (CRF) treating NER as a sequence labelling task.", "The prediction is made using the Viterbi algorithm.", "In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb.", "During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode.", "We run N = 100 episodes with the budget B = 200, and set the sample size k = 5.", "When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa.", "Representing state-action.", "The input to the policy network includes the representation of the candidate sentence using the sum of its words' embeddings h h h(x x x), the representation of the labelling marginals using the label-level convolutional network cnn lab (E m φ φ φ (y y y|x x x) [y y y]) (Fang et al., 2017) , the representation of sentences in the labeled data diction |x x x| max y y y m φ φ φ (y y y|x x x), where |x x x| denotes the length of the sentence x x x.", "For the word embeddings, we use off-the-shelf CCA trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.", "Results.", "Fig.", "3 shows the results for three target languages.", "In addition to the strong heuristicbased methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) , in both bilingual (bi) and multilingual (mul) transfer settings.", "Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including Uncertainty Sampling based on TTE.", "This is expected as the uncertainty sampling largely relies on a high quality underlying model, and diversity sampling ignores the labelling information.", "In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de).", "In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.", "Analysis Insight on the selected data.", "We compare the data selected by ALIL to other methods.", "This will confirm that ALIL learns policies which are suitable for the problem at hand, without resorting to a fixed engineered heuristics.", "For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by the uncertainty and diversity sampling.", "Furthermore, we measure the fraction of times the decisions made by the ALIL policy agrees with those which would have been made by the heuristic methods, which is measured by the accuracy (acc).", "Table 3 report these measures.", "As we can see, for sentiment 
classification since uncertainty and diversity sampling perform badly, ALIL has a big disagreement with them on the selected data points.", "While for gender classification on Portuguese and NER on Spanish, ALIL shows much more agreement with other three heuristics.", "Lastly, we compare chosen queries by ALIL to those by PAL, to investigate the extent of the agreement between these two methods.", "This is simply measure by the fraction of identical query data points among the total number of queries (i.e.", "accuracy).", "Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams.", "The expected accuracy numbers are reported in Table 3 .", "As seen, ALIL has higher overlap with PAL than the heuristic-based methods, in terms of the selected queries.", "Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of the pool of unlabelled data with size K, in order to make the policy training efficient.", "Note that, in policy training, setting K to one and the size of the unlabelled data pool correspond to stream-based and pool-based AL scenarios, respectively.", "By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results.", "A larger candidate set may correspond to a better learned policy, needed to be traded off with the training time growing linearly with K. Interestingly, even small candidate sets lead to strong AL policies as increasing K beyond 10 does not change the performance significantly.", "Dynamically changing β.", "In our algorithm, β plays an important role as it trades off exploration versus exploitation.", "In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1).", "We investigate schedules which tend to put more emphasis on exploration and exploitation towards the beginning and end of data collection, respectively.", "We investigate the following schedules: (i) linear β τ = max(0.5, 1 − 0.01τ ), (ii) exponential β τ = 0.9 τ , and (iii) and inverse sigmoid β τ = 5 5+exp(τ /5) , as a function of iterations.", "Fig.", "5 shows the comparisons of these schedules.", "The learned policy seems to perform competitively with either a fixed or an exponential schedule.", "We have also investigated tossing the coin in each step within the trajectory roll out, but found that it is more effective to have it before the full trajectory roll out (as currently done in Algorithm 1).", "Related Work Traditional active learning algorithms rely on various heuristics (Settles, 2010) , such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011 ), query-by-committee (Gilad-Bachrach et al., 2006 , and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015) .", "Apart from these, different heuristics can be combined, thus creating integrated strategy which consider one or more heuristics at the same time.", "Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017) .", "More recently, deep reinforcement learning is used as the framework for learning active learning algorithms, where the active learning cycle is 
considered as a decision process.", "(Woodward and Finn, 2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions.", "(Bachman et al., 2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function.", "(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data.", "The learned policy can then be transferred between languages or domains.", "Imitation learning (IL) refers to an agent's acquisition of skills or behaviours by observing an expert's trajectory in a given task.", "It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time.", "Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009) , AggreVaTe (Ross and Bagnell, 2014) , DaD (Venkatraman et al., 2015) , LOLS , DeeplyAggre-VaTe (Sun et al., 2017) .", "Our work is closely related to Dagger (Ross et al., 2011) , which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory.", "Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning.", "We formalize pool-based active learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool.", "Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned.", "We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "5", "6" ], "paper_header_content": [ "Introduction", "Pool-based AL as a Decision Process", "Deep Imitation Learning to Train the AL Policy", "Experiments", "Text Classification", "Named Entity Recognition", "Analysis", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-111#paper-1297#slide-15
Conclusion
Use heuristics or learn an agent for the AL query strategy. Agent-based AL as a Markov Decision Process. Formulate learning AL strategies/policies as an imitation learning problem. Our imitation learning approach performs better than previous heuristic-based and RL-based methods.
Use heuristics or learn an agent for the AL query strategy. Agent-based AL as a Markov Decision Process. Formulate learning AL strategies/policies as an imitation learning problem. Our imitation learning approach performs better than previous heuristic-based and RL-based methods.
[]
GEM-SciDuet-train-112#paper-1298#slide-0
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
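The rhyme model's objective quoted in the paper text above reduces to a simple margin over cosine similarities, L_rm = max(0, delta - top(Q, 1) + top(Q, 2)). The NumPy sketch below illustrates that computation together with the cosine-threshold rhyme test used at evaluation time. It is an illustration only, not the authors' released code; the function names, the placeholder margin value, and the assumption that word encodings arrive as plain vectors (in the model they come from a character-level LSTM) are all ours.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two word encodings.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    def rhyme_margin_loss(target_enc, reference_encs, delta=0.5):
        # Q holds the similarities between the target word and each reference word;
        # the loss pushes the best pair above the second best by at least delta.
        # delta is a margin hyper-parameter; 0.5 is an arbitrary placeholder.
        sims = sorted((cosine(target_enc, r) for r in reference_encs), reverse=True)
        return max(0.0, delta - sims[0] + sims[1])

    def rhymes(enc_a, enc_b, threshold=0.8):
        # At evaluation time a pair is treated as rhyming when the cosine
        # similarity of the two encodings reaches the 0.8 threshold.
        return cosine(enc_a, enc_b) >= threshold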
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-0
Creativity
- Can machine learning models be creative? - Can these models compose novel and interesting narrative? - We focus on sonnet generation in this work.
- Can machine learning models be creative? - Can these models compose novel and interesting narrative? - We focus on sonnet generation in this work.
[]
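The generation procedure in the paper text scores 10 candidate lines with the pentameter loss and samples one of them through a low-temperature softmax. A minimal sketch of that selection step follows; it assumes the per-line losses are already computed, takes a lower loss to mean a better line (hence the negation inside the softmax, which is our reading of the description), and uses placeholder names throughout.

    import numpy as np

    def sample_candidate_line(candidate_lines, pentameter_losses, temperature=0.1, seed=None):
        # Softmax over the negated losses: with temperature 0.1 the choice is
        # close to simply taking the line with the lowest pentameter loss.
        rng = np.random.default_rng(seed)
        losses = np.asarray(pentameter_losses, dtype=float)
        logits = -losses / temperature
        probs = np.exp(logits - logits.max())
        probs = probs / probs.sum()
        index = rng.choice(len(candidate_lines), p=probs)
        return candidate_lines[index]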
GEM-SciDuet-train-112#paper-1298#slide-1
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-1
Sonnets
Shall I compare thee to a summer's day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: - A distinguishing feature of poetry is its aesthetic forms, e.g. rhyme and - Rhyme: {day, May}; {temperate, date}.
Shall I compare thee to a summer's day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: - A distinguishing feature of poetry is its aesthetic forms, e.g. rhyme and - Rhyme: {day, May}; {temperate, date}.
[]
GEM-SciDuet-train-112#paper-1298#slide-2
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-2
Modelling Approach
- We treat the task of poem generation as a constrained language modelling task. - Given a rhyming scheme, each line follows a canonical meter and has a fixed number of stresses. - We focus specifically on sonnets, as they are a popular type of poetry (sufficient data) and have a regular rhyming scheme (ABAB, AABB or ABBA) and stress pattern (iambic pentameter). - We train an unsupervised model of language, rhyme and meter on a corpus of sonnets.
- We treat the task of poem generation as a constrained language modelling task. - Given a rhyming scheme, each line follows a canonical meter and has a fixed number of stresses. - We focus specifically on sonnets, as they are a popular type of poetry (sufficient data) and have a regular rhyming scheme (ABAB, AABB or ABBA) and stress pattern (iambic pentameter). - We train an unsupervised model of language, rhyme and meter on a corpus of sonnets.
[]
GEM-SciDuet-train-112#paper-1298#slide-3
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend only on rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale from 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from human-written poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that a vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-3
Sonnet Corpus
I We first create a generic poetry document collection using the GutenTag tool, based on its inbuilt poetry classifier. I We then extract word and character statistics from Shakespeare's 154 sonnets. I We use the statistics to filter out all non-sonnet poems, yielding our sonnet corpus.
I We first create a generic poetry document collection using the GutenTag tool, based on its inbuilt poetry classifier. I We then extract word and character statistics from Shakespeare's 154 sonnets. I We use the statistics to filter out all non-sonnet poems, yielding our sonnet corpus.
[]
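Note on the component evaluation described in the record above: the rhyme model is scored by encoding each word of a pair, taking cosine similarity, predicting "rhyme" when the score is at or above 0.8, and computing F1 against CMU pronunciation-dictionary labels. The sketch below is an illustrative reconstruction only, not the authors' released code; `encode_word` is a hypothetical stand-in for the paper's character-level LSTM encoder.

```python
# Illustrative sketch: scoring rhyme predictions by cosine similarity.
# `encode_word` is assumed to map a word to a fixed-size numpy vector
# (a stand-in for the model's character-level encoder).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def rhyme_f1(pairs, gold_labels, encode_word, threshold=0.8):
    """pairs: list of (word1, word2); gold_labels: booleans taken from a
    pronunciation dictionary. Returns (precision, recall, f1)."""
    preds = [cosine(encode_word(w1), encode_word(w2)) >= threshold
             for w1, w2 in pairs]
    tp = sum(1 for p, g in zip(preds, gold_labels) if p and g)
    fp = sum(1 for p, g in zip(preds, gold_labels) if p and not g)
    fn = sum(1 for p, g in zip(preds, gold_labels) if not p and g)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```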
GEM-SciDuet-train-112#paper-1298#slide-4
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend only on rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale from 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from human-written poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that a vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-4
Model Architecture
(a) Language model (b) Pentameter model (c) Rhyme model
(a) Language model (b) Pentameter model (c) Rhyme model
[]
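The rhyme model described in the record above is trained with a margin-based loss, L_rm = max(0, δ − top(Q, 1) + top(Q, 2)), where Q holds the cosine similarities between a target word encoding and its reference word encodings. A minimal sketch of that single equation follows; the function and argument names, and the δ = 0.5 default, are assumptions made for illustration and are not the authors' implementation.

```python
# Minimal sketch of the margin-based rhyme loss (assumed names and values).
import numpy as np

def rhyme_margin_loss(target_vec, reference_vecs, delta=0.5):
    """L_rm = max(0, delta - top(Q, 1) + top(Q, 2)), where Q holds cosine
    similarities between the target encoding and each reference encoding."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    sims = sorted((cos(target_vec, r) for r in reference_vecs), reverse=True)
    return max(0.0, delta - sims[0] + sims[1])

# Toy usage: one target word paired with three reference words, as in one
# training example per quatrain; the loss is zero once the best similarity
# exceeds the second best by at least `delta`.
rng = np.random.default_rng(0)
refs = [rng.standard_normal(64) for _ in range(3)]
print(rhyme_margin_loss(rng.standard_normal(64), refs))
```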
GEM-SciDuet-train-112#paper-1298#slide-5
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
"Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets is included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (the optimal configuration is given in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) trained on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011) for the language model, and Adam (Kingma and Ba, 2014) for the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train for 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: • LM: vanilla LSTM language model; • LM*: LSTM language model that incorporates character encodings (Equation (2)); • LM**: LSTM language model that incorporates both character encodings and preceding context; • LM**-C: similar to LM**, but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014); • LM**+PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Table 2: Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\") and rhyme model (\"Rhyme F1\"); each number is an average across 10 runs.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM**.", "The inferior performance of LM**-C compared to LM** demonstrates that our approach of processing context with recurrent networks and selective encoding is more effective than convolutional networks.", "The full model LM**+PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.",
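For reference, the perplexity used in this component evaluation is simply the exponential of the mean per-token negative log-likelihood; a minimal sketch (values below are made up for illustration):

```python
import math

def perplexity(neg_log_likelihoods):
    """neg_log_likelihoods: one -log P(word | context) value per test token (natural log)."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# e.g. three tokens with NLLs 3.2, 4.1 and 2.7 give a perplexity of about 28
print(round(perplexity([3.2, 4.1, 2.7]), 1))
```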
"Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "Words that have no coverage or have non-alternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g. 1st time step = S−) to the word if any of its characters receives an attention weight ≥ 0.20.", "For the baseline (Stress-BL) we use the pre-trained weighted finite-state transducer (WFST) provided by Hopkins and Kiela (2017).", "The WFST maps a sequence of words to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2.", "LM**+PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the first quatrain of Shakespeare's Sonnet 18 in Figure 3.", "The y-axis represents the ten stresses of the iambic pentameter, and the x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score using F1.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing whether their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score ≥ 0.8, it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data and use the learnt θ to decide whether two words rhyme.", "(Footnote 24) We use the original authors' implementation: https://github.com/jvamvas/rhymediscovery.", "(Footnote 25) A word pair is judged to rhyme if $\theta_{w_1,w_2}$ ≥ 0.02; the threshold (0.02) is selected based on development performance.", "Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e. it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "(Footnote 26) Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3.", "Table 3: Rhyming errors produced by the model; \"Cos\" denotes the system-predicted cosine similarity for the word pair.", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary, which have low cosine similarity; on the right are non-rhyming pairs, which have high cosine similarity.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any word-ending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.",
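The string-matching baseline (Rhyme-BL) described above is simple enough to sketch directly. The implementation below is our reading of that description (last vowel plus all following consonants), not the authors' code, and treats only a, e, i, o, u as vowels.

```python
VOWELS = set("aeiou")

def rhyme_bl_key(word):
    """Return the substring from the last vowel to the end of the word,
    used as a proxy for the word's final syllable."""
    word = word.lower()
    for i in range(len(word) - 1, -1, -1):
        if word[i] in VOWELS:
            return word[i:]
    return word  # no vowel found; fall back to the whole word

def rhyme_bl(word1, word2):
    """Predict that two words rhyme iff their extracted endings match."""
    return rhyme_bl_key(word1) == rhyme_bl_key(word2)

print(rhyme_bl("day", "may"))   # True  ("ay" == "ay")
print(rhyme_bl("day", "date"))  # False ("ay" != "e")
```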
"Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017), we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicating better results for the model).", "We generate 50 quatrains each for LM, LM** and LM**+PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automatically, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4.", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM** to LM**+PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM**+RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend only on rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e. the amount of emotion the poem evokes).", "All are rated on an ordinal scale from 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM**, LM**+PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5.", "Table 5: Expert mean and standard deviation ratings on several aspects of the generated quatrains.", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997).", "Despite excellent form, the output of our model can easily be distinguished from human-written poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowd workers and a literature expert.", "Our research reveals that a vanilla LSTM language model captures meter implicitly, and that our proposed rhyme model performs exceptionally well.", "Machine-generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-5
Language Model LM
LM is a variant of an LSTM encoder-decoder model with attention. The encoder encodes the preceding context, i.e. all sonnet lines before the current line. The decoder decodes one word at a time for the current line, while attending to the preceding context. The preceding context is filtered by a selective mechanism. Character encodings are incorporated for decoder input words. Input and output word embeddings are tied.
LM is a variant of an LSTM encoder-decoder model with attention. The encoder encodes the preceding context, i.e. all sonnet lines before the current line. The decoder decodes one word at a time for the current line, while attending to the preceding context. The preceding context is filtered by a selective mechanism. Character encodings are incorporated for decoder input words. Input and output word embeddings are tied.
[]
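The last bullet above (tied input and output word embeddings) corresponds to projecting the input embedding matrix into the output softmax weights, i.e. W_out = tanh(W_wrd W_prj). The PyTorch-style sketch below is one possible realisation under our own module and dimension names, not the paper's code.

```python
import torch
import torch.nn as nn

class TiedOutput(nn.Module):
    """Output layer whose weights are a learned projection of the input
    word-embedding matrix, so the softmax layer adds only d_emb * d_hidden
    parameters instead of |V| * d_hidden."""
    def __init__(self, embedding: nn.Embedding, d_hidden: int):
        super().__init__()
        vocab_size, d_emb = embedding.weight.shape
        self.embedding = embedding  # W_wrd, shared with the input side
        self.proj = nn.Parameter(torch.randn(d_emb, d_hidden) * 0.01)  # W_prj
        self.bias = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, hidden):  # hidden: (batch, d_hidden)
        w_out = torch.tanh(self.embedding.weight @ self.proj)  # (|V|, d_hidden)
        return hidden @ w_out.t() + self.bias  # logits over the vocabulary

emb = nn.Embedding(10000, 100)
out = TiedOutput(emb, d_hidden=200)
logits = out(torch.zeros(4, 200))  # shape (4, 10000)
```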
GEM-SciDuet-train-112#paper-1298#slide-6
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentameter and rhyme models to impose the meter and rhyme constraints (see the Generation Procedure section).", "(Footnote 5) The following constraints were used to select sonnets: 8.0 ≤ mean words per line ≤ 11.5; 40 ≤ mean characters per line ≤ 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and a minimum letter ratio per line of 0.59.", "(Footnote 6) The sonnets in our collection are largely in Modern English, with possibly a small number of poems in Early Modern English.", "The potentially mixed-dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "(Footnote 7) There are a number of variations in addition to the standard pattern (Greene et al., 2010), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multi-task learning setting.", "Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015), where the encoder encodes the preceding context (i.e. all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words $z_i$ using embedding matrix $\mathbf{W}_{wrd}$ to yield $\mathbf{w}_i$, and feed them to a biLSTM to produce a sequence of encoder hidden states $\mathbf{h}_i = [\overrightarrow{\mathbf{h}}_i; \overleftarrow{\mathbf{h}}_i]$.", "Next we apply a selective mechanism (Zhou et al., 2017) to each $\mathbf{h}_i$.", "Defining the representation of the whole context as $\bar{\mathbf{h}} = [\overrightarrow{\mathbf{h}}_C; \overleftarrow{\mathbf{h}}_1]$ (where C is the number of words in the context), the selective mechanism filters the hidden states using $\bar{\mathbf{h}}$ as follows: $\mathbf{h}'_i = \mathbf{h}_i \odot \sigma(\mathbf{W}_a \mathbf{h}_i + \mathbf{U}_a \bar{\mathbf{h}} + \mathbf{b}_a)$, where $\odot$ denotes the element-wise product.", "Hereinafter $\mathbf{W}$, $\mathbf{U}$ and $\mathbf{b}$ are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words $x_t$ in the current line using the encoder-shared embedding matrix ($\mathbf{W}_{wrd}$) to produce $\mathbf{w}_t$.", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix $\mathbf{W}_{chr}$ to produce $\mathbf{c}_{t,i}$, and feed them to a bidirectional (character-level) LSTM: $\overrightarrow{\mathbf{u}}_{t,i} = \mathrm{LSTM}_f(\mathbf{c}_{t,i}, \overrightarrow{\mathbf{u}}_{t,i-1})$ and $\overleftarrow{\mathbf{u}}_{t,i} = \mathrm{LSTM}_b(\mathbf{c}_{t,i}, \overleftarrow{\mathbf{u}}_{t,i+1})$ (Equation (1)).", "We represent the character encoding of a word by concatenating the last forward and first backward hidden states, $\mathbf{u}_t = [\overrightarrow{\mathbf{u}}_{t,L}; \overleftarrow{\mathbf{u}}_{t,1}]$, where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "(Footnote 10) We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model; this is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of form.", "The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding $\mathbf{w}_t$ and character encoding $\mathbf{u}_t$, we concatenate them and feed them to a unidirectional (word-level) LSTM to produce the decoding states: $\mathbf{s}_t = \mathrm{LSTM}([\mathbf{w}_t; \mathbf{u}_t], \mathbf{s}_{t-1})$ (Equation (2)).", "We attend $\mathbf{s}_t$ to the encoder hidden states $\mathbf{h}'_i$ and compute their weighted sum as follows: $e^t_i = \mathbf{v}_b^\top \tanh(\mathbf{W}_b \mathbf{h}'_i + \mathbf{U}_b \mathbf{s}_t + \mathbf{b}_b)$, $\mathbf{a}^t = \mathrm{softmax}(\mathbf{e}^t)$, $\mathbf{h}^*_t = \sum_i a^t_i \mathbf{h}'_i$.", "To combine $\mathbf{s}_t$ and $\mathbf{h}^*_t$, we use a gating unit similar to a GRU (Chung et al., 2014): $\mathbf{s}'_t = \mathrm{GRU}(\mathbf{s}_t, \mathbf{h}^*_t)$.", "We then feed $\mathbf{s}'_t$ to a linear layer with softmax activation to produce the vocabulary distribution (i.e. $\mathrm{softmax}(\mathbf{W}_{out}\mathbf{s}'_t + \mathbf{b}_{out})$), and optimise the model with the standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014), and apply it to the encoder/decoder LSTM outputs and the word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between the output matrix $\mathbf{W}_{out}$ and the embedding matrix $\mathbf{W}_{wrd}$ via a projection matrix $\mathbf{W}_{prj}$ (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017): $\mathbf{W}_{out} = \tanh(\mathbf{W}_{wrd}\mathbf{W}_{prj})$.",
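A minimal sketch of the selective filtering of encoder states described above, i.e. gating each state h_i with a sigmoid computed from h_i and a summary vector h_bar of the whole context. The module and the crude construction of h_bar are our own illustrative reconstruction, not the authors' code.

```python
import torch
import torch.nn as nn

class SelectiveGate(nn.Module):
    """Gate each encoder state h_i with sigma(W_a h_i + U_a h_bar + b_a)."""
    def __init__(self, d_hidden: int):
        super().__init__()
        self.w = nn.Linear(d_hidden, d_hidden)              # W_a and b_a
        self.u = nn.Linear(d_hidden, d_hidden, bias=False)  # U_a

    def forward(self, h, h_bar):
        # h: (batch, seq_len, d_hidden); h_bar: (batch, d_hidden)
        gate = torch.sigmoid(self.w(h) + self.u(h_bar).unsqueeze(1))
        return h * gate  # element-wise filtering of less useful elements

h = torch.randn(2, 7, 300)  # biLSTM outputs (150 forward + 150 backward dims)
h_bar = torch.cat([h[:, -1, :150], h[:, 0, 150:]], dim=-1)  # crude [fwd_last; bwd_first] stand-in
filtered = SelectiveGate(300)(h, h_bar)  # same shape as h
```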
"Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "(Footnote 11) That is, given the input line Shall I compare thee to a summer's day?, the model is required to output S− S+ S− S+ S− S+ S− S+ S− S+, based on the syllable boundaries from Section 3.", "As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix $\mathbf{W}_{chr}$ and feed them to the shared bidirectional character-level LSTM (Equation (1)) to produce the character encodings for the sentence: $\mathbf{u}_j = [\overrightarrow{\mathbf{u}}_j; \overleftarrow{\mathbf{u}}_j]$.", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: $\mathbf{g}_t = \mathrm{LSTM}(\mathbf{u}^*_{t-1}, \mathbf{g}_{t-1})$, where $\mathbf{u}^*_{t-1}$ is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, and $\mathbf{g}_t$ is fed to a linear layer with softmax activation to compute the stress distribution.", "(Footnote 12) The initial input ($\mathbf{u}^*_0$) and state ($\mathbf{g}_0$) are a trainable vector and a zero vector, respectively.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute $\mu_t$, the mean position of focus: $\hat{\mu}_t = \sigma(\mathbf{v}_c^\top \tanh(\mathbf{W}_c \mathbf{g}_t + \mathbf{U}_c \mu_{t-1} + \mathbf{b}_c))$ and $\mu_t = M \times \min(\hat{\mu}_t + \mu_{t-1}, 1.0)$, where M is the number of characters in the sonnet line.", "Given $\mu_t$, we can compute the (unnormalised) probability for each character position: $p^t_j = \exp\left(\frac{-(j - \mu_t)^2}{2T^2}\right)$, where the standard deviation T is a hyper-parameter.", "We incorporate this position information when computing $\mathbf{u}^*_t$: $\mathbf{u}'_j = p^t_j \mathbf{u}_j$, $d^t_j = \mathbf{v}_d^\top \tanh(\mathbf{W}_d \mathbf{u}'_j + \mathbf{U}_d \mathbf{g}_t + \mathbf{b}_d)$, $\mathbf{f}^t = \mathrm{softmax}(\mathbf{d}^t + \log \mathbf{p}^t)$, $\mathbf{u}^*_t = \sum_j f^t_j \mathbf{u}'_j$.", "(Footnote 13) Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points: when computing (1) $d^t_j$, by weighting the character encodings; and (2) $\mathbf{f}^t$, by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.",
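The monotonically advancing position attention can be sketched schematically as follows. This is our NumPy reading of the equations above (Gaussian position prior with standard deviation T, space positions masked out); the content scores are passed in as a stand-in for the learned term, and whether the focus position is tracked as a fraction of the line or in character units is our assumption.

```python
import numpy as np

def position_attention_step(char_enc, scores, mu_prev_frac, mu_step, T=2.0, space_mask=None):
    """One step of the monotonic position attention over a line's characters.
    char_enc: (M, d) character encodings u_j; scores: (M,) content scores
    standing in for v_d^T tanh(W_d u'_j + U_d g_t); mu_prev_frac / mu_step:
    previous focus position and predicted increment, both as fractions of the line."""
    M = char_enc.shape[0]
    mu_frac = min(mu_step + mu_prev_frac, 1.0)          # monotone: never move backwards
    mu = M * mu_frac
    pos = np.arange(M)
    log_p = -((pos - mu) ** 2) / (2.0 * T ** 2)         # Gaussian position prior, in log space
    logits = scores + log_p
    if space_mask is not None:
        logits = np.where(space_mask, -np.inf, logits)  # spaces always get zero weight
    f = np.exp(logits - logits.max())
    f /= f.sum()
    u_star = f @ (np.exp(log_p)[:, None] * char_enc)    # sum_j f_j * (p_j * u_j)
    return f, u_star, mu_frac

# toy call: a 40-character line with 128-dim encodings and random content scores
f, u_star, mu = position_attention_step(np.random.randn(40, 128), np.random.randn(40), 0.0, 0.1)
```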
"In a typical encoder-decoder model, the attended encoder vector $\mathbf{u}^*_t$ would be combined with the decoder state $\mathbf{g}_t$ to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model, as it would quickly learn that it can simply ignore $\mathbf{u}^*_t$ and predict the alternating stresses from $\mathbf{g}_t$ alone.", "For this reason we use only $\mathbf{u}^*_t$ to compute the stress probability: $P(S^-) = \sigma(\mathbf{W}_e \mathbf{u}^*_t + \mathbf{b}_e)$, which gives the loss $L_{ent} = \sum_t -\log P(S_t)$ for the whole sequence, where $S_t$ is the target stress at time step t.", "We find the decoder still has a tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: a repeat loss and a coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017), and is computed as follows: $L_{rep} = \sum_t \sum_j \min\left(f^t_j, \sum_{t'=1}^{t-1} f^{t'}_j\right)$.", "By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not ensure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: $L_{cov} = \sum_{j \in V} \mathrm{ReLU}\left(C - \sum_{t=1}^{10} f^t_j\right)$, where V is the set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids the penalty.", "To summarise, the pentameter model is optimised with the following loss: $L_{pm} = L_{ent} + \alpha L_{rep} + \beta L_{cov}$ (Equation (3)), where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme; it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e. $\{(x_t, x_r), (x_t, x_{r+1}), (x_t, x_{r+2})\}$, where $x_t$ is the target word and the $x_{r+i}$ are the reference words.", "(Footnote 14) E.g. for the quatrain in Figure 1, a training example is {(day, temperate), (day, may), (day, date)}.", "We assume that among these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniformly at random from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix $\mathbf{W}_{chr}$ and feed them to an LSTM to produce the character states $\mathbf{u}_j$.", "(Footnote 15) The character embeddings are the only shared parameters in this model.", "Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state $\mathbf{u} = \mathbf{u}_L$, where L is the character length of the word.", "Given the character encodings, we use a margin-based loss to optimise the model: $Q = \{\cos(\mathbf{u}_t, \mathbf{u}_r), \cos(\mathbf{u}_t, \mathbf{u}_{r+1}), ...\}$ and $L_{rm} = \max(0, \delta - \mathrm{top}(Q, 1) + \mathrm{top}(Q, 2))$, where $\mathrm{top}(Q, k)$ returns the k-th largest element in Q, and δ is a margin hyper-parameter." ] }
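The margin loss above is easy to state in code. The sketch below assumes precomputed word encodings (the output of the character-level LSTM) and uses a margin delta of our own choosing; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def rhyme_margin_loss(target_enc, reference_encs, delta=0.5):
    """target_enc: (d,) encoding of the target line-ending word;
    reference_encs: (n, d) encodings of the other line-ending words plus any
    sampled negatives. Hinge loss pushing the best cosine score above the
    second best by at least delta."""
    sims = F.cosine_similarity(target_enc.unsqueeze(0), reference_encs, dim=-1)
    top2 = torch.topk(sims, k=2).values          # best and second-best cosine scores
    return torch.clamp(delta - top2[0] + top2[1], min=0.0)

loss = rhyme_margin_loss(torch.randn(64), torch.randn(5, 64))
```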
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-6
Pentameter Model PM
PM is designed to capture the alternating stress pattern. Given a sonnet line, PM learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially, e.g. for the line Shall I compare thee to a summer's day?. PM is fashioned as an encoder-decoder model. The encoder encodes the characters of a sonnet line. The decoder attends to the character encodings to predict the stresses. Decoder states are not used in prediction. The attention network focuses on characters whose position is monotonically increasing. In addition to the cross-entropy loss, PM is regularised further with two auxiliary objectives that penalise repetition and low coverage.
PM is designed to capture the alternating stress pattern. Given a sonnet line, PM learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially, e.g. for the line Shall I compare thee to a summer's day?. PM is fashioned as an encoder-decoder model. The encoder encodes the characters of a sonnet line. The decoder attends to the character encodings to predict the stresses. Decoder states are not used in prediction. The attention network focuses on characters whose position is monotonically increasing. In addition to the cross-entropy loss, PM is regularised further with two auxiliary objectives that penalise repetition and low coverage.
[]
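The two auxiliary objectives mentioned in the last bullet of the pentameter slide (the repeat and coverage losses) can be sketched directly from their definitions in the paper text above. The attention matrix shape (10 stress steps by M characters) follows the description; the coverage threshold C = 0.3 is an assumed value, not taken from the paper.

```python
import torch

def repeat_loss(attention):
    """attention: (10, M) weights over characters for the 10 stress steps.
    Penalise attending to characters that already received attention."""
    history = torch.zeros_like(attention[0])
    total = attention.new_zeros(())
    for f_t in attention:
        total = total + torch.minimum(f_t, history).sum()
        history = history + f_t
    return total

def coverage_loss(attention, vowel_positions, C=0.3):
    """Penalise vowels whose total attention over the 10 steps falls below C."""
    totals = attention.sum(dim=0)[vowel_positions]
    return torch.relu(C - totals).sum()

att = torch.softmax(torch.randn(10, 42), dim=-1)
print(repeat_loss(att).item(), coverage_loss(att, [1, 5, 9, 14]).item())
```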
GEM-SciDuet-train-112#paper-1298#slide-7
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
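Illustrative note on the rhyme loss quoted in the paper content above: the margin objective L_rm = max(0, delta - top(Q, 1) + top(Q, 2)), where Q is the set of cosine similarities between the target word encoding and its reference encodings, can be written compactly in PyTorch. The sketch below is not the authors' released code; the function name, tensor shapes and the margin value are assumptions.

import torch
import torch.nn.functional as F

def rhyme_margin_loss(u_target, u_refs, delta=0.5):
    # u_target: (hidden,) character-LSTM encoding of the target line-ending word
    # u_refs:   (n_refs, hidden) encodings of the reference words (the other
    #           quatrain-ending words plus any sampled negative words)
    # delta is the margin hyper-parameter; 0.5 is only a placeholder value.
    sims = F.cosine_similarity(u_target.unsqueeze(0), u_refs, dim=1)  # the set Q
    top2 = torch.topk(sims, k=2).values  # top(Q, 1) and top(Q, 2)
    return torch.clamp(delta - top2[0] + top2[1], min=0.0)

At generation time the same cosine scores are thresholded (0.9 for rhyming pairs and 0.7 for non-rhyming pairs in the paper) to decide whether a sampled line-ending word is kept or resampled.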
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-7
Rhyme Model
I We learn rhyme in an unsupervised fashion for 2 reasons: I Extendable to other languages that don't have pronunciation dictionaries; I The language of our sonnets is not Modern English, so contemporary pronunciation dictionaries may not be accurate. I Assumption: rhyme exists in a quatrain. I Feed sentence-ending word pairs as input to the rhyme model and train it to separate rhyming word pairs from non-rhyming ones. Shall I compare thee to a summer's day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: I top(Q, k) returns the k-th largest element in Q. I Intuitively the model is trained to learn a sufficient margin that separates the best pair from all others, with the second-best being used to quantify all others.
I We learn rhyme in an unsupervised fashion for 2 reasons: I Extendable to other languages that don't have pronunciation dictionaries; I The language of our sonnets is not Modern English, so contemporary pronunciation dictionaries may not be accurate. I Assumption: rhyme exists in a quatrain. I Feed sentence-ending word pairs as input to the rhyme model and train it to separate rhyming word pairs from non-rhyming ones. Shall I compare thee to a summer's day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: I top(Q, k) returns the k-th largest element in Q. I Intuitively the model is trained to learn a sufficient margin that separates the best pair from all others, with the second-best being used to quantify all others.
[]
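Illustrative note on how the rhyme model's training pairs described in this record can be built: one quatrain-ending word is paired with the other three ending words, plus a few negative words sampled from the vocabulary. A plain-Python sketch follows; the function and argument names are assumptions rather than the released implementation.

import random

def make_rhyme_example(ending_words, vocab, target_idx=0, n_negative=2):
    # ending_words: the four line-final words of a quatrain, e.g.
    # ["day", "temperate", "may", "date"] for the Sonnet 18 quatrain quoted above.
    # vocab: list of vocabulary words used for negative sampling.
    # n_negative: number of extra non-rhyming words (a hyper-parameter).
    target = ending_words[target_idx]
    references = [w for i, w in enumerate(ending_words) if i != target_idx]
    negatives = random.sample(vocab, n_negative)  # uniform sampling of negatives
    return target, references + negatives

Each (target, reference) pair is then encoded with the forward character-level LSTM and scored with cosine similarity, feeding the margin loss sketched earlier.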
GEM-SciDuet-train-112#paper-1298#slide-8
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
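Illustrative note on the generation procedure described above: ten candidate lines are generated, their pentameter losses are turned into a sampling distribution with a low-temperature softmax, and one line is drawn. The sketch below is an approximation; pick_line and pentameter_loss are placeholder names, and the sign convention (lower loss gets higher probability) is an assumption, since the paper only states that the losses are softmax-normalised and sampled with temperature 0.1.

import torch

def pick_line(candidate_lines, pentameter_loss, temperature=0.1):
    # candidate_lines: the 10 candidate sonnet lines from the language model
    # pentameter_loss: callable returning L_pm for a line (lower = closer to
    # iambic pentameter); it stands in for the trained pentameter model.
    losses = torch.tensor([pentameter_loss(line) for line in candidate_lines])
    probs = torch.softmax(-losses / temperature, dim=0)
    idx = torch.multinomial(probs, num_samples=1).item()
    return candidate_lines[idx]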
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-8
Joint Training
I All components trained together by treating each component as a sub-task in a multitask learning setting. I Although the components (LM, PM and RM) appear to be disjointed, shared parameters allow the components to mutually influence each other during training. I If each component is trained separately, PM performs poorly.
I All components trained together by treating each component as a sub-task in a multitask learning setting. I Although the components (LM, PM and RM) appear to be disjointed, shared parameters allow the components to mutually influence each other during training. I If each component is trained separately, PM performs poorly.
[]
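Illustrative note on the joint training described in this record: the language, pentameter and rhyme models are optimised together as sub-tasks that share parameters (notably the character embeddings), so gradients from each sub-task update the shared weights. A generic multi-task step might look like the sketch below; the .loss methods and the single optimizer are assumptions (the paper in fact uses Adagrad for the language model and Adam for the pentameter and rhyme models), and no per-task loss weighting is shown.

def joint_training_step(batch, lm, pm, rm, optimizer):
    # lm, pm, rm: language, pentameter and rhyme sub-models (PyTorch-style
    # modules assumed); they share the character embedding matrix, so every
    # sub-task loss contributes gradients to it.
    optimizer.zero_grad()
    loss = lm.loss(batch) + pm.loss(batch) + rm.loss(batch)
    loss.backward()
    optimizer.step()
    return loss.item()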
GEM-SciDuet-train-112#paper-1298#slide-9
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-9
Evaluation Crowdworkers
I Crowdworkers are presented with a pair of poems (one machine-generated and one human-written), and asked to guess which is the human-written one. I LM: vanilla LSTM language model; I LM**: LSTM language model that incorporates both character encodings and preceding context; I LM**+PM+RM: the full model, with joint training of the language, pentameter and rhyme models.
I Crowdworkers are presented with a pair of poems (one machine-generated and one human-written), and asked to guess which is the human-written one. I LM: vanilla LSTM language model; I LM**: LSTM language model that incorporates both character encodings and preceding context; I LM**+PM+RM: the full model, with joint training of the language, pentameter and rhyme models.
[]
GEM-SciDuet-train-112#paper-1298#slide-10
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
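As an illustration of the stress-pattern extraction procedure described in the pentameter evaluation above, a minimal sketch follows. The 0.20 attention threshold and the ten alternating stresses come from the text; the array layout, variable names and the use of NumPy are assumptions, not the authors' code.

```python
import numpy as np

def extract_stress_patterns(attention, word_char_positions, threshold=0.20):
    """attention: (10, num_chars) attention weights over one sonnet line;
    word_char_positions: dict mapping each word to its character indices."""
    patterns = {word: [] for word in word_char_positions}
    for t in range(10):                        # the ten iambic time steps
        stress = "S-" if t % 2 == 0 else "S+"  # 1st step unstressed, 2nd stressed, ...
        for word, positions in word_char_positions.items():
            # append the stress to a word if any of its characters
            # receives attention >= threshold at this time step
            if np.max(attention[t, positions]) >= threshold:
                patterns[word].append(stress)
    return patterns
```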
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-10
Evaluation Crowdworkers 2
- Accuracy improves from LM to LM** to LM**+PM+RM, indicating generated quatrains are less distinguishable from human-written ones.
- Are workers judging poems using just rhyme?
- A test with LM**+RM reveals that's the case.
- Meter/stress is largely ignored by laypersons in poetry evaluation.
- Accuracy improves from LM to LM** to LM**+PM+RM, indicating generated quatrains are less distinguishable from human-written ones.
- Are workers judging poems using just rhyme?
- A test with LM**+RM reveals that's the case.
- Meter/stress is largely ignored by laypersons in poetry evaluation.
[]
GEM-SciDuet-train-112#paper-1298#slide-11
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
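The weight sharing between the output matrix and the word-embedding matrix described above, W_out = tanh(W_wrd W_prj), can be sketched as follows. PyTorch, the class name and the initialisation are assumptions; only the tying itself follows the text.

```python
import torch
import torch.nn as nn

class TiedSoftmax(nn.Module):
    """Output layer that derives W_out from the shared embedding matrix W_wrd."""
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)                        # W_wrd, shared with the decoder input
        self.projection = nn.Parameter(torch.randn(emb_dim, hidden_dim) * 0.01)   # W_prj
        self.bias = nn.Parameter(torch.zeros(vocab_size))                         # b_out

    def forward(self, decoder_state):
        w_out = torch.tanh(self.embedding.weight @ self.projection)   # (vocab, hidden)
        return decoder_state @ w_out.t() + self.bias                  # logits over the vocabulary
```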
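The margin-based rhyme loss given above, L_rm = max(0, δ − top(Q, 1) + top(Q, 2)) with Q the cosine similarities between the target word and the reference words, can be sketched as below. The encodings, the margin value and the use of PyTorch are placeholders rather than the original implementation.

```python
import torch
import torch.nn.functional as F

def rhyme_margin_loss(target_enc, reference_encs, delta=0.5):
    """target_enc: (d,) encoding of the target word;
    reference_encs: (n, d) encodings of reference and sampled negative words."""
    sims = F.cosine_similarity(target_enc.unsqueeze(0).expand_as(reference_encs),
                               reference_encs, dim=1)        # the set Q
    top2 = torch.topk(sims, k=2).values                      # largest and second-largest similarity
    return torch.clamp(delta - top2[0] + top2[1], min=0.0)   # max(0, delta - top1 + top2)
```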
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-11
Evaluation Expert
Model | Meter | Rhyme | Readability | Emotion
- A literature expert is asked to judge poems on the quality of meter, rhyme, readability and emotion.
- The full model has the highest meter and rhyme ratings, even higher than human, reflecting that poets regularly break rules.
- Despite excellent form, machine-generated poems are easily distinguished due to lower emotional impact and readability.
- The vanilla language model (LM) captures meter surprisingly well.
Model | Meter | Rhyme | Readability | Emotion
- A literature expert is asked to judge poems on the quality of meter, rhyme, readability and emotion.
- The full model has the highest meter and rhyme ratings, even higher than human, reflecting that poets regularly break rules.
- Despite excellent form, machine-generated poems are easily distinguished due to lower emotional impact and readability.
- The vanilla language model (LM) captures meter surprisingly well.
[]
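The pentameter-guided line selection used during generation (generate several candidate lines, then sample one according to their pentameter losses at temperature 0.1) can be sketched as below. `generate_line` and `pentameter_loss` stand in for the relevant model calls, and negating the losses before the softmax (so that lower loss means higher probability) is an assumption about the intended direction.

```python
import numpy as np

def pick_line(generate_line, pentameter_loss, n_candidates=10, temperature=0.1):
    candidates = [generate_line() for _ in range(n_candidates)]
    losses = np.array([pentameter_loss(line) for line in candidates], dtype=float)
    logits = -losses / temperature                  # lower pentameter loss -> higher score
    probs = np.exp(logits - logits.max())           # numerically stable softmax
    probs /= probs.sum()
    return candidates[np.random.choice(len(candidates), p=probs)]
```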
GEM-SciDuet-train-112#paper-1298#slide-12
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
"We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2)); • LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014); 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have non-alternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g. 1st time step = S − ) to the word if any of its characters receives an attention ≥ 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017).", "22 The WFST maps a sequence of words to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and the x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.",
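The stress-extraction procedure just described (iterate over the ten pentameter steps and append a stress to a word whenever any of its characters receives attention ≥ 0.20) can be sketched as follows. The data layout, i.e. a 10 × num_chars attention matrix and per-word character spans, as well as the function name, are our assumptions for illustration.

```python
def extract_stress_patterns(word_spans, attention, threshold=0.20):
    # word_spans: {word: (start, end)} character offsets into the sonnet line.
    # attention: 10 rows (one per pentameter step) of per-character attention weights.
    # Step t carries S- when t is even and S+ when t is odd (1st step = S-).
    patterns = {word: [] for word in word_spans}
    for t in range(10):
        stress = "S-" if t % 2 == 0 else "S+"
        for word, (start, end) in word_spans.items():
            if max(attention[t][start:end]) >= threshold:
                patterns[word].append(stress)
    return patterns

# Hypothetical usage: spans = {"shall": (0, 5), "i": (6, 7)} for the line "shall i";
# the returned pattern for each word is compared against the CMU dictionary.
```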
"Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score ≥ 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e. it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs - determined using the CMU dictionary - that have low (high) cosine similarity.", "\"Cos\" denotes the system predicted cosine similarity for the word pair.", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any word-ending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017), we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicating better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https://github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θ w 1 ,w 2 ≥ 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-12
Summary
I We introduce a joint neural model that learns language, rhyme and stress in an I We encode assumptions we have about the rhyme and stress in the architecture of I Model can be adapted to poetry in other languages. I We assess the quality of generated poems using judgements from crowdworkers and a literature expert. I Our results suggest future research should look beyond forms, towards the substance of good poetry. I Code and data: https://github.com/jhlau/deepspeare
I We introduce a joint neural model that learns language, rhyme and stress in an I We encode assumptions we have about the rhyme and stress in the architecture of I Model can be adapted to poetry in other languages. I We assess the quality of generated poems using judgements from crowdworkers and a literature expert. I Our results suggest future research should look beyond forms, towards the substance of good poetry. I Code and data: https://github.com/jhlau/deepspeare
[]
GEM-SciDuet-train-112#paper-1298#slide-13
1298
Deep-speare: A joint neural model of poetic language, meter and rhyme
In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204 ], "paper_content_text": [ "Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes?", "Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; , the design of sculptures (Lehman et al., 2016) , and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016) .", "In this paper, we focus on a creative textual task: automatic poetry composition.", "A distinguishing feature of poetry is its aesthetic forms, e.g.", "rhyme and rhythm/meter.", "1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1 Noting that there are many notable divergences from this in the work of particular poets (e.g.", "Walt Whitman) and poetry types (such as free verse or haiku).", "Shall I compare thee to a summer's day?", "Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses.", "Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g.", "see Figure 1 ), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets.", "Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017) ; our results suggest that future research should look beyond meter and focus on improving readability.", "In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.", "2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and 
syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013) .", "The earliest attempt at using statistical modelling for poetry generation was Greene et al.", "(2010) , based on a language model paired with a stress model.", "Neural networks have dominated recent research.", "Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al.", "(2016) later simplified by incorporating an attention mechanism and training at the character level.", "For English poetry, Ghazvininejad et al.", "(2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation.", "Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry.", "A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically.", "Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); 3 an example quatrain is presented in Figure 1 .", "It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme.", "A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.", ": S − S + S − S + S − S + S − S + S − S + Shall I compare thee to a summer's day?", "where S − and S + denote unstressed and stressed syllables, respectively.", "A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG.", "There are a number of variants, however, mostly seen in the quatrains; e.g.", "AABB or ABBA are also common.", "We build our sonnet dataset from the latest image of Project Gutenberg.", "4 We first create a Train 2685 367K Dev 335 46K Test 335 46K Table 1 : SONNET dataset statistics.", "Partition #Sonnets #Words (generic) poetry document collection using the GutenTag tool (Brooke et al., 2015) , based on its inbuilt poetry classifier and rule-based structural tagging of individual poems.", "Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the \"BACKGROUND\" dataset), leaving the sonnet corpus (\"SONNET\").", "5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision.", "BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets.", "Statistics of SON-NET are given in Table 1 .", "6 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words.", "Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.", "7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain.", "For generation we use the language model to generate one word at a time, 
while applying the pentame-5 The following constraints were used to select sonnets: 8.0 mean words per line 11.5; 40 mean characters per line 51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line 0.59.", "6 The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English.", "The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model.", "7 There are a number of variations in addition to the standard pattern (Greene et al., 2010 ), but our model uses only the standard pattern as it is the dominant one.", "We train all the components together by treating each component as a sub-task in a multitask learning setting.", "8 Language Model The language model is a variant of an LSTM encoder-decoder model with attention (Bahdanau et al., 2015) , where the encoder encodes the preceding context (i.e.", "all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context.", "In the encoder, we embed context words z i using embedding matrix W wrd to yield w i , and feed them to a biLSTM 9 to produce a sequence of encoder hidden states h i = [ h i ; h i ].", "Next we apply a selective mechanism (Zhou et al., 2017) to each h i .", "By defining the representation of the whole context h = [ h C ; h 1 ] (where C is the number of words in the context), the selective mechanism filters the hidden states h i using h as follows: h i = h i σ(W a h i + U a h + b a ) where denotes element-wise product.", "Hereinafter W, U and b are used to refer to model parameters.", "The intuition behind this procedure is to selectively filter less useful elements from the context words.", "In the decoder, we embed words x t in the current line using the encoder-shared embedding matrix (W wrd ) to produce w t .", "In addition to the word embeddings, we also embed the characters of a word using embedding matrix W chr to produce c t,i , and feed them to a bidirectional (character-level) LSTM: u t,i = LSTM f (c t,i , u t,i−1 ) u t,i = LSTM b (c t,i , u t,i+1 ) (1) We represent the character encoding of a word by concatenating the last forward and first back-ward hidden states u t = [ u t,L ; u t,1 ], where L is the length of the word.", "We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).", "10 The rationale for sharing the parameters is that we see word stress and language model information as complementary.", "Given the word embedding w t and character encoding u t , we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: s t = LSTM([w t ; u t ], s t−1 ) (2) We attend s t to encoder hidden states h i and compute the weighted sum of h i as follows: e t i = v b tanh(W b h i + U b s t + b b ) a t = softmax(e t ) h * t = i a t i h i To combine s t and h * t , we use a gating unit similar to a GRU Chung et al., 2014) : s t = GRU(s t , h * t ).", "We then feed s t to a linear layer with softmax activation to produce the vocabulary distribution (i.e.", "softmax(W out s t + b out ), and optimise the model with standard categorical cross-entropy loss.", "We use dropout as regularisation (Srivastava et al., 2014) , and apply it to the 
encoder/decoder LSTM outputs and word embedding lookup.", "The same regularisation method is used for the pentameter and rhyme models.", "As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix W out and embedding matrix W wrd via a projection matrix W prj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017) : W out = tanh(W wrd W prj ) Pentameter Model This component is designed to capture the alternating iambic stress pattern.", "Given a sonnet line, 10 We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model.", "This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms.", "the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.", "11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters.", "Like the language model, the pentameter model is fashioned as an encoder-decoder network.", "In the encoder, we embed the characters using the shared embedding matrix W chr and feed them to the shared bidirectional character-level LSTM (Equation (1) ) to produce the character encodings for the sentence: u j = [ u j ; u j ].", "In the decoder, it attends to the characters to predict the stresses sequentially with an LSTM: g t = LSTM(u * t−1 , g t−1 ) where u * t−1 is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next, 12 and g t is fed to a linear layer with softmax activation to compute the stress distribution.", "The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially).", "We first compute µ t , the mean position of focus: µ t = σ(v c tanh(W c g t + U c µ t−1 + b c )) µ t = M × min(µ t + µ t−1 , 1.0) where M is the number of characters in the sonnet line.", "Given µ t , we can compute the (unnormalised) probability for each character position: p t j = exp −(j − µ t ) 2 2T 2 where standard deviation T is a hyper-parameter.", "We incorporate this position information when computing u * t : 13 u j = p t j u j d t j = v d tanh(W d u j + U d g t + b d ) f t = softmax(d t + log p t ) u * t = j b t j u j 11 That is, given the input line Shall I compare thee to a summer's day?", "the model is required to output S − S + S − S + S − S + S − S + S − S + , based on the syllable boundaries from Section 3.", "12 Initial input (u * 0 ) and state (g0) is a trainable vector and zero vector respectively.", "13 Spaces are masked out, so they always yield zero attention weights.", "Intuitively, the attention network incorporates the position information at two points, when computing: (1) d t j by weighting the character encodings; and (2) f t by adding the position log probabilities.", "This may appear excessive, but preliminary experiments found that this formulation produces the best performance.", "In a typical encoder-decoder model, the attended encoder vector u * t would be combined with the decoder state g t to compute the output probability distribution.", "Doing so, however, would result in a zero-loss model as it will quickly learn that it can simply ignore u * t to predict the alternating stresses based on g t .", "For this reason 
we use only u * t to compute the stress probability: P (S − ) = σ(W e u * t + b e ) which gives the loss L ent = t − log P (S t ) for the whole sequence, where S t is the target stress at time step t. We find the decoder still has the tendency to attend to the same characters, despite the incorporation of position information.", "To regularise the model further, we introduce two loss penalties: repeat and coverage loss.", "The repeat loss penalises the model when it attends to previously attended characters (See et al., 2017) , and is computed as follows: L rep = t j min(f t j , t−1 t=1 f t j ) By keeping a sum of attention weights over all previous time steps, we penalise the model when it focuses on characters that have non-zero history weights.", "The repeat loss discourages the model from focussing on the same characters, but does not assure that the appropriate characters receive attention.", "Observing that stresses are aligned with the vowels of a syllable, we therefore penalise the model when vowels are ignored: L cov = j∈V ReLU(C − 10 t=1 f t j ) where V is a set of positions containing vowel characters, and C is a hyper-parameter that defines the minimum attention threshold that avoids penalty.", "To summarise, the pentameter model is optimised with the following loss: L pm = L ent + αL rep + βL cov (3) where α and β are hyper-parameters for weighting the additional loss terms.", "Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data.", "Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones.", "Note that the model does not assume any particular rhyming scheme -it works as long as quatrains have rhyme.", "A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e.", "{(x t , x r ), (x t , x r+1 ), (x t , x r+2 )}, where x t is the target word and x r+i are the reference words.", "14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs.", "From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words.", "Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter.", "For each word x in the word pairs we embed the characters using the shared embedding matrix W chr and feed them to an LSTM to produce the character states u j .", "15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared.", "We represent the encoding of the whole word by taking the last state u = u L , where L is the character length of the word.", "Given the character encodings, we use a 14 E.g.", "for the quatrain in Figure 1 , a training example is {(day, temperate), (day, may), (day, date)}.", "15 The character embeddings are the only shared parameters in this model.", "margin-based loss to optimise the model: Q = {cos(u t , u r ), cos(u t , u r+1 ), ...} L rm = max(0, δ − top(Q, 1) + top(Q, 
2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter.", "Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others.", "This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014) .", "With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme.", "Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry.", "During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step.", "Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; 16 (3) any generated words with a frequency 2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.", "17 The first sonnet line is generated without using any preceding context.", "We next describe how to incorporate the pentameter model for generation.", "Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter.", "We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ).", "We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1.", "To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary.", "Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme.", "We resample the second word of a rhyming pair (e.g.", "when generating the second A in AABB) until it produces a cosine similarity 0.9.", "We also resample the second word of a nonrhyming pair (e.g.", "when generating the first B in AABB) by requiring a cosine similarity 0.7.", "18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes.", "This problem is resolved in our model by reversing the direction of the language model, i.e.", "generating the last word of each line first.", "We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input.", "Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert.", "A sample of machine-generated sonnets are included in the supplementary material.", "We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material).", "Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training.", "For optimisers, we use Adagrad (Duchi et al., 2011 ) for the language model, and Adam (Kingma and Ba, 2014) for 
the pentameter and rhyme models.", "We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens.", "Component Evaluation Language Model We use standard perplexity for evaluating the language model.", "In terms of model variants, we have: 19 • LM: Vanilla LSTM language model; • LM * : LSTM language model that incorporates character encodings (Equation (2) Table 2 : Component evaluation for the language model (\"Ppl\" = perplexity), pentameter model (\"Stress Acc\"), and rhyme model (\"Rhyme F1\").", "Each number is an average across 10 runs.", "• LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014) ; 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models.", "Perplexity on the test partition is detailed in Table 2.", "Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * .", "The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks.", "The full model LM * * +PM+RM, which learns stress and rhyme patterns simultaneously, also appears to improve the language model slightly.", "Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.", "21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded.", "We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns.", "To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g.", "1st time step = S − ) to the word if any of its characters receives an attention 0.20.", "For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017) .", "22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1-5 stresses and the full word sequence produces iambic pentameter.", "It is trained using the EM algorithm on a sonnet corpus developed by the authors.", "We present stress accuracy in Table 2 .", "LM * * +PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors.", "To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare's Sonnet 18 in Figure 3 .", "The y-axis represents the ten stresses of the iambic pentameter, and Table 3 : Rhyming errors produced by the model.", "Examples on the left (right) side are rhyming (non-rhyming) word pairs -determined using the CMU dictionary -that have low (high) cosine similarity.", "\"Cos\" denote the system predicted cosine similarity for the word pair.", "x-axis the characters of the sonnet line (punctuation removed).", "The attention network appears to perform very well, without any noticeable errors.", "The only minor 
exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y.", "Additional heatmaps for the full sonnet are provided in the supplementary material.", "Rhyme Model We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score.", "Word pairs that are not included in the dictionary are discarded.", "Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match.", "We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, 23 it is considered to rhyme.", "As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match.", "The extracted sequence can be interpreted as a proxy for the last syllable of a word.", "Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM.", "There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words.", "The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ.", "We train this model (Rhyme-EM) on our data 24 and use the learnt θ to decide whether two words rhyme.", "25 Table 2 details the rhyming results.", "The rhyme model performs very strongly at F1 > 0.90, well above both baselines.", "Rhyme-EM performs poorly because it operates at the word level (i.e.", "it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.", "26 To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3 .", "Examples on the left side are rhyming word pairs as determined by the CMU dictionary; right are non-rhyming pairs.", "Looking at the rhyming word pairs (left), it appears that these words tend not to share any wordending characters.", "For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.", "Generation Evaluation Crowdworker Evaluation Following Hopkins and Kiela (2017) , we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem.", "Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model).", "We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch.", "An equal number of human-written quatrains was sampled from the training partition.", "A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT.", "Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- 24 We use the original authors' implementation: https: //github.com/jvamvas/rhymediscovery.", "25 A word pair is judged to rhyme if θw 1 ,w 2 0.02; the threshold (0.02) is selected based on development performance.", "26 Word pairs that did not co-occur in a poem in the training data have rhyme strength of zero.", "Table 5 : Expert 
mean and standard deviation ratings on several aspects of the generated quatrains.", "cally, and they were restricted to do a maximum of 3 HITs.", "To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.", "Accuracy is presented in Table 4 .", "We see a steady decrease in accuracy (= improvement in model quality) from LM to LM * * to LM * * +PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones.", "Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM * * +RM, which is the full model without the pentameter component.", "We found identical accuracy (0.532), confirming our suspicion that crowd workers depend on only rhyme in their judgements.", "These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.", "Expert Judgement To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e.", "amount of emotion the poem evokes).", "All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best).", "In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human).", "The expert was blind to the source of each poem.", "The mean and standard deviation of the ratings are presented in Table 5 .", "We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets.", "This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997) .", "Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability.", "In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models.", "Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model.", "Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry.", "Conclusion We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets.", "We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert.", "Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well.", "Machine-generated generated poems, however, still underperform in terms of readability and emotion." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "5.1.1", "5.1.2", "5.1.3", "5.2.1", "5.2.2", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Sonnet Structure and Dataset", "Architecture", "Language Model", "Pentameter Model", "Rhyme Model", "Generation Procedure", "Experiments", "Language Model", "Pentameter Model", "Rhyme Model", "Crowdworker Evaluation", "Expert Judgement", "Conclusion" ] }
GEM-SciDuet-train-112#paper-1298#slide-13
Untitled
in darkness to behold him, with a light and him was filled with terror on my breast and saw its brazen ruler of the night but, lo! it was a monarch of the rest
in darkness to behold him, with a light and him was filled with terror on my breast and saw its brazen ruler of the night but, lo! it was a monarch of the rest
[]
GEM-SciDuet-train-113#paper-1300#slide-0
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the stateof-the-art performance of our proposed model on a new stock movement prediction dataset which we collected. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
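The labelling rule above (discard movement percents between -0.5% and 0.55%, label ≤ -0.5% as 0 and > 0.55% as 1) can be sketched as follows. The function name and the pandas-based layout are our own assumptions for illustration; this is not taken from the released StockNet code.

```python
import pandas as pd

def label_movements(adj_close: pd.Series) -> pd.Series:
    # adj_close: adjusted closing prices indexed by trading day, in chronological order.
    pct = adj_close.pct_change() * 100          # movement percent vs. the previous trading day
    labels = pd.Series(index=adj_close.index, dtype="float64")
    labels[pct <= -0.5] = 0                     # fall
    labels[pct > 0.55] = 1                      # rise
    # The first day and the (-0.5%, 0.55%] band are left unlabelled and dropped,
    # mirroring the removal of minor movements described in the text.
    return labels.dropna().astype(int)
```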
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
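The tweet-level attention of Eqs. (7)-(8) and the price normalization above amount to a handful of matrix operations. Below is a minimal NumPy sketch mirroring the notation (M_t, W_{m,u}, w_u); the shapes and function names are assumptions for illustration, not the authors' TensorFlow implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def corpus_embedding(M_t, W_mu, w_u):
    """Attention-weighted corpus embedding c_t (Eqs. 7-8).

    M_t:  (d_m, K) message embeddings of one trading day.
    W_mu: (d_m, d_m) projection matrix, w_u: (d_m,) scoring vector.
    """
    u_t = softmax(w_u @ np.tanh(W_mu @ M_t))   # (K,) attention over messages
    return M_t @ u_t                           # (d_m,) corpus embedding

def normalize_prices(raw):
    """p_t = raw_t / adj_close_{t-1} - 1 for [adj_close, high, low] rows.

    raw: (T, 3) array of raw prices; returns (T-1, 3) normalized vectors.
    """
    prev_close = raw[:-1, 0:1]
    return raw[1:] / prev_close - 1.0
```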
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
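Two building blocks of the variational movement decoder described above, the reparameterization of z_t (Eq. 17) and the linearly annealed KL weight used in the temporal objectives (Eq. 27), have simple forms when the posterior and prior are diagonal Gaussians. The sketch below illustrates them under that assumption and is not the authors' implementation:

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng(0)):
    """z = mu + sigma * eps with eps ~ N(0, I) (Eq. 17)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL( N(mu_q, diag exp(log_var_q)) || N(mu_p, diag exp(log_var_p)) ),
    summed over latent dimensions, as used per trading day."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    kl = 0.5 * (log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(axis=-1)

def kl_weight(step, warmup_steps=10_000):
    """Linearly increasing KL annealing weight lambda in (0, 1]."""
    return min(1.0, (step + 1) / warmup_steps)
```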
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-0
Who cares about stock movements
[Stock price chart with monthly axis labels] No one would be unhappy if they could predict stock movements
[Stock price chart with monthly axis labels] No one would be unhappy if they could predict stock movements
[]
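For evaluation, the paper content above reports accuracy and the Matthews Correlation Coefficient, where the product of the four marginal sums of the confusion matrix sits under a square root in the denominator, making the score robust to class skew. A small self-contained helper might look as follows (illustrative only):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den > 0 else 0.0

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)
```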
GEM-SciDuet-train-113#paper-1300#slide-1
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-1
Background
- Two mainstreams in finance: technical and fundamental analysis - Two main content resources in NLP: public news and social media - History of NLP models: feature engineering (before 2010), hierarchical attention nets (2018)
- Two mainstreams in finance: technical and fundamental analysis - Two main content resources in NLP: public news and social media - History of NLP models: feature engineering (before 2010), hierarchical attention nets (2018)
[]
GEM-SciDuet-train-113#paper-1300#slide-2
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
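The paper text above describes how binary movement labels are derived from adjusted closing prices (changes ≤ -0.5% labeled 0, changes > 0.55% labeled 1, samples in between discarded to balance the classes) and how each raw price vector is normalized by the previous adjusted close. The sketch below illustrates those two preprocessing steps under the stated thresholds; it assumes a plain chronological list of adjusted (close, high, low) prices, and the function and variable names are illustrative rather than taken from the released StockNet code.

```python
# Minimal sketch of the labeling and price-normalization steps described above.
# `prices` is assumed to be a chronologically ordered list of adjusted
# (close, high, low) tuples for one stock; names are illustrative only.

def label_movement(prev_close, close, lo=-0.005, hi=0.0055):
    """Return 1 (rise), 0 (fall), or None (minor movement, discarded)."""
    change = close / prev_close - 1.0
    if change > hi:
        return 1
    if change <= lo:
        return 0
    return None  # movements between the two thresholds are removed

def normalize_price(prev_close, close, high, low):
    """Normalize a raw price vector by the last adjusted closing price."""
    return [close / prev_close - 1.0,
            high / prev_close - 1.0,
            low / prev_close - 1.0]

prices = [(10.0, 10.2, 9.9), (10.1, 10.4, 10.0), (9.9, 10.1, 9.8)]
for prev, cur in zip(prices, prices[1:]):
    y = label_movement(prev[0], cur[0])
    p = normalize_price(prev[0], *cur)
    print(y, [round(v, 4) for v in p])
```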
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-2
However, it has never been easy
The market is highly stochastic, and we make temporally-dependent predictions from chaotic data.
The market is highly stochastic, and we make temporally-dependent predictions from chaotic data.
[]
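The Variational Movement Decoder described in the record above samples the latent driven factor with the reparameterization trick, z_t = mu_t + delta_t * eps with eps ~ N(0, I), and regularizes the approximate posterior against the prior with a KL term. The toy NumPy sketch below shows those two operations for diagonal Gaussians; the latent size and all names are illustrative assumptions and do not reproduce the StockNet implementation.

```python
import numpy as np

def sample_z(mu, log_var, rng=np.random.default_rng(0)):
    """Reparameterized sample z = mu + sigma * eps, with eps ~ N(0, I)."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(log_var_p - log_var_q
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Illustrative posterior and prior statistics for a 150-dimensional latent.
mu_q, log_var_q = np.zeros(150), np.full(150, -1.0)
mu_p, log_var_p = np.zeros(150), np.zeros(150)
z = sample_z(mu_q, log_var_q)
print(z.shape, kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p))
```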
GEM-SciDuet-train-113#paper-1300#slide-3
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the stateof-the-art performance of our proposed model on a new stock movement prediction dataset which we collected. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-3
Divide and Treat
When a company suffers from a major scandal on a trading day, its stock price will have a downtrend in the coming trading days. Public information needs time to be absorbed into movements over time (Luss and d'Aspremont, 2015), and thus is largely shared across temporally-close predictions
When a company suffers from a major scandal on a trading day, its stock price will have a downtrend in the coming trading days. Public information needs time to be absorbed into movements over time (Luss and d'Aspremont, 2015), and thus is largely shared across temporally-close predictions
[]
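At the objective level, the text above gives each trading day a likelihood term minus a KL term scaled by a linearly increasing annealing weight lambda (Eq. 27), and recomposes the per-day objectives with the temporal weights v (Eq. 29). A schematic sketch under those definitions; the per-day log-likelihoods, KL values, and schedule length are placeholders, not model outputs:

```python
import numpy as np

def kl_anneal(step: int, anneal_steps: int = 10_000) -> float:
    """Linearly increasing KL weight lambda in (0, 1] (KL annealing trick)."""
    return min(1.0, max(1e-6, step / anneal_steps))

def sample_objective(log_lik, kl, v, step):
    """f_t = log p(y_t | ...) - lambda * KL_t (Eq. 27); contribution = v @ f (Eq. 29)."""
    lam = kl_anneal(step)
    f = np.asarray(log_lik) - lam * np.asarray(kl)
    return float(np.asarray(v) @ f)

# placeholder values for one sample covering three trading days
log_lik = [-0.9, -0.7, -0.5]              # hypothetical log-likelihood terms
kl = [0.20, 0.15, 0.10]                   # hypothetical KL terms
v = [0.5 * 0.3, 0.5 * 0.7, 1.0]           # alpha * v_star for auxiliary days, 1 for main
print(sample_objective(log_lik, kl, v, step=4_000))
```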
GEM-SciDuet-train-113#paper-1300#slide-4
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-4
Divide and treat
Chaotic market information: Market Information Encoder. High market stochasticity: Variational Movement Decoder (random walk theory; Malkiel, 1999). Temporally-dependent prediction: Attentive Temporal Auxiliary. When a company suffers from a major scandal on a trading day, its stock price will have a downtrend in the coming trading days. Public information needs time to be absorbed into movements over time (Luss and d'Aspremont, 2015), and thus is largely shared across temporally-close predictions
Chaotic market information: Market Information Encoder. High market stochasticity: Variational Movement Decoder (random walk theory; Malkiel, 1999). Temporally-dependent prediction: Attentive Temporal Auxiliary. When a company suffers from a major scandal on a trading day, its stock price will have a downtrend in the coming trading days. Public information needs time to be absorbed into movements over time (Luss and d'Aspremont, 2015), and thus is largely shared across temporally-close predictions
[]
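The variational movement decoder above samples the latent driven factor with the reparameterization z = mu + delta * eps (Eq. 17) and regularizes the approximate posterior toward the learned prior with a KL term. A minimal sketch; the closed-form KL between diagonal Gaussians used below is standard VAE machinery assumed here rather than spelled out in the text, and the parameter values are toy inputs:

```python
import numpy as np

def reparameterize(mu: np.ndarray, log_var: np.ndarray, rng) -> np.ndarray:
    """z = mu + sigma * eps with eps ~ N(0, I) (Eq. 17)."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def kl_diag_gauss(mu_q, log_var_q, mu_p, log_var_p) -> float:
    """D_KL[ N(mu_q, var_q) || N(mu_p, var_p) ] for diagonal Gaussians."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * float(np.sum(log_var_p - log_var_q
                              + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0))

rng = np.random.default_rng(1)
mu_q, log_var_q = rng.normal(size=3), rng.normal(size=3)   # posterior parameters
mu_p, log_var_p = np.zeros(3), np.zeros(3)                 # prior parameters (toy)
print(reparameterize(mu_q, log_var_q, rng))
print(kl_diag_gauss(mu_q, log_var_q, mu_p, log_var_p))
```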
GEM-SciDuet-train-113#paper-1300#slide-5
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-5
Problem Formulation
I We estimate the binary movement where 1 denotes rise and 0 denotes fall I Target trading day: d I We use the market information comprising relevant tweets and historical prices in the lag [d − ∆d, d − 1], where ∆d is a fixed lag size
I We estimate the binary movement where 1 denotes rise and 0 denotes fall I Target trading day: d I We use the market information comprising relevant tweets and historical prices in the lag [d − ∆d, d − 1], where ∆d is a fixed lag size
[]
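To make the slide's binary-movement formulation concrete, a tiny sketch of the label in Eq. (1): rise (1) when the adjusted closing price goes up, fall (0) otherwise. The prices are invented for illustration.

    def movement_label(adj_close_d, adj_close_prev):
        # Eq. (1): y = 1 if p^c_d > p^c_{d-1}, else 0
        return int(adj_close_d > adj_close_prev)

    print(movement_label(102.3, 101.8))  # 1 -> rise
    print(movement_label(99.7, 101.8))   # 0 -> fall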
GEM-SciDuet-train-113#paper-1300#slide-6
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
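A hedged sketch of the sample filtering and labeling rule from the Data Collection passage above (thresholds of -0.5% and 0.55% on the movement percent), together with the ticker-symbol regex style it mentions. The helper names are hypothetical and not the paper's released preprocessing code.

    import re

    def label_movement(pct_change):
        # Keep and label a target by its movement percent; drop the minor-movement band.
        if pct_change <= -0.5:
            return 0      # fall
        if pct_change > 0.55:
            return 1      # rise
        return None       # between the thresholds: removed from the dataset

    goog = re.compile(r"\$GOOG\b")                                   # e.g. query for Google Inc.
    print(bool(goog.search("bullish on $GOOG before earnings")))     # True
    print([label_movement(x) for x in (-1.2, 0.1, 0.9)])             # [0, None, 1]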
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
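The bidirectional message embedding at the start of this passage (Eq. (4)-(6)) runs a forward GRU over the context preceding the stock symbol and a backward GRU over the context following it, then averages the two hidden states at the symbol. Below is a minimal numpy sketch with a generic GRU cell; the gate ordering, sizes, and symbol position are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x, h, W, U, b):
        # One generic GRU step; W, U, b stack the update, reset and candidate parameters.
        d = h.shape[0]
        z = sigmoid(W[:d] @ x + U[:d] @ h + b[:d])
        r = sigmoid(W[d:2*d] @ x + U[d:2*d] @ h + b[d:2*d])
        h_cand = np.tanh(W[2*d:] @ x + U[2*d:] @ (r * h) + b[2*d:])
        return (1.0 - z) * h + z * h_cand

    d_w, d_h, L, sym = 50, 100, 8, 4      # word size, hidden size, message length, $-symbol index (assumed)
    E = np.random.randn(L, d_w)           # word embeddings e_1..e_L of one message
    Wf, Uf, bf = np.random.randn(3*d_h, d_w), np.random.randn(3*d_h, d_h), np.zeros(3*d_h)
    Wb, Ub, bb = np.random.randn(3*d_h, d_w), np.random.randn(3*d_h, d_h), np.zeros(3*d_h)

    h_fwd = np.zeros(d_h)
    for i in range(sym + 1):              # Eq. (4): preceding context up to and including the symbol
        h_fwd = gru_step(E[i], h_fwd, Wf, Uf, bf)
    h_bwd = np.zeros(d_h)
    for i in range(L - 1, sym - 1, -1):   # Eq. (5): following context back to the symbol
        h_bwd = gru_step(E[i], h_bwd, Wb, Ub, bb)
    m = (h_fwd + h_bwd) / 2.0             # Eq. (6): message embedding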
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-6
Generative Process
I T eligible trading days in the ∆d-day lag I Encode observed market information as a random variable X = [x1; ...; xT] I Generate the latent driven factor Z = [z1; ...; zT] I Generate stock movements y
I T eligible trading days in the ∆d-day lag I Encode observed market information as a random variable X = [x1; ...; xT] I Generate the latent driven factor Z = [z1; ...; zT] I Generate stock movements y
[]
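The factorization behind this "Generative Process" slide is garbled in the extracted paper text above; restated in LaTeX from the surrounding definitions, Eq. (2) reads:

    p_\theta(y, Z \mid X) = p_\theta(y_T \mid X, Z)\, p_\theta(z_T \mid z_{<T}, X) \prod_{t=1}^{T-1} p_\theta(y_t \mid x_{\le t}, z_t)\, p_\theta(z_t \mid z_{<t}, x_{\le t}, y_t)

so the main movement y_T is generated from the full input and latent sequence, while each auxiliary movement y_t depends only on the history up to t.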
GEM-SciDuet-train-113#paper-1300#slide-7
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
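For reference, the temporal split described in the Data Collection passage above can be written as a small configuration; the dates are converted from the DD/MM/YYYY form used in the text to ISO form, and the movement counts are quoted from the text.

    SPLITS = {
        "train": ("2014-01-01", "2015-08-01"),  # 20,339 movements
        "dev":   ("2015-08-01", "2015-10-01"),  #  2,555 movements
        "test":  ("2015-10-01", "2016-01-01"),  #  3,720 movements
    }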
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
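Before the derivation continues below (Eqs. (11)-(22)), a minimal numerical sketch of the two ingredients it relies on: the reparameterization trick used to sample z_t and the closed-form KL divergence between two diagonal Gaussians (variational posterior vs. prior). Shapes, names and example parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (cf. Eq. 17)."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """KL[N(mu_q, var_q) || N(mu_p, var_p)] for diagonal covariances,
    the per-step term between the posterior q_phi and the prior p_theta."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(log_var_p - log_var_q
                        + (var_q + (mu_q - mu_p) ** 2) / var_p
                        - 1.0, axis=-1)

rng = np.random.default_rng(0)
mu_q, log_var_q = np.zeros(4), np.zeros(4)               # posterior N(0, I)
mu_p, log_var_p = np.ones(4), np.log(np.full(4, 2.0))    # prior N(1, 2I)
z_sample = reparameterize(mu_q, log_var_q, rng)
print(z_sample.shape, float(kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p)))
```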
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq. (2, 9) into Eq. (10):", "L(θ, φ; X, y) = Σ_{t=1}^{T} { E_{q_φ(z_t | z_{<t}, x_{≤t}, y_t)}[log p_θ(y_t | x_{≤t}, z_{≤t})] − D_KL[q_φ(z_t | z_{<t}, x_{≤t}, y_t) ‖ p_θ(z_t | z_{<t}, x_{≤t})] } ≤ log p_θ(y|X), (11) where the likelihood term is given by Eq. (12).", "Li et al. (2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p_θ(z_t) ∼ N(0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq. (11), we provide a more theoretically rigorous lower bound where the KL term with p_θ(z_t | z_{<t}, x_{≤t}) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p_θ(y_t | x_{≤t}, z_{≤t}) = p_θ(y_t | x_{≤t}, z_t) if t < T, and p_θ(y_T | X, Z) if t = T. (12)", "Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h^s_t = GRU(x_t, h^s_{t−1}). (13)", "We let the approximator q_φ(z_t | z_{<t}, x_{≤t}, y_t) be subject to a multivariate Gaussian distribution N(µ, δ² I).", "We calculate µ and δ as µ_t = W^φ_{z,µ} h^z_t + b^φ_µ (14) and log δ²_t = W^φ_{z,δ} h^z_t + b^φ_δ (15), and the shared hidden representation h^z_t as h^z_t = tanh(W^φ_z [z_{t−1}, x_t, h^s_t, y_t] + b^φ_z) (16), where W^φ_{z,µ}, W^φ_{z,δ}, W^φ_z are weight matrices and b^φ_µ, b^φ_δ, b^φ_z are biases.", "Since the Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z_t as z_t = µ_t + δ_t ⊙ ε (17), where ⊙ denotes an element-wise product.", "The noise term ε ∼ N(0, I) naturally involves stochastic signals in our model.", "Similarly, we let the prior p_θ(z_t | z_{<t}, x_{≤t}) ∼ N(µ′, δ′² I).", "Its calculation is the same as that of the posterior except for the absence of y_t and independent model parameters: µ′_t = W^θ_{o,µ} h^{z′}_t + b^θ_µ (18) and log δ′²_t = W^θ_{o,δ} h^{z′}_t + b^θ_δ (19), where h^{z′}_t = tanh(W^θ_z [z_{t−1}, x_t, h^s_t] + b^θ_z). (20)", "Following Zhang et al. (2016), differently from the posterior, we set the prior z_t = µ′_t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g_t = tanh(W_g [x_t, h^s_t, z_t] + b_g) (21) and ỹ_t = ζ(W_y g_t + b_y), t < T (22), where W_g, W_y are weight matrices and b_g, b_y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y_T depends on z_{<T} and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictions Ỹ* = [ỹ_1; . . . ; ỹ_{T−1}], we incorporate two-fold auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3, temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v^i = w_i^⊤ tanh(W_{g,i} G*) (23), v^d = g_T^⊤ tanh(W_{g,d} G*) (24) and v* = ζ(v^i ⊙ v^d) (25), where W_{g,i}, W_{g,d} ∈ R^{d_g × d_g} and w_i ∈ R^{d_g × 1} are model parameters.", "The integrated representations G* = [g_1; . . . ; g_{T−1}] and g
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-7
Factorization
For multi-task learning, we model p_θ(y|X) = ∫_Z p_θ(y, Z|X) dZ instead of p_θ(y_T|X), with per-step factors p_θ(y_t | x_{≤t}, z_t) · p_θ(z_t | z_{<t}, x_{≤t}, y_t)
For multi-task learning, we model p_θ(y|X) = ∫_Z p_θ(y, Z|X) dZ instead of p_θ(y_T|X), with per-step factors p_θ(y_t | x_{≤t}, z_t) · p_θ(z_t | z_{<t}, x_{≤t}, y_t)
[]
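The factorization on this slide is optimized with the temporal attention and auxiliary weighting described in Eqs. (23)-(28) of the paper content above. The numpy sketch below shows only that weighting arithmetic; the dimensions, random parameter initializations and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
d_g, T = 8, 5                                   # hypothetical feature size and lag length
G_star = rng.standard_normal((d_g, T - 1))      # integrated representations g_1 ... g_{T-1}
g_T = rng.standard_normal((d_g, 1))             # representation for the main target day
W_gi, W_gd = rng.standard_normal((d_g, d_g)), rng.standard_normal((d_g, d_g))
w_i = rng.standard_normal((d_g, 1))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

v_i = (w_i.T @ np.tanh(W_gi @ G_star)).ravel()  # information score, Eq. (23)
v_d = (g_T.T @ np.tanh(W_gd @ G_star)).ravel()  # dependency score, Eq. (24)
v_star = softmax(v_i * v_d)                     # normalized attention weights, Eq. (25)

alpha = 0.5                                     # auxiliary weight, tuned on the dev set
v = np.concatenate([alpha * v_star, [1.0]])     # temporal weight vector, Eq. (28)
print(v_star.round(3), v.round(3))
```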
GEM-SciDuet-train-113#paper-1300#slide-8
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the stateof-the-art performance of our proposed model on a new stock movement prediction dataset which we collected. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-8
Primary components
Market Information Encoder (MIE): encodes tweets and prices to X. Variational Movement Decoder (VMD): infers Z with X, y and decodes stock movements y from X, Z. Attentive Temporal Auxiliary (ATA): integrates temporal loss for training.
Market Information Encoder (MIE): encodes tweets and prices to X. Variational Movement Decoder (VMD): infers Z with X, y and decodes stock movements y from X, Z. Attentive Temporal Auxiliary (ATA): integrates temporal loss for training.
[]
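To make the component list above easier to follow, here is a purely schematic composition of the three modules in Python; all class names, shapes and the toy loss are placeholders rather than the authors' implementation.

```python
import numpy as np

class MarketInformationEncoder:
    """MIE: turns per-day tweet embeddings and price features into x_t."""
    def encode(self, tweets, prices):
        corpus_emb = tweets.mean(axis=0)          # placeholder corpus embedding
        return np.concatenate([corpus_emb, prices])

class VariationalMovementDecoder:
    """VMD: infers a latent driven factor z_t and decodes a movement y_t."""
    def decode(self, x_t, rng):
        z_t = rng.normal(size=4)                  # stand-in latent factor
        logit = float(x_t.sum() + z_t.sum())      # stand-in score
        return z_t, 1 / (1 + np.exp(-logit))

class AttentiveTemporalAuxiliary:
    """ATA: weights auxiliary objectives when forming the training loss."""
    def combine(self, aux_losses, main_loss, alpha=0.5):
        v_star = np.ones(len(aux_losses)) / max(len(aux_losses), 1)
        return float(alpha * v_star @ np.asarray(aux_losses) + main_loss)

# Wiring the three components together over a 3-day lag (toy data).
rng = np.random.default_rng(0)
mie, vmd, ata = MarketInformationEncoder(), VariationalMovementDecoder(), AttentiveTemporalAuxiliary()
aux_losses = []
for _ in range(3):
    x_t = mie.encode(tweets=rng.normal(size=(5, 8)), prices=rng.normal(size=3))
    _, p_up = vmd.decode(x_t, rng)
    aux_losses.append(-np.log(p_up))              # toy per-day loss
print("training loss:", ata.combine(aux_losses[:-1], aux_losses[-1]))
```

The only point of the sketch is the data flow: per-day market inputs from MIE feed the VMD, and ATA reweights the per-day losses before training.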
GEM-SciDuet-train-113#paper-1300#slide-9
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
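The labeling rule of Eq. (1), combined with the -0.5% / 0.55% thresholds used to discard near-flat days, can be sketched as follows; the price series is invented for illustration.

```python
def label_movements(adj_close, lower=-0.005, upper=0.0055):
    """Label each day's movement from adjusted closing prices.

    Returns a list of (day_index, label) pairs, where label is 1 for rise
    and 0 for fall; days whose relative change falls inside (lower, upper]
    are treated as near-flat and skipped, mirroring how 38.72% of candidate
    targets were removed when building the dataset.
    """
    samples = []
    for d in range(1, len(adj_close)):
        change = adj_close[d] / adj_close[d - 1] - 1.0
        if change <= lower:
            samples.append((d, 0))          # fall
        elif change > upper:
            samples.append((d, 1))          # rise
        # otherwise: movement too small, not used as a target
    return samples

# Toy adjusted closing prices for one stock over six trading days.
prices = [100.0, 101.2, 101.3, 99.8, 99.9, 100.7]
print(label_movements(prices))   # [(1, 1), (3, 0), (5, 1)]
```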
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
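Two concrete MIE operations described above, attention pooling of message embeddings into the corpus embedding c_t (Eqs. 7-8) and normalising the raw price vector by the previous adjusted close, are sketched below with NumPy; the dimensions and the random inputs are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def corpus_embedding(M, W_mu, w_u):
    """Attention pooling over a day's message embeddings.

    M    : (d_m, K) matrix of K message embeddings,
    W_mu : (d_m, d_m) projection, w_u : (d_m,) scoring vector.
    Returns the corpus embedding c_t = M u_t (Eqs. 7-8).
    """
    u = softmax(w_u @ np.tanh(W_mu @ M))      # (K,) attention weights
    return M @ u                               # (d_m,)

def normalise_prices(p_raw, prev_close):
    """Turn raw [close, high, low] prices into relative changes
    w.r.t. the previous adjusted close: p_t = p~_t / p^c_{t-1} - 1."""
    return np.asarray(p_raw) / prev_close - 1.0

rng = np.random.default_rng(1)
d_m, K = 6, 4                                  # toy sizes
c_t = corpus_embedding(rng.normal(size=(d_m, K)),
                       rng.normal(size=(d_m, d_m)),
                       rng.normal(size=d_m))
p_t = normalise_prices([101.3, 102.0, 100.9], prev_close=100.5)
x_t = np.concatenate([c_t, p_t])               # market input for the decoder
print(x_t.shape)                               # (9,)
```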
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-9
StockNet architecture
StockNet architecture diagram labels: Message Embedding Layer (Bi-GRUs); Market Information Encoder (MIE) with historical prices input and attention; variational encoder and decoder (h_enc, h_dec) with latent z ~ N(0, I) (VAEs); Temporal Attention.
StockNet architecture diagram labels: Message Embedding Layer (Bi-GRUs); Market Information Encoder (MIE) with historical prices input and attention; variational encoder and decoder (h_enc, h_dec) with latent z ~ N(0, I) (VAEs); Temporal Attention.
[]
GEM-SciDuet-train-113#paper-1300#slide-10
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
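The retrieval step mentioned above, querying tweets with regexes built from NASDAQ ticker symbols such as \$GOOG\b, can be reproduced in a few lines; the tweets and ticker list below are made up for illustration.

```python
import re

def build_ticker_pattern(symbols):
    """Compile one regex that matches any cashtag like $GOOG as a whole token,
    mirroring queries of the form \\$GOOG\\b used to collect the Twitter data."""
    alternatives = "|".join(re.escape(s) for s in symbols)
    return re.compile(r"\$(" + alternatives + r")\b", re.IGNORECASE)

def retrieve(tweets, symbols):
    """Group tweets by the ticker symbols they mention."""
    pattern = build_ticker_pattern(symbols)
    hits = {s.upper(): [] for s in symbols}
    for text in tweets:
        for match in set(m.upper() for m in pattern.findall(text)):
            hits[match].append(text)
    return hits

tweets = [
    "$AAPL beats earnings estimates again",
    "thinking of selling $goog before the split",
    "no tickers mentioned here",
]
print(retrieve(tweets, ["AAPL", "GOOG"]))
```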
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
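Equations (4)-(6) describe a forward GRU over the words up to the stock symbol and a backward GRU over the words from the end back to it, with the two hidden states at the symbol averaged into the message embedding m; the NumPy sketch below re-implements that step. Sizes, initialisation and the toy sentence length are assumptions, not the authors' settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: h' = (1 - z) * h + z * tanh(Wc x + Uc (r * h))."""
    def __init__(self, d_in, d_h, rng):
        s = 1.0 / np.sqrt(d_in + d_h)
        self.Wz, self.Wr, self.Wc = (rng.uniform(-s, s, (d_h, d_in)) for _ in range(3))
        self.Uz, self.Ur, self.Uc = (rng.uniform(-s, s, (d_h, d_h)) for _ in range(3))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)
        r = sigmoid(self.Wr @ x + self.Ur @ h)
        c = np.tanh(self.Wc @ x + self.Uc @ (r * h))
        return (1.0 - z) * h + z * c

def message_embedding(E, sym_idx, fwd, bwd, d_h):
    """Embed one message around the stock symbol at position sym_idx.

    The forward GRU reads words 0..sym_idx, the backward GRU reads
    words L-1..sym_idx, and the two hidden states at the symbol are
    averaged to give the message embedding m (Eqs. 4-6).
    """
    h_f = np.zeros(d_h)
    for i in range(sym_idx + 1):
        h_f = fwd.step(E[i], h_f)
    h_b = np.zeros(d_h)
    for i in range(len(E) - 1, sym_idx - 1, -1):
        h_b = bwd.step(E[i], h_b)
    return (h_f + h_b) / 2.0

rng = np.random.default_rng(3)
d_w, d_h, L = 10, 6, 7                 # toy word-embedding / hidden sizes, length
E = rng.normal(size=(L, d_w))          # word embeddings of one tweet
m = message_embedding(E, sym_idx=3, fwd=GRUCell(d_w, d_h, rng),
                      bwd=GRUCell(d_w, d_h, rng), d_h=d_h)
print(m.shape)                         # (6,)
```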
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-10
Variational Movement Decoder
I Goal: recurrently infer Z from X, y and decode y from X, Z I Challenge: posterior inference is intractable in our factorized model I Neural approximation and reparameterization I Adopt a posterior approximator
I Goal: recurrently infer Z from X, y and decode y from X, Z I Challenge: posterior inference is intractable in our factorized model I Neural approximation and reparameterization I Adopt a posterior approximator
[]
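For reference, the evaluation metrics used in the experiments above reduce to a few lines of plain Python. The sketch below covers accuracy and the Matthews Correlation Coefficient of Eq. (30); the zero-denominator guard is a common convention added here, not something specified in the paper.

```python
import math

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def matthews_corrcoef(tp, fp, tn, fn):
    """Matthews Correlation Coefficient (Eq. 30); robust to class skew."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # conventional fallback when any marginal count is zero
    return (tp * tn - fp * fn) / denom
```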
GEM-SciDuet-train-113#paper-1300#slide-11
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-11
Interface between VMD and ATA
I Integrate the deterministic feature ht and the latent variable zt: gt = tanh(Wg[xt, hst, zt] + bg) I Decode movement hypothesis: first auxiliary targets, then main target I Temporal attention: v
I Integrate the deterministic feature ht and the latent variable zt: gt = tanh(Wg[xt, hst, zt] + bg) I Decode movement hypothesis: first auxiliary targets, then main target I Temporal attention: v
[]
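Training recomposes the per-day objectives with the temporal attention weights and anneals the KL term with a linearly increasing weight. The snippet below sketches Eqs. (27)-(29) for a single sample; the `warmup_steps` schedule length is an assumption, since the paper only states that the KL weight increases linearly during training.

```python
import numpy as np

def kl_weight(step, warmup_steps):
    # Linearly-increasing KL annealing weight, lambda in (0, 1].
    return min(1.0, (step + 1) / warmup_steps)

def sample_objective(log_liks, kls, v, lam):
    """Weighted objective for one sample (Eqs. 27-29).

    log_liks, kls: per-trading-day likelihood and KL terms, shape (T,)
    v:             temporal weights [alpha * v_star, 1], shape (T,)
    """
    f = log_liks - lam * kls   # temporal objectives f_t
    return float(v @ f)        # averaged over samples to give F, then maximized
```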
GEM-SciDuet-train-113#paper-1300#slide-12
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
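As a rough illustration of the Market Information Encoder step described above (message-level attention in Eq. 7-8 plus price normalization), the numpy sketch below uses random stand-ins for the learned parameters W_{m,u} and w_u; it is only a sketch, not the authors' implementation.

```python
# Minimal numpy sketch of the MIE step: attention over message embeddings,
# price normalisation, and concatenation into the market input x_t.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d_m, K = 100, 40                      # message embedding size, messages per day
M_t = np.random.randn(d_m, K)         # message embedding matrix for day t
W_mu = np.random.randn(d_m, d_m)      # stand-in for the learned W_{m,u}
w_u = np.random.randn(d_m)            # stand-in for the learned w_u

u_t = softmax(w_u @ np.tanh(W_mu @ M_t))   # attention weights over messages (Eq. 7)
c_t = M_t @ u_t                            # corpus embedding, shape (d_m,)  (Eq. 8)

# Normalise the raw price vector [close, high, low] by the previous close.
p_raw, prev_close = np.array([101.2, 102.0, 100.1]), 100.0
p_t = p_raw / prev_close - 1.0

x_t = np.concatenate([c_t, p_t])           # market information input for day t
```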
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-12
Attentive Temporal Auxiliary
Break down the approximated L into temporal objectives f ∈ R^{T×1}: f_t = log p_θ(y_t | x_{≤t}, z_{≤t}) − λ D_KL[q_φ(z_t | z_{<t}, x_{≤t}, y_t) || p_θ(z_t | z_{<t}, x_{≤t})]. Reuse v* to build the final temporal weight vector v ∈ R^{1×T}, v = [α v*, 1], where α controls the overall auxiliary effects.
Break down the approximated L into temporal objectives f ∈ R^{T×1}: f_t = log p_θ(y_t | x_{≤t}, z_{≤t}) − λ D_KL[q_φ(z_t | z_{<t}, x_{≤t}, y_t) || p_θ(z_t | z_{<t}, x_{≤t})]. Reuse v* to build the final temporal weight vector v ∈ R^{1×T}, v = [α v*, 1], where α controls the overall auxiliary effects.
[]
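A minimal sketch of the objective-level weighting summarised in the slide content above (Eq. 27-29): per-day objectives f are combined with the attention-derived weights v = [α·v*, 1]. The f and v* values below are placeholders rather than model outputs.

```python
# Illustrative numpy sketch of the attentive temporal auxiliary objective:
# per-day objectives f_t are weighted by v = [alpha * v_star, 1].
import numpy as np

T, alpha = 5, 0.5                      # lag-aligned trading days, auxiliary weight
f = np.random.randn(T)                 # temporal objectives f_t (likelihood - lambda*KL)
v_star = np.random.rand(T - 1)
v_star /= v_star.sum()                 # normalised attention over auxiliary days

v = np.concatenate([alpha * v_star, [1.0]])   # weight 1 for the main prediction
F_sample = v @ f                       # one sample's contribution to the objective

# Training maximises the mini-batch average of F_sample over model parameters.
```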
GEM-SciDuet-train-113#paper-1300#slide-13
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-13
Experimental setup
Dataset: two-year daily price movements of 88 stocks, with two components, a Twitter dataset and a historical price dataset. Development set: 2 months, 2,555 movements. Lag window: 5. Metrics: accuracy and Matthews Correlation Coefficient (MCC). Comparative study: five baselines from different genres and five StockNet variations.
Dataset: two-year daily price movements of 88 stocks, with two components, a Twitter dataset and a historical price dataset. Development set: 2 months, 2,555 movements. Lag window: 5. Metrics: accuracy and Matthews Correlation Coefficient (MCC). Comparative study: five baselines from different genres and five StockNet variations.
[]
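The MCC metric listed in the experimental setup (Eq. 30 in the paper) can be computed directly from confusion-matrix counts; the sketch below uses made-up counts, and the same quantity is available from label vectors via sklearn.metrics.matthews_corrcoef.

```python
# Simple sketch of the Matthews Correlation Coefficient from confusion-matrix
# counts; the counts here are illustrative only.
from math import sqrt

def mcc(tp: int, fp: int, tn: int, fn: int) -> float:
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(mcc(tp=1000, fp=800, tn=1100, fn=820), 6))
```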
GEM-SciDuet-train-113#paper-1300#slide-14
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
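The labeling rule of Eq. (1) and the -0.5% / 0.55% thresholds described above can be summarized in a minimal Python sketch. This is not taken from the released StockNet code; the function and variable names are illustrative only.

```python
# Minimal sketch: build binary movement targets from adjusted closing prices,
# following Eq. (1) and the -0.5% / 0.55% thresholds described above.
# `adj_close` maps a trading-day index to the adjusted closing price (illustrative name).

def movement_label(adj_close, d, low=-0.005, high=0.0055):
    """Return 1 (rise), 0 (fall), or None (sample removed) for trading day d."""
    change = adj_close[d] / adj_close[d - 1] - 1.0
    if change > high:
        return 1
    if change <= low:
        return 0
    return None  # minor movements between the two thresholds are removed from the dataset

# Example: a stock closing at 100.0 then 101.0 yields a 1% rise -> label 1
prices = {0: 100.0, 1: 101.0}
assert movement_label(prices, 1) == 1
```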
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
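Before the lower bound is assembled below, the two computational ingredients just introduced, reparameterized sampling and the KL term between the approximate posterior and the prior, can be illustrated with a small NumPy sketch. It assumes diagonal Gaussian distributions as in the VMD; the function names, shapes, and values are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the released StockNet code): reparameterized sampling from a
# diagonal Gaussian posterior, and the closed-form KL divergence against a diagonal
# Gaussian prior, the two ingredients of the neural approximation described above.

def sample_z(mu, log_var, rng=np.random.default_rng(0)):
    # z = mu + sigma * eps, eps ~ N(0, I)  (reparameterization trick)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    # D_KL[ N(mu_q, var_q) || N(mu_p, var_p) ] summed over latent dimensions
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

mu_q, log_var_q = np.zeros(4), np.zeros(4)      # posterior parameters (placeholders)
mu_p, log_var_p = np.full(4, 0.1), np.zeros(4)  # prior parameters (placeholders)
z = sample_z(mu_q, log_var_q)
kl = kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p)
```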
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}} (30).", "Baselines and Proposed Models We construct the following five baselines in different genres: • RAND: a naive predictor making a random guess of up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004).", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016).", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015).", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018).", "To analyze all the primary components of StockNet in detail, in addition to HEDGEFUNDANALYST, the fully-equipped StockNet, we also construct the following four variations: • TECHNICALANALYST: the generative StockNet using only historical prices.", "Table 1 (Accuracy / MCC): ARIMA (Brown, 2004) 51.39 / -0.020588; FUNDAMENTALANALYST 58.23 / 0.071704; RANDFOREST (Pagolu et al., 2016) 53.08 / 0.012929; INDEPENDENTANALYST 57.54 / 0.036610; TSLDA (Nguyen and Shirai, 2015) 54.07 / 0.065382; DISCRIMINATIVEANALYST 56.15 / 0.056493; HAN (Hu et al., 2018) 57.64 / 0.051800; HEDGEFUNDANALYST 58.23 / 0.080796. • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016), we set z_t = µ_t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, an accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015).", "We show the performance of the baselines and our proposed models in Table 1.", "TSLDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDANALYST, achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TSLDA and HAN by 4.16 and 0.59 in accuracy, and by 0.015414 and 0.028996 in MCC, respectively.", "Though slightly better than a random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similarly using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared with ARIMA.", "We believe there are two major reasons: (1) TECHNICALANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANALYST gains exceptionally competitive results, with only 0.009092 less in MCC than HEDGEFUNDANALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirms the positive effects of tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two sources of market information, HEDGEFUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANALYST are not from enlarging the networks, demonstrating that modeling the underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-14
Baselines and variants
• RAND: a naive predictor making • ARIMA: Autoregressive Integrated • TSLDA (Nguyen and Shirai, 2015) • TECHNICALANALYST: from only prices • FUNDAMENTALANALYST: from only tweets • INDEPENDENTANALYST: optimizing only • DISCRIMINATIVEANALYST: a discriminative
• RAND: a naive predictor making • ARIMA: Autoregressive Integrated • TSLDA (Nguyen and Shirai, 2015) • TECHNICALANALYST: from only prices • FUNDAMENTALANALYST: from only tweets • INDEPENDENTANALYST: optimizing only • DISCRIMINATIVEANALYST: a discriminative
[]
GEM-SciDuet-train-113#paper-1300#slide-15
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We address these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike discriminative or topic modeling approaches, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
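A small sketch of the ticker-regex tweet retrieval described above and the NLTK Twitter-mode preprocessing described next. The regex follows the "\$GOOG\b" example; the TweetTokenizer settings are assumptions rather than the authors' exact preprocessing configuration.

```python
import re
from nltk.tokenize import TweetTokenizer

# Sketch: retrieve stock-specific tweets via a NASDAQ-ticker regex, then tokenize them
# with NLTK's Twitter-aware tokenizer. Tokenizer options below are illustrative choices.

def stock_tweets(tweets, ticker="GOOG"):
    pattern = re.compile(r"\$" + re.escape(ticker) + r"\b", re.IGNORECASE)
    return [t for t in tweets if pattern.search(t)]

tokenizer = TweetTokenizer(preserve_case=False, reduce_len=True)
corpus = stock_tweets(["$GOOG beats earnings expectations", "Nice weather today"])
tokens = [tokenizer.tokenize(t) for t in corpus]
```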
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-15
Results
Table: Baseline models vs. StockNet variations (Acc., MCC) - The accuracy of 56% is generally reported as a satisfying result - ARIMA does not yield satisfying results - Two best baselines: TSLDA (best MCC) and HAN (best accuracy) - Two information sources (tweets and prices) are both beneficial - Generative framework incorporates latent driven factors, benefiting prediction
Table: Baseline models vs. StockNet variations (Acc., MCC) - The accuracy of 56% is generally reported as a satisfying result - ARIMA does not yield satisfying results - Two best baselines: TSLDA (best MCC) and HAN (best accuracy) - Two information sources (tweets and prices) are both beneficial - Generative framework incorporates latent driven factors, benefiting prediction
[]
GEM-SciDuet-train-113#paper-1300#slide-16
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-16
Effects of temporal auxiliary
- The auxiliary weight α controls overall auxiliary effects on model training - Our models do not linearly benefit from incorporating temporal auxiliary - Tweaking α acts as a trade-off between focusing on the main target and generalizing by denoising
- The auxiliary weight α controls overall auxiliary effects on model training - Our models do not linearly benefit from incorporating temporal auxiliary - Tweaking α acts as a trade-off between focusing on the main target and generalizing by denoising
[]
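The variational movement decoder in the paper content above samples the latent driven factor with the reparameterization trick of Eq. (17), z_t = μ_t + δ_t ⊙ ε with ε ~ N(0, I). Below is a minimal NumPy sketch of that single step; the latent size and the values are assumptions for illustration only.

```python
import numpy as np

def reparameterize(mu: np.ndarray, log_delta_sq: np.ndarray,
                   rng: np.random.Generator) -> np.ndarray:
    """z = mu + delta * eps with eps ~ N(0, I), as in Eq. (17).
    log_delta_sq is log(delta^2), matching Eq. (15), so delta is
    recovered as exp(0.5 * log_delta_sq)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_delta_sq) * eps

rng = np.random.default_rng(0)
latent_size = 8                        # assumed size, for illustration only
mu = np.zeros(latent_size)
log_delta_sq = np.full(latent_size, -2.0)
z = reparameterize(mu, log_delta_sq, rng)
print(z.shape)  # (8,)
```

At decoding time the paper sets the prior sample to its mean (z_t = μ_t), so the stochastic draw above applies to the posterior during training.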
GEM-SciDuet-train-113#paper-1300#slide-17
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-17
Summary
I We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media I Better way to integrate fundamental information and technical indicators I Other market signals, e.g. financial disclosures, periodic analyst reports and company profiles I Investment simulation with modern portfolio theory I Dataset is available at https://github.com/yumoxu/stocknet-dataset
I We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media I Better way to integrate fundamental information and technical indicators I Other market signals, e.g. financial disclosures, periodic analyst reports and company profiles I Investment simulation with modern portfolio theory I Dataset is available at https://github.com/yumoxu/stocknet-dataset
[]
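As an illustrative aside (not part of the original record): the evaluation described in the paper content above reports accuracy and the Matthews Correlation Coefficient (Eq. 30). The minimal Python sketch below shows how MCC can be computed from a binary confusion matrix; the square-root denominator and the zero-division convention are assumptions following the standard definition, and the example counts are invented for illustration rather than taken from the paper.

```python
import math

def mcc(tp: int, fp: int, tn: int, fn: int) -> float:
    """Matthews Correlation Coefficient for binary up/down movement predictions.

    Implements MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn));
    returns 0.0 when any marginal count is zero (a common convention, assumed here).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Illustrative counts only (not the paper's actual confusion matrix):
print(round(mcc(tp=1100, fp=780, tn=1066, fn=774), 6))
```

MCC is used alongside accuracy in the record above because it is robust to class skew, which matters when the up/down classes are not perfectly balanced.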
GEM-SciDuet-train-113#paper-1300#slide-18
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-18
Appendix Market Information Encoder
Temporal input: x_t = [c_t, p_t] I Multiple tweets with varied quality I Message embedding: Bi-GRU I Corpus embedding: messages, c_t = M_t u_t with u_t = softmax(w_u tanh(W_{m,u} M_t)) I Price signals: the adjusted closing, highest and lowest prices, p~_t = [p^c_t, p^h_t, p^l_t] I Normalization: p_t = p~_t / p^c_{t-1} - 1
Temporal input: x_t = [c_t, p_t] I Multiple tweets with varied quality I Message embedding: Bi-GRU I Corpus embedding: messages, c_t = M_t u_t with u_t = softmax(w_u tanh(W_{m,u} M_t)) I Price signals: the adjusted closing, highest and lowest prices, p~_t = [p^c_t, p^h_t, p^l_t] I Normalization: p_t = p~_t / p^c_{t-1} - 1
[]
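As an illustrative aside (not part of the original record): the Market Information Encoder summarized above composes a corpus embedding c_t by attending over message embeddings (Eqs. 7-8) and concatenates it with normalized price features to form x_t = [c_t, p_t]. The NumPy sketch below is a hedged reading of those equations; parameter shapes, the implicit transpose of w_u, and all names are assumptions inferred from the extracted text, not the authors' TensorFlow implementation.

```python
import numpy as np

def corpus_embedding(M: np.ndarray, W_mu: np.ndarray, w_u: np.ndarray) -> np.ndarray:
    """c_t = M_t u_t with u_t = softmax(w_u^T tanh(W_{m,u} M_t)).

    M: (d_m, K) message embeddings for one trading day;
    W_mu: (d_m, d_m); w_u: (d_m,). Shapes are assumed from the paper text.
    """
    scores = w_u @ np.tanh(W_mu @ M)        # (K,) attention logits over messages
    u = np.exp(scores - scores.max())
    u /= u.sum()                            # softmax, Eq. (7)
    return M @ u                            # corpus embedding c_t, Eq. (8)

def normalize_prices(p_raw: np.ndarray, prev_close: float) -> np.ndarray:
    """p_t = p~_t / p^c_{t-1} - 1 for the adjusted closing, highest and lowest prices."""
    return p_raw / prev_close - 1.0

# Toy usage: build one temporal input x_t = [c_t, p_t]
rng = np.random.default_rng(0)
d_m, K = 100, 5
M = rng.normal(size=(d_m, K))
c_t = corpus_embedding(M, rng.normal(size=(d_m, d_m)), rng.normal(size=d_m))
p_t = normalize_prices(np.array([101.2, 103.0, 99.8]), prev_close=100.0)
x_t = np.concatenate([c_t, p_t])
print(x_t.shape)  # (103,)
```

In StockNet, a sequence of such x_t vectors would then be consumed by the GRU-based variational movement decoder described in the paper content.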
GEM-SciDuet-train-113#paper-1300#slide-19
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
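The message-level attention (Eqs. 7-8) and the price normalisation just described can be sketched as below; shapes and variable names are our assumptions, not the authors' implementation.

import numpy as np

def corpus_embedding(M_t, W_mu, w_u):
    """M_t: (d_m, K) message embeddings -> (d_m,) corpus embedding c_t (Eqs. 7-8)."""
    scores = w_u @ np.tanh(W_mu @ M_t)            # (K,) attention logits
    u_t = np.exp(scores) / np.exp(scores).sum()   # softmax over the K messages
    return M_t @ u_t                              # weighted sum of message embeddings

def normalise_price(p_raw_t, adj_close_prev):
    """p_raw_t = [adjusted close, high, low] on day t, scaled by the last close."""
    return np.asarray(p_raw_t) / adj_close_prev - 1.0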
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
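The evaluation metric in Eq. (30) is a standard confusion-matrix statistic; a minimal sketch of it (our own helper, not the paper's code) is:

import math

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient as in Eq. (30)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0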
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-19
Appendix Variational Inference
Sum over t of E_q[ log p_θ(y_t | x_≤t, z_≤t) ] − D_KL[ q_φ(z_t | z_<t, x_≤t, y_t) || p_θ(z_t | z_<t, x_≤t) ] ≤ log p_θ(y | X), where the likelihood term p_θ(y_t | x_≤t, z_≤t) = p_θ(y_t | x_≤t, z_t) if t < T, and p_θ(y_T | X, Z) if t = T.
Sum over t of E_q[ log p_θ(y_t | x_≤t, z_≤t) ] − D_KL[ q_φ(z_t | z_<t, x_≤t, y_t) || p_θ(z_t | z_<t, x_≤t) ] ≤ log p_θ(y | X), where the likelihood term p_θ(y_t | x_≤t, z_≤t) = p_θ(y_t | x_≤t, z_t) if t < T, and p_θ(y_T | X, Z) if t = T.
[]
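The slide above summarises the variational inference used in the VMD. Two building blocks behind it, the reparameterisation of z_t (Eq. 17) and the KL term between the diagonal Gaussian posterior and prior in the lower bound (Eq. 11), can be sketched as follows; all names are our assumptions and this is not the released StockNet implementation.

import numpy as np

def reparameterise(mu, log_var, rng=np.random):
    """z = mu + sigma * eps with eps ~ N(0, I), as in Eq. (17)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """KL[ N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ], summed over dimensions."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)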
GEM-SciDuet-train-113#paper-1300#slide-20
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
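For the hybrid objective described above (Eqs. 27-29), the weighting of per-day objectives by the reused attention vector can be sketched as follows; the array shapes and names are ours.

import numpy as np

def weighted_objective(f_batch, v_star_batch, alpha):
    """f_batch: (N, T) temporal objectives; v_star_batch: (N, T-1) attention weights."""
    ones = np.ones((f_batch.shape[0], 1))
    v = np.concatenate([alpha * v_star_batch, ones], axis=1)  # v = [alpha * v*, 1], Eq. (28)
    return np.mean(np.sum(v * f_batch, axis=1))               # F in Eq. (29)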
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-20
Appendix Attentive Temporal Auxiliary
Temporal attention with two scoring components: information score v^i = w_i^T tanh(W_{g,i} G*) and dependency score v^d = g_T^T tanh(W_{g,d} G*).
Temporal attention with two scoring components: information score v^i = w_i^T tanh(W_{g,i} G*) and dependency score v^d = g_T^T tanh(W_{g,d} G*).
[]
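The slide above lists the two temporal attention scores (Eqs. 23-24); together with the softmax combination in Eq. (25) they can be sketched as below, with variable names of our choosing.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(G_star, g_T, W_gi, W_gd, w_i):
    """G_star: (d_g, T-1) integrated representations; g_T: (d_g,). Returns v* of shape (T-1,)."""
    v_info = w_i @ np.tanh(W_gi @ G_star)   # information score, Eq. (23)
    v_dep = g_T @ np.tanh(W_gd @ G_star)    # dependency score, Eq. (24)
    return softmax(v_info * v_dep)          # elementwise product + softmax, Eq. (25)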
GEM-SciDuet-train-113#paper-1300#slide-21
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the state-of-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
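A minimal Python sketch of the labeling rule in Eq. (1) and the threshold filtering described above; the function names and plain-float interface are illustrative assumptions rather than the paper's released code, and only the -0.5%/0.55% thresholds and the cashtag regex come from the text.

```python
import re

def movement_label(adj_close_prev, adj_close_cur):
    """Eq. (1): 1 = rise, 0 = fall; minor movements in (-0.5%, 0.55%]
    are removed from the dataset (38.72% of candidate targets)."""
    pct_change = adj_close_cur / adj_close_prev - 1.0
    if pct_change <= -0.005:   # movement percent <= -0.5%  -> fall
        return 0
    if pct_change > 0.0055:    # movement percent >  0.55%  -> rise
        return 1
    return None                # dropped target

# Stock-specific tweets are retrieved by regexes over NASDAQ ticker symbols.
GOOG_CASHTAG = re.compile(r"\$GOOG\b")

def mentions_goog(tweet_text):
    return GOOG_CASHTAG.search(tweet_text) is not None
```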
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
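The price normalization and the KL term referred to above can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' implementation: the function names are invented here, and the closed-form KL assumes the diagonal-Gaussian posterior and prior defined in Section 5.2.

```python
import numpy as np

def normalize_prices(raw_prices_t, adj_close_prev):
    """p_t = p~_t / p^c_{t-1} - 1: express [adj. close, high, low] as
    relative change w.r.t. the previous adjusted closing price."""
    return np.asarray(raw_prices_t, dtype=float) / adj_close_prev - 1.0

def diag_gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """D_KL[ N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ], the term
    D_KL[q_phi(z_t|...) || p_theta(z_t|...)] in the recurrent lower bound."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    per_dim = 0.5 * (logvar_p - logvar_q
                     + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return float(np.sum(per_dim))
```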
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
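Three of the quantities above are compact enough to sketch directly: the reparameterized sampling of Eq. (17), the temporally weighted objective of Eqs. (28)-(29), and the MCC metric of Eq. (30). Function names and shapes are assumptions for illustration, not the released StockNet code.

```python
import math
import numpy as np

def reparameterize(mu, logvar, rng=np.random):
    """Eq. (17): z = mu + delta * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def weighted_objective(f, v_star, alpha):
    """Eqs. (28)-(29): v = [alpha * v*, 1]; the per-sample objective is v . f,
    so auxiliary trading days 1..T-1 are scaled by alpha while the main
    target day T keeps weight 1."""
    v = np.concatenate([alpha * np.asarray(v_star), [1.0]])
    return float(np.dot(v, np.asarray(f)))

def mcc(tp, fp, tn, fn):
    """Eq. (30): Matthews Correlation Coefficient from a binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```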
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-21
Appendix Trading day Alignment
- We reorganize our inputs, including the tweet corpora and historical prices, by aligning them to the T trading days in a lag
- Specifically, on the t-th trading day, we recognize market signals from the corpus M_t in [d_{t-1}, d_t) and the historical prices p_t on d_{t-1}, for predicting the movement y_t on d_t
- We reorganize our inputs, including the tweet corpora and historical prices, by aligning them to the T trading days in a lag
- Specifically, on the t-th trading day, we recognize market signals from the corpus M_t in [d_{t-1}, d_t) and the historical prices p_t on d_{t-1}, for predicting the movement y_t on d_t
[]
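The trading-day alignment summarized in this slide can be illustrated with a short, self-contained sketch. The helper and the hard-coded calendar below are assumptions made for the example; only the target day 07/08/2012, the 5-day lag, and the weekend gap come from the paper's Figure 2.

```python
from datetime import date, timedelta

def align_trading_days(target_day, lag_days, trading_calendar):
    """Collect the T eligible trading days inside [target - lag + 1, target];
    index t in [1, T] then maps to aligned[t - 1]."""
    start = target_day - timedelta(days=lag_days - 1)
    return [d for d in sorted(trading_calendar) if start <= d <= target_day]

# Figure 2 example: target 07/08/2012 with a 5-day lag; 04/08 and 05/08 are a
# weekend, so only 03/08, 06/08 and 07/08 fall inside the lag window.
calendar = {date(2012, 8, 2), date(2012, 8, 3),
            date(2012, 8, 6), date(2012, 8, 7)}
print(align_trading_days(date(2012, 8, 7), 5, calendar))
# -> [datetime.date(2012, 8, 3), datetime.date(2012, 8, 6), datetime.date(2012, 8, 7)]
```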
GEM-SciDuet-train-113#paper-1300#slide-22
1300
Stock Movement Prediction from Tweets and Historical Prices
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the stateof-the-art performance of our proposed model on a new stock movement prediction dataset which we collected. 1
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240 ], "paper_content_text": [ "Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018) .", "We present a model to predict stock price movement from tweets and historical stock prices.", "In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative.", "Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013) .", "With the prevalence of deep neural networks (Le and Mikolov, 2014) , eventdriven approaches were studied with structured event representations (Ding et al., 2014 (Ding et al., , 2015 .", "More recently, Hu et al.", "(2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction.", "However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999) .", "Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015) .", "Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness.", "However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables.", "In essence, stock movement prediction is a time series problem.", "The significance of the temporal dependency between movement predictions is not addressed in existing NLP research.", "For instance, when a company suffers from a major scandal on a trading day d 1 , generally, its stock price will have a downtrend in the coming trading days until day d 2 , i.e.", "[d 1 , d 2 ].", "2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d 1 , d 2 ].", "Otherwise, the accuracy in this interval might be harmed.", "This predictive dependency is a result of the fact that public information, e.g.", "a company scandal, needs time to be absorbed into movements over time (Luss and d'Aspremont, 
2015) , and thus is largely shared across temporally-close predictions.", "Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose StockNet, a deep generative model for stock movement prediction.", "To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables.", "Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014) , we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2).", "To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction.", "To fully exploit market information, StockNet directly learns from data without pre-extracting structured events.", "We build market sources by referring to both fundamental information, e.g.", "tweets, and technical features, e.g.", "historical stock prices (Section 5.1).", "3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window.", "We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3).", "We evaluate StockNet on a stock movement prediction task with a new dataset that we collected.", "Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings.", "Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. 
Formally, we use the market information comprising of relevant social media corpora M, i.e.", "tweets, and historical prices, in the lag [d − ∆d, d − 1] where ∆d is a fixed lag size.", "We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 p c d > p c d−1 (1) where p c d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g.", "dividends and splits.", "4 The adjusted closing 3 To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company.", "On the contrary, technical analysis considers only the trends and patterns of the stock price.", "4 Technically, d − 1 may not be an eligible trading day and thus has no available price information.", "In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017) .", "Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.", "5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material).", "We observe that there are a number of targets with exceptionally minor movement ratios.", "In a three-way stock trend prediction task, a common practice is to categorize these movements to another \"preserve\" class by setting upper and lower thresholds on the stock price change (Hu et al., 2018) .", "Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, -0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds.", "Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively.", "The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes.", "We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test.", "There are two main components in our dataset: 6 a Twitter dataset and a historical price dataset.", "We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g.", "\"\\$GOOG\\b\" for Google Inc.. 
We preprocess tweet texts using the NLTK package (Bird et al., 2009 ) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days.", "Details will be provided in Section 4.", "We use d here to make the formulation easier to follow.", "5 https://finance.yahoo.com/industries 6 Our dataset is available at https://github.com/ yumoxu/stocknet-dataset.", "mode, including for tokenization and treatment of hyperlinks, hashtags and the \"@\" identifier.", "To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag.", "We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.", "7 4 Model Overview Figure 1 : Illustration of the generative process from observed market information to stock movements.", "We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior.", "We provide an overview of data alignment, model factorization and model components.", "As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days.", "However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training.", "As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998) , we make movement predictions not only for d, but also other trading days existing in the lag.", "For instance, as shown in Figure 2 , for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample.", "The relations between these predictions can thus be captured within the scope of a sample.", "As shown in the instance above, not every single date in a lag is an eligible trading day, e.g.", "weekends and holidays.", "To better organize and use the input, we regard the trading day, instead of the calendar day used in existing research, as the basic unit for building samples.", "To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d − ∆d + 1, d].", "For clarity, in the scope of one sample, we index these trading days with t ∈ [1, T ], 8 and each of them maps to an actual (absolute) trading day d t .", "We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days.", "Specifically, on the tth trading day, we recognize market signals from the corpus M t in [d t−1 , d t ) and the historical prices p t on d t−1 , for predicting the movement y t on d t .", "We provide an aligned sample for illustration in Figure 2 .", "As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y 1 , .", ".", ".", ", y T ].", "The main target is y T while the remainder y * = [y 1 , .", ".", ".", ", y T −1 ] serves as the temporal auxiliary target.", "We use these in addition to the main target to improve prediction accuracy (Section 5.3).", "We model 
the generative process shown in Figure 1.", "We encode observed market information as a random variable X = [x 1 ; .", ".", ".", "; x T ], from which we generate the latent driven factor Z = [z 1 ; .", ".", ".", "; z T ] for our prediction task.", "For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution p θ (y|X) = Z p θ (y, Z|X) instead of p θ (y T |X).", "We write the following factorization for generation, p θ (y, Z|X) = p θ (y T |X, Z) p θ (z T |z <T , X) (2) T −1 t=1 p θ (y t |x ≤t , z t ) p θ (z t |z <t , x ≤t , y t ) where for a given indexed matrix of T vectors [v 1 ; .", ".", ".", "; v T ], we denote by v <t and v ≤t the subma- trix [v 1 ; .", ".", ".", "; v t−1 ] and the submatrix [v 1 ; .", ".", ".", "; v t ], respectively.", "Since y * is known in generation, we use the posterior p θ (z t |z <t , x ≤t , y t ) , t < T to incorporate market signals more accurately and only use the prior p θ (z T |z <T , X) when generating z T .", "Besides, when t < T , y t is independent of z <t while our main prediction target, y T is made dependent on z <T through a temporal attention mechanism (Section 5.3).", "We show StockNet modeling the above generative process in Figure 2 .", "In a nutshell, StockNet Figure 2 : The architecture of StockNet.", "We use the main target of 07/08/2012 and the lag size of 5 for illustration.", "Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag.", "We use dashed lines to denote auxiliary components.", "Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective.", "z 1 z 2 z 3 h 2 h 3 02/08 Input Output h dec h enc µ log 2 z N (0, I) DKL ⇥ N (µ, 2 ) k N (0, I) ⇤ \" comprises three primary components following a bottom-up fashion, 1.", "Market Information Encoder (MIE) that encodes tweets and prices to X; 2.", "Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3.", "Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training.", "Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters.", "Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD.", "Each temporal input is defined as x t = [c t , p t ] (3) where c t and p t are the corpus embedding and the historical price vector, respectively.", "The basic strategy of acquiring c t is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality.", "To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well.", "Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively.", "Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈ [1, K], as W where W = s, ∈ [1, L], and its word embedding matrix as E = [e 1 ; e 2 ; .", ".", ".", "; e L ].", "We run the two GRUs as follows, − → h f = − −− → GRU(e 
f , − → h f −1 ) (4) ← − h b = ← −− − GRU(e b , ← − h b+1 ) (5) m = ( − → h + ← − h )/2 (6) where f ∈ [1, .", ".", ".", ", ], b ∈ [ , .", ".", ".", ", L].", "The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, − → h l , ← − h l , are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes-sage embedding matrix M t ∈ R dm×K .", "In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all M t in the batch with shared parameters.", "Tweet quality varies drastically.", "Inspired by the news-level attention (Hu et al., 2018) , we weight messages with their respective salience in collective intelligence measurement.", "Specifically, we first project M t non-linearly to u t , the normalized attention weight over the corpus, u t = ζ(w u tanh(W m,u M t )) (7) where ζ(·) is the softmax function and W m,u ∈ R dm×dm , w u ∈ R dm×1 are model parameters.", "Then we compose messages accordingly to acquire the corpus embedding, c t = M t u t .", "(8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vectorp t = p c t ,p h t ,p l t comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, p t =p t /p c t−1 − 1.", "We then concatenate c t with p t to form the final market information input x t for the decoder.", "Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X.", "Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq.", "(2) is intractable.", "Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e.", "the prior p θ (z t |z <t , x ≤t ) and the posterior p θ (z t |z <t , x ≤t , y t ), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014) .", "We first employ a variational approximator q φ (z t |z <t , x ≤t , y t ) for the intractable posterior.", "We observe the following factorization, q φ (Z|X, y) = T t=1 q φ (z t |z <t , x ≤t , y t ) .", "(9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the q φ (Z|X, y) and p θ (Z|X, y).", "Instead of optimizing it directly, we observe that the following equation naturally holds, log p θ (y|X) (10) =D KL [q φ (Z|X, y) p θ (Z|X, y)] +E q φ (Z|X,y) [log p θ (y|X, Z)] −D KL [q φ (Z|X, y) p θ (Z|X)] where D KL [q p] is the Kullback-Leibler divergence between the distributions q and p. 
Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq.", "(2, 9) into Eq.", "(10) , L (θ, φ; X, y) (11) = T t=1 E q φ( zt|z<t,x ≤t ,yt) log p θ (y t |x ≤t , z ≤t ) − D KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] ≤ log p θ (y|X) where the likelihood term Li et al.", "(2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization.", "In their work, priors are modeled with p θ (z t ) ∼ N (0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity.", "In Eq.", "(11), we provide a more theoretically rigorous lower bound where the KL term with p θ (z t |z <t , x ≤t ) plays a dynamic role in inferring dependent latent variables for every different model input and latent history.", "p θ (y t |x ≤t , z ≤t ) = p θ (y t |x ≤t , z t ) , if t < T p θ (y T |X, Z) , if t = T. (12) Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, h s t = GRU(x t , h s t−1 ).", "(13) We let the approximator q φ (z t |z <t , x ≤t , y t ) subject to a standard multivariate Gaussian distribution N (µ, δ 2 I).", "We calculate µ and δ as µ t = W φ z,µ h z t + b φ µ (14) log δ 2 t = W φ z,δ h z t + b φ δ (15) and the shared hidden representation h z t as h z t = tanh(W φ z [z t−1 , x t , h s t , y t ] + b φ z ) (16) where W φ z,µ , W φ z,δ , W φ z are weight matrices and b φ µ , b φ δ , b φ z are biases.", "Since Gaussian distribution belongs to the \"location-scale\" distribution family, we can further reparameterize z t as z t = µ t + δ t (17) where denotes an element-wise product.", "The noise term ∼ N (0, I) naturally involves stochastic signals in our model.", "Similarly, We let the prior p θ (z t |z <t , x ≤t ) ∼ N (µ , δ 2 I).", "Its calculation is the same as that of the posterior except the absence of y t and independent model parameters, µ t = W θ o,µ h z t + b θ µ (18) log δ 2 t = W θ o,δ h z t + b θ δ (19) where h z t = tanh(W θ z [z t−1 , x t , h s t ] + b θ z ).", "(20) Following Zhang et al.", "(2016) , differently from the posterior, we set the prior z t = µ t during decoding.", "Finally, we integrate deterministic features and the final prediction hypothesis is given as g t = tanh(W g [x t , h s t , z t ] + b g ) (21) y t = ζ(W y g t + b y ), t < T (22) where W g , W y are weight matrices and b g , b y are biases.", "The softmax function ζ(·) outputs the confidence distribution over up and down.", "As introduced in Section 4, the decoding of the main target y T depends on z <T and thus lies at the interface between VMD and ATA.", "We will elaborate on it in the next section.", "Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictionsỸ * = [ỹ 1 ; .", ".", ".", ";ỹ T −1 ], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism.", "Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3 , temporal attention calculates their weights in these two contributions by employing two scoring components: an information score and a dependency score.", "Specifically, v i = w i tanh(W g,i G * ) (23) v d = g T tanh(W g,d G * ) (24) v * = ζ(v i v d ) (25) where W g,i , W g,d ∈ R dg×dg , w i ∈ R dg×1 are model parameters.", "The integrated representations G * = [g 1 ; .", ".", ".", "; g T −1 ] and g 
T are reused as the final representations of temporal market information.", "The information score v i evaluates historical trading days as per their own information quality, while the dependency score v d captures their dependencies with our main target.", "We integrate the two and acquire the final normalized attention weight v * ∈ R 1×(T −1) by feeding their elementwise product into the softmax function.", "As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesisỹ T as y T = ζ(W T [Ỹ * v * , g T ] + b T ) (26) where W T is a weight matrix and b T is a bias.", "As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq.", "(11) and typically only one sample is used for gradient computation.", "To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈ R T ×1 where f t comprises a likelihood term and a KL term for a trading day t, f t = log p θ (y t |x ≤t , z ≤t ) (27) − λD KL [q φ (z t |z <t , x ≤t , y t ) p θ (z t |z <t , x ≤t )] where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈ (0, 1] to gradually release the KL regularization effect in the training procedure.", "Then we reuse v * to build the final temporal weight vector v ∈ R 1×T , v = [αv * , 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈ [0, 1] to control the overall auxiliary effects on the model training.", "α is tuned on the development set and its effects will be discussed at length in Section 6.5.", "Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N n v (n) f (n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary.", "We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update.", "Experiments In this section, we detail our experimental setup and results.", "Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.", "9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped.", "Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory).", "We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150.", "All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero.", "We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Following Bowman et al.", "(2016), we use the input dropout rate of 0.3 to regularize latent variables.", "Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.", "Evaluation Metrics Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015) , we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics.", "MCC avoids bias due to data skew.", "Given the confusion matrix tp fn fp tn containing the number of samples classified as true positive, false positive, true negative 
and false negative, MCC is calculated as MCC = tp × tn − fp × fn (tp + fp)(tp + fn)(tn + fp)(tn + fn) .", "(30) Baselines and Proposed Models We construct the following five baselines in different genres, 10 • RAND: a naive predictor making random guess in up or down.", "• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004) .", "• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016) .", "• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015) .", "• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018) .", "To make a detailed analysis of all the primary components in StockNet, in addition to HEDGE-FUNDANALYST, the fully-equipped StockNet, we also construct the following four variations, • TECHNICALANALYST: the generative StockNet using only historical prices.", "(Brown, 2004) 51.39 -0.020588 FUNDAMENTALANALYST 58.23 0.071704 RANDFOREST (Pagolu et al., 2016) 53.08 0.012929 INDEPENDENTANALYST 57.54 0.036610 TSLDA (Nguyen and Shirai, 2015) 54.07 0.065382 DISCRIMINATIVEANALYST 56.15 0.056493 HAN (Hu et al., 2018) 57.64 0.051800 HEDGEFUNDANALYST 58.23 0.080796 • DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective.", "Following Zhang et al.", "(2016) , we set z t = µ t to take out the effects of the KL term.", "Results Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, the accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015) .", "We show the performance of the baselines and our proposed models in Table 1 .", "TLSDA is the best baseline in MCC while HAN is the best baseline in accuracy.", "Our model, HEDGEFUNDAN-ALYST achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TLSDA and HAN with 4.16, 0.59 in accuracy, and 0.015414, 0.028996 in MCC, respectively.", "Though slightly better than random guess, classic technical analysis, e.g.", "ARIMA, does not yield satisfying results.", "Similar in using only historical prices, TECHNICALANALYST shows an obvious advantage in this task compared ARIMA.", "We believe there are two major reasons: (1) TECHNICAL-ANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity.", "It is worth noting that FUNDAMENTALANA-LYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDAN-ALYST.", "The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirm the positive effects from tweets and historical prices in stock movement prediction, respectively.", "As an effective ensemble of the two market information, HEDGE-FUNDANALYST gains even better performance.", "Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANA-LYST are not from enlarging the networks, demonstrating that modeling underlying market status explicitly with latent driven factors indeed benefits stock movement prediction.", "The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary.", "However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next 
section.", "Effects of Temporal Auxiliary We provide a detailed discuss of how the temporal auxiliary affects model performance.", "As introduced in Eq.", "(28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary to our model.", "Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α.", "As shown in Figure 4 , enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7.", "In fact, objectivelevel auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g.", "affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management.", "Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise.", "In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.", "Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017) .", "Compared with HEDGEFUND-ANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance.", "Since y * also involves in generating y T through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising.", "Therefore, as shown in Figure 4 , our models do not linearly benefit from incorporating temporal auxiliary.", "In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DIS-CRIMINATIVEANALYST rising up temporarily at 0.3.", "After that, the curves ascend abruptly to their maximums, then keep descending till α = 1.", "Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g.", "INDEPENDENTANALYST.", "Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task.", "We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work.", "Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset." ] }
{ "paper_header_number": [ "1", "2", "3", "5", "5.1", "5.2", "5.3", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7" ], "paper_header_content": [ "Introduction", "Problem Formulation", "Data Collection", "Model Components", "Market Information Encoder", "Variational Movement Decoder", "Attentive Temporal Auxiliary", "Experiments", "Training Setup", "Evaluation Metrics", "Baselines and Proposed Models", "Results", "Effects of Temporal Auxiliary", "Conclusion" ] }
GEM-SciDuet-train-113#paper-1300#slide-22
Appendix Denoising Regularizer
I Objective-level auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous Affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise I Temporal auxiliary tasks help to Filter market sources in the lag as per their respective aligned auxiliary movements Encode more useful information into the latent driven factor Z
I Objective-level auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous Affected by bad news, tweets on earlier days are negative but turn to positive due to timely crises management Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of rise movement, which is likely to result in pure noise I Temporal auxiliary tasks help to Filter market sources in the lag as per their respective aligned auxiliary movements Encode more useful information into the latent driven factor Z
[]
GEM-SciDuet-train-114#paper-1307#slide-0
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
ATT warn me when it's going to be freezing tomorrow BP warn me when it's going to be freezing tomorrow if str number[0] == ' STR ': ATT if first element of str number equals a string STR .", "BP if first element of str number equals a string STR .", "start = 0 ATT start is an integer 0 .", "BP start is an integer 0 .", "if name.startswith(' STR '): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7 : Uncertainty interpretation for ATTEN-TION (ATT) and BACKPROP (BP) .", "The first line in each group is the model prediction.", "Predicted tokens and input words with large scores are shown in red and blue, respectively.", "where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists τ 1 = [q 7 , q 8 , q 2 , q 3 ] and τ 2 = [q 7 , q 8 , q 3 , q 4 ] is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token a t if its uncertainty score u at is greater than 0.5 * avg{u a t } |a| t =1 .", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
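A hedged sketch of the dropout-perturbation metric described in the paper content above (Section 4.1, Algorithm 1): keep dropout active at test time, run F stochastic forward passes, and use the variance of the predicted probability as an uncertainty signal. The PyTorch framework and the toy scorer network below are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyScorer(nn.Module):
    """Stand-in for a network producing p(a|q); any model containing dropout layers."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def dropout_uncertainty(model, x, num_passes=30):
    model.train()  # keep dropout active at inference time (MC dropout)
    with torch.no_grad():
        probs = torch.stack([model(x) for _ in range(num_passes)])
    # variance over passes is the uncertainty metric; mean is the smoothed score
    return probs.var(dim=0), probs.mean(dim=0)

model = ToyScorer()
x = torch.randn(1, 16)  # placeholder for an encoded input q
variance, mean_prob = dropout_uncertainty(model, x)
print(variance.item(), mean_prob.item())
```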
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-0
Neural Semantic Parsing NSP
Model used in this work Android_phone_call, Any_phone_call_missed Archive your missed LSTM LSTM <then> Google_drive, Add_row_to_spreadsheet, calls from Android to ((Spreadsheet_name Google Drive missed) (Formatted_row )) (Drivefolder_path IFTTT/Android)) Input Sequence Sequence Logical Utterance Encoder Decoder Form
Model used in this work Android_phone_call, Any_phone_call_missed Archive your missed LSTM LSTM <then> Google_drive, Add_row_to_spreadsheet, calls from Android to ((Spreadsheet_name Google Drive missed) (Formatted_row )) (Drivefolder_path IFTTT/Android)) Input Sequence Sequence Logical Utterance Encoder Decoder Form
[]
GEM-SciDuet-train-114#paper-1307#slide-1
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
ATT warn me when it's going to be freezing tomorrow BP warn me when it's going to be freezing tomorrow if str number[0] == ' STR ': ATT if first element of str number equals a string STR .", "BP if first element of str number equals a string STR .", "start = 0 ATT start is an integer 0 .", "BP start is an integer 0 .", "if name.startswith(' STR '): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7 : Uncertainty interpretation for ATTEN-TION (ATT) and BACKPROP (BP) .", "The first line in each group is the model prediction.", "Predicted tokens and input words with large scores are shown in red and blue, respectively.", "where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists τ 1 = [q 7 , q 8 , q 2 , q 3 ] and τ 2 = [q 7 , q 8 , q 3 , q 4 ] is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token a t if its uncertainty score u at is greater than 0.5 * avg{u a t } |a| t =1 .", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
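A minimal runnable sketch of the dropout-perturbation metric the paper text above describes: dropout is kept active at test time, F stochastic forward passes are run, and the variance of the probabilities assigned to the predicted tokens serves as the uncertainty score. The toy classifier, tensor shapes, and token ids below are illustrative assumptions standing in for the authors' attention-based sequence-to-sequence parser, not their implementation.

```python
# Illustrative sketch only: a toy classifier with dropout stands in for the
# paper's seq2seq parser.  Dropout stays active at inference (approximate
# Bayesian inference), F forward passes are run, and the variance of the
# probabilities of the predicted tokens is used as the uncertainty metric.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.1),          # the perturbation source, kept on at test time
    nn.Linear(32, 5),
    nn.LogSoftmax(dim=-1),
)

def dropout_uncertainty(model, x, predicted_ids, num_passes=30):
    """Variance of token probabilities over num_passes perturbed forward passes."""
    model.train()               # keep dropout layers active, as in the paper
    samples = []
    with torch.no_grad():
        for _ in range(num_passes):
            probs = model(x).exp()                               # [steps, vocab]
            p_tok = probs.gather(1, predicted_ids.unsqueeze(1)).squeeze(1)
            samples.append(p_tok)                                # [steps]
    samples = torch.stack(samples)                               # [passes, steps]
    token_unc = samples.var(dim=0)          # token-level metric u_{a_t}
    seq_unc = samples.prod(dim=1).var()     # sequence-level metric
    return seq_unc.item(), token_unc

x = torch.randn(7, 16)                      # 7 hypothetical decoding steps
predicted_ids = torch.randint(0, 5, (7,))   # hypothetical predicted token ids
seq_u, tok_u = dropout_uncertainty(model, x, predicted_ids)
print(seq_u, tok_u.mean().item(), tok_u.max().item())
```

In the paper the same variance is taken over p(a_t | a_<t, q) from the decoder at each time step, with the average and maximum over tokens used as sequence-level features; the Gaussian-noise variant replaces the Bernoulli dropout mask with additive or multiplicative Gaussian perturbations.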
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-1
Confidence Modeling is Important
Most models always tend to guess some outputs We also want to know how confident they are Alexa, buy me something from
Most models always tend to guess some outputs We also want to know how confident they are Alexa, buy me something from
[]
GEM-SciDuet-train-114#paper-1307#slide-2
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
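A hedged sketch of the scoring step the abstract summarizes: the per-prediction uncertainty metrics become features of a gradient tree boosting regressor (XGBoost in the paper) fitted on held-out data to each prediction's F1, with a logistic objective so that scores fall in (0, 1). The random features and targets below are placeholders for the real metrics.

```python
# Sketch only: random numbers stand in for the real confidence metrics
# (dropout variance, noise variance, posterior/perplexity, #unknown tokens,
# K-best variance, decoding entropy, ...) and for the per-prediction F1 targets.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_examples, n_metrics = 500, 12

X_heldout = rng.normal(size=(n_examples, n_metrics))  # metric features per prediction
y_f1 = rng.uniform(size=n_examples)                   # target: F1 of each prediction

scorer = XGBRegressor(
    n_estimators=50,            # paper selects from {20, 50}
    max_depth=4,                # paper selects from {3, 4, 5}
    subsample=0.8,
    objective="reg:logistic",   # logistic link keeps predicted scores in (0, 1)
)
scorer.fit(X_heldout, y_f1)

X_new = rng.normal(size=(3, n_metrics))
print(scorer.predict(X_new))    # confidence scores s(q, a)
```

Fitting the scorer on a held-out set rather than on the parser's own training data follows the paper's recipe for avoiding overfitting.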
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
ATT warn me when it's going to be freezing tomorrow BP warn me when it's going to be freezing tomorrow if str number[0] == ' STR ': ATT if first element of str number equals a string STR .", "BP if first element of str number equals a string STR .", "start = 0 ATT start is an integer 0 .", "BP start is an integer 0 .", "if name.startswith(' STR '): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7 : Uncertainty interpretation for ATTEN-TION (ATT) and BACKPROP (BP) .", "The first line in each group is the model prediction.", "Predicted tokens and input words with large scores are shown in red and blue, respectively.", "where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists τ 1 = [q 7 , q 8 , q 2 , q 3 ] and τ 2 = [q 7 , q 8 , q 3 , q 4 ] is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token a t if its uncertainty score u at is greater than 0.5 * avg{u a t } |a| t =1 .", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
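The uncertainty backpropagation rule for a fully-connected layer, quoted in the paper text above, can be checked with a small numeric sketch: an output neuron's uncertainty is redistributed to its inputs in proportion to |W_{i,k} x_k|, ignoring the nonlinearity and bias. Matrix sizes and uncertainty values below are made up for illustration.

```python
# Numeric sketch of the fully-connected redistribution rule:
#   u_x[k] = sum_i ( |W[i,k] * x[k]| / sum_j |W[i,j] * x[j]| ) * u_z[i]
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))          # weights of a layer with 5 inputs, 3 outputs
x = rng.normal(size=5)               # input activations
u_z = np.array([0.6, 0.1, 0.3])      # uncertainty already assigned to the outputs

contrib = np.abs(W * x)                                   # |W[i,j] * x[j]|
ratios = contrib / contrib.sum(axis=1, keepdims=True)     # each row sums to 1
u_x = ratios.T @ u_z                                      # redistribute to inputs

print(u_x)                    # per-input uncertainty scores
print(u_x.sum(), u_z.sum())   # both sum to 1.0: the uncertainty mass is conserved
```

Because each row of contribution ratios sums to one, the total uncertainty mass is preserved as it flows back toward the input word vectors, which is what allows the per-token scores to be normalized at the end of Algorithm 2.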
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-2
Motivation
From the perspective of applications Generate clarification questions to verify the results Nonlinearity of neural networks Unclear for neural models (Johansen and Socher, 2017) Lack of explicit lexicons or templates Difficult to trace errors and inconsistencies
From the perspective of applications Generate clarification questions to verify the results Nonlinearity of neural networks Unclear for neural models (Johansen and Socher, 2017) Lack of explicit lexicons or templates Difficult to trace errors and inconsistencies
[]
GEM-SciDuet-train-114#paper-1307#slide-3
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
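The abstract's claim about identifying which input tokens drive uncertainty is evaluated in the paper with an overlap@K measure between the method's top-K tokens and those of a noise-injection proxy gold standard; a small sketch of that computation follows. The token scores below are hypothetical.

```python
# overlap@K between the tokens our method ranks as most uncertain and the
# tokens the noise-injection proxy gold standard ranks highest.
def overlap_at_k(method_scores, gold_scores, k):
    top_method = sorted(method_scores, key=method_scores.get, reverse=True)[:k]
    top_gold = sorted(gold_scores, key=gold_scores.get, reverse=True)[:k]
    return len(set(top_method) & set(top_gold)) / k

# Hypothetical per-token scores for the input "post calendar event to facebook"
method_scores = {"post": 0.05, "calendar": 0.10, "event": 0.15, "to": 0.20, "facebook": 0.50}
gold_scores   = {"post": 0.10, "calendar": 0.05, "event": 0.25, "to": 0.15, "facebook": 0.45}

print(overlap_at_k(method_scores, gold_scores, k=2))   # 0.5  (only "facebook" overlaps)
print(overlap_at_k(method_scores, gold_scores, k=4))   # 0.75 ("facebook", "to", "event" overlap)
```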
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of top candidates (var(K-best)) are also very helpful because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade-off precision for coverage.", "Figure 3 shows how F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "F1 score improves monotonically for POSTERIOR and our method, which, however, achieves better performance when coverage is the same.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2) , attention scores r t,k can be used as soft alignments between the time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty scoreû qt for a token q t via: u qt ∝ |a| t=1 r t,k u at (8) where u at is the uncertainty score of the predicted token a t (Equation (7) ), and |q| t=1û qt = 1.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not a priori know which tokens in the natural language input contribute to uncertainty and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise to the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token q t (Equation (6) addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise to one token at a time 1 instead of all parameters (see Figure 1 ).", "Tokens identified as uncertain by the above procedure are considered gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) among tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method, and obtain a list τ 1 of K tokens with highest scores.", "We also obtain a list τ 2 of K tokens with highest ground-truth scores and measure the degree of overlap between these two lists: overlap@K = |τ 1 ∩ τ 2 | K Method IFTTT DJANGO @2 @4 @2 @4 ATTENTION 0.525 0.737 0.637 0.684 BACKPROP 0.608 0.791 0.770 0.788 Table 6 : Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for top 2 and 4 tokens.", "Best results are in bold.", "google calendar−any event starts THEN facebook −create a status message−(status message ({description})) ATT post calendar event to facebook BP post calendar event to facebook feed−new feed item−(feed url( url sports.espn.go.com)) THEN ... ATT espn mlb headline to readability BP espn mlb headline to readability weather−tomorrow's low drops below−(( temperature(0)) (degrees in(c))) THEN ... 
ATT warn me when it's going to be freezing tomorrow BP warn me when it's going to be freezing tomorrow if str number[0] == ' STR ': ATT if first element of str number equals a string STR .", "BP if first element of str number equals a string STR .", "start = 0 ATT start is an integer 0 .", "BP start is an integer 0 .", "if name.startswith(' STR '): ATT if name starts with an string STR , BP if name starts with an string STR , Table 7 : Uncertainty interpretation for ATTEN-TION (ATT) and BACKPROP (BP) .", "The first line in each group is the model prediction.", "Predicted tokens and input words with large scores are shown in red and blue, respectively.", "where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists τ 1 = [q 7 , q 8 , q 2 , q 3 ] and τ 2 = [q 7 , q 8 , q 3 , q 4 ] is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.", "Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight token a t if its uncertainty score u at is greater than 0.5 * avg{u a t } |a| t =1 .", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs, and message content), and ambiguous inputs.", "The examples show that BACKPROP is qualitatively better compared to ATTENTION; attention scores often produce inaccurate alignments while BACKPROP can utilize information flowing through the LSTMs rather than only relying on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
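The model-uncertainty metric described in the paper content above (section "Model Uncertainty", Algorithm 1) keeps dropout active at test time and scores uncertainty as the variance of the prediction probability across several stochastic forward passes. The sketch below illustrates that computation; `model.prob_tokens(q, a, dropout=True)`, assumed to return the per-token probabilities p(a_t | a_<t, q) under one dropout perturbation, is a hypothetical interface rather than part of the authors' released code.

```python
import numpy as np

def dropout_uncertainty(model, q, a, num_passes=30):
    """Model uncertainty via test-time dropout (Algorithm 1).

    Runs `num_passes` stochastic forward passes with dropout kept active and
    returns the variance of the prediction probabilities at the sequence
    level (Equation (6)) and the token level (Equation (7)).
    """
    seq_probs = []   # p(a|q) under each perturbed model
    tok_probs = []   # per-token p(a_t | a_<t, q) under each perturbed model
    for _ in range(num_passes):
        # Hypothetical call: per-token probabilities with dropout enabled.
        p_tokens = np.asarray(model.prob_tokens(q, a, dropout=True))
        tok_probs.append(p_tokens)
        seq_probs.append(p_tokens.prod())

    tok_var = np.stack(tok_probs).var(axis=0)   # one variance per predicted token
    return {
        "var(seq)": np.var(seq_probs),          # sequence-level metric
        "avg(token_var)": tok_var.mean(),       # token-level metrics used as features
        "max(token_var)": tok_var.max(),
    }
```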
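The same section also swaps Bernoulli dropout for Gaussian noise, perturbing a vector v either additively (v + g) or multiplicatively (v + v*g elementwise) with g drawn from N(0, sigma^2), and it lists posterior-probability metrics computed on the unperturbed model. A minimal sketch of both, assuming the per-token probabilities are available as a NumPy array:

```python
import numpy as np

def perturb_with_gaussian_noise(v, sigma=0.05, multiplicative=False):
    """Gaussian-noise perturbation of a parameter or activation vector v."""
    g = np.random.normal(0.0, sigma, size=v.shape)
    return v + v * g if multiplicative else v + g

def posterior_metrics(token_probs):
    """Posterior-probability metrics from unperturbed token probabilities."""
    token_probs = np.asarray(token_probs)
    log_probs = np.log(token_probs)
    return {
        "log p(a|q)": log_probs.sum(),       # sequence-level log probability
        "min p(a_t)": token_probs.min(),     # most uncertain predicted token
        "perplexity": -log_probs.mean(),     # per-token metric as defined in the text
    }
```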
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-3
Research Goal
Estimate confidence scores for NSP (higher score -> the prediction is more likely correct); which parts of the input contribute to uncertain predictions?
[]
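The "Research Goal" slide above names two goals. For the first, confidence estimation, the paper feeds the sentence- and token-level metrics into a gradient tree boosting model (XGBoost) trained on held-out examples to fit F1 scores, wrapped in a logistic function so that outputs fall in (0, 1). A minimal sketch; using the `reg:logistic` objective is one way to keep predictions in (0, 1) and is an assumption rather than the authors' exact configuration:

```python
import xgboost as xgb

def train_confidence_scorer(heldout_features, heldout_f1):
    """Fit a gradient-tree-boosting regressor from confidence metrics to F1.

    `heldout_features` is a matrix of uncertainty metrics (model, data, input)
    computed on a held-out set, not on the parser's training data;
    `heldout_f1` holds the corresponding F1 scores in [0, 1].
    """
    scorer = xgb.XGBRegressor(
        objective="reg:logistic",   # predictions constrained to (0, 1)
        n_estimators=50,
        max_depth=4,
        subsample=0.8,
    )
    scorer.fit(heldout_features, heldout_f1)
    return scorer

# At test time, scorer.predict(features) yields the confidence score s(q, a),
# which can be thresholded to trade coverage for precision.
```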
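For the second goal, tracing uncertainty back to input tokens, the "Uncertainty Interpretation" section backpropagates token-level uncertainty scores through the network using non-negative contribution ratios. The sketch below implements the redistribution rules for a fully-connected layer and for elementwise addition and multiplication; the function names are illustrative, and the nonlinearity and bias are ignored as in the text.

```python
import numpy as np

def backprop_fc(W, x, u_z, eps=1e-12):
    """Redistribute uncertainty u_z of z = sigma(Wx + b) onto the inputs x.

    Contribution ratio of x_k to z_i is |W_ik * x_k| / sum_j |W_ij * x_j|.
    """
    contrib = np.abs(W * x[np.newaxis, :])                  # |W_ij * x_j|, shape [|z|, |x|]
    ratios = contrib / (contrib.sum(axis=1, keepdims=True) + eps)
    return ratios.T @ u_z                                   # u_x, one score per x_k

def backprop_add(x, y, u_z, eps=1e-12):
    """Rules for z = x +/- y: split u_z in proportion to |x_k| and |y_k|."""
    ax, ay = np.abs(x), np.abs(y)
    total = ax + ay + eps
    return u_z * ax / total, u_z * ay / total

def backprop_mul(x, y, u_z, eps=1e-12):
    """Rules for elementwise z = x * y: split by |log|x_k|| and |log|y_k||."""
    lx = np.abs(np.log(np.abs(x) + eps))
    ly = np.abs(np.log(np.abs(y) + eps))
    total = lx + ly + eps
    return u_z * lx / total, u_z * ly / total
```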
GEM-SciDuet-train-114#paper-1307#slide-4
1307
Confidence Modeling for Neural Semantic Parsing
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions allowing users to interpret their model, and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
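The abstract above claims that the confidence model significantly outperforms posterior probability and that the backpropagation-based interpretation beats attention scores. In the experiments these claims are backed by Spearman's rho between confidence scores and F1, and by overlap@K between the K most uncertain input tokens under a method and under a noise-injection ground truth. A small sketch of both measures:

```python
import numpy as np
from scipy.stats import spearmanr

def confidence_correlation(confidence_scores, f1_scores):
    """Spearman's rho between predicted confidence and per-example F1."""
    rho, p_value = spearmanr(confidence_scores, f1_scores)
    return rho, p_value

def overlap_at_k(method_scores, ground_truth_scores, k=4):
    """overlap@K between the K highest-scoring tokens under two scorings."""
    top_method = set(np.argsort(method_scores)[-k:])
    top_gold = set(np.argsort(ground_truth_scores)[-k:])
    return len(top_method & top_gold) / k
```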
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231 ], "paper_content_text": [ "Introduction Semantic parsing aims to map natural language text to a formal meaning representation (e.g., logical forms or SQL queries).", "The neural sequenceto-sequence architecture Bahdanau et al., 2015) has been widely adopted in a variety of natural language processing tasks, and semantic parsing is no exception.", "However, despite achieving promising results (Dong and Lapata, 2016; Jia and Liang, 2016; , neural semantic parsers remain difficult to interpret, acting in most cases as a black box, not providing any information about what made them arrive at a particular decision.", "In this work, we explore ways to estimate and interpret the * Work carried out during an internship at Microsoft Research.", "model's confidence in its predictions, which we argue can provide users with immediate and meaningful feedback regarding uncertain outputs.", "An explicit framework for confidence modeling would benefit the development cycle of neural semantic parsers which, contrary to more traditional methods, do not make use of lexicons or templates and as a result the sources of errors and inconsistencies are difficult to trace.", "Moreover, from the perspective of application, semantic parsing is often used to build natural language interfaces, such as dialogue systems.", "In this case it is important to know whether the system understands the input queries with high confidence in order to make decisions more reliably.", "For example, knowing that some of the predictions are uncertain would allow the system to generate clarification questions, prompting users to verify the results before triggering unwanted actions.", "In addition, the training data used for semantic parsing can be small and noisy, and as a result, models do indeed produce uncertain outputs, which we would like our framework to identify.", "A widely-used confidence scoring method is based on posterior probabilities p (y|x) where x is the input and y the model's prediction.", "For a linear model, this method makes sense: as more positive evidence is gathered, the score becomes larger.", "Neural models, in contrast, learn a complicated function that often overfits the training data.", "Posterior probability is effective when making decisions about model output, but is no longer a good indicator of confidence due in part to the nonlinearity of neural networks (Johansen and Socher, 2017) .", "This observation motivates us to develop a confidence 
modeling framework for sequenceto-sequence models.", "We categorize the causes of uncertainty into three types, namely model uncertainty, data uncertainty, and input uncertainty and design different metrics to characterize them.", "We compute these confidence metrics for a given prediction and use them as features in a regression model which is trained on held-out data to fit prediction F1 scores.", "At test time, the regression model's outputs are used as confidence scores.", "Our approach does not interfere with the training of the model, and can be thus applied to various architectures, without sacrificing test accuracy.", "Furthermore, we propose a method based on backpropagation which allows to interpret model behavior by identifying which parts of the input contribute to uncertain predictions.", "Experimental results on two semantic parsing datasets (IFTTT, Quirk et al.", "2015; and DJANGO, Oda et al.", "2015) show that our model is superior to a method based on posterior probability.", "We also demonstrate that thresholding confidence scores achieves a good trade-off between coverage and accuracy.", "Moreover, the proposed uncertainty backpropagation method yields results which are qualitatively more interpretable compared to those based on attention scores.", "Related Work Confidence Estimation Confidence estimation has been studied in the context of a few NLP tasks, such as statistical machine translation (Blatz et al., 2004; Ueffing and Ney, 2005; Soricut and Echihabi, 2010) , and question answering (Gondek et al., 2012) .", "To the best of our knowledge, confidence modeling for semantic parsing remains largely unexplored.", "A common scheme for modeling uncertainty in neural networks is to place distributions over the network's weights (Denker and Lecun, 1991; MacKay, 1992; Neal, 1996; Blundell et al., 2015; Gan et al., 2017) .", "But the resulting models often contain more parameters, and the training process has to be accordingly changed, which makes these approaches difficult to work with.", "Gal and Ghahramani (2016) develop a theoretical framework which shows that the use of dropout in neural networks can be interpreted as a Bayesian approximation of Gaussian Process.", "We adapt their framework so as to represent uncertainty in the encoder-decoder architectures, and extend it by adding Gaussian noise to weights.", "Semantic Parsing Various methods have been developed to learn a semantic parser from natural language descriptions paired with meaning representations (Tang and Mooney, 2000; Zettlemoyer and Collins, 2007; Lu et al., 2008; Kwiatkowski et al., 2011; Andreas et al., 2013; Zhao and Huang, 2015) .", "More recently, a few sequence-to-sequence models have been proposed for semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016; and shown to perform competitively whilst eschewing the use of templates or manually designed features.", "There have been several efforts to improve these models including the use of a tree decoder (Dong and Lapata, 2016) , data augmentation (Jia and Liang, 2016; , the use of a grammar model (Xiao et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017; , coarse-tofine decoding (Dong and Lapata, 2018) , network sharing (Susanto and Lu, 2017; Herzig and Berant, 2017) , user feedback (Iyer et al., 2017) , and transfer learning (Fan et al., 2017) .", "Current semantic parsers will by default generate some output for a given input even if this is just a random guess.", "System results can thus be somewhat unexpected inadvertently 
affecting user experience.", "Our goal is to mitigate these issues with a confidence scoring model that can estimate how likely the prediction is correct.", "Neural Semantic Parsing Model In the following section we describe the neural semantic parsing model (Dong and Lapata, 2016; Jia and Liang, 2016; we assume throughout this paper.", "The model is built upon the sequence-to-sequence architecture and is illustrated in Figure 1 .", "An encoder is used to encode natural language input q = q 1 · · · q |q| into a vector representation, and a decoder learns to generate a logical form representation of its meaning a = a 1 · · · a |a| conditioned on the encoding vectors.", "The encoder and decoder are two different recurrent neural networks with long short-term memory units (LSTMs; Hochreiter and Schmidhuber 1997) which process tokens sequentially.", "The probability of generating the whole sequence p (a|q) is factorized as: p (a|q) = |a| t=1 p (a t |a <t , q) (1) where a <t = a 1 · · · a t−1 .", "Let e t ∈ R n denote the hidden vector of the encoder at time step t. It is computed via e t = f LSTM (e t−1 , q t ), where f LSTM refers to the LSTM unit, and q t ∈ R n is the word embedding … … … <s> … … … i) iii) i) ii) iv) Figure 1: We use dropout as approximate Bayesian inference to obtain model uncertainty.", "The dropout layers are applied to i) token vectors; ii) the encoder's output vectors; iii) bridge vectors; and iv) decoding vectors.", "of q t .", "Once the tokens of the input sequence are encoded into vectors, e |q| is used to initialize the hidden states of the first time step in the decoder.", "Similarly, the hidden vector of the decoder at time step t is computed by d t = f LSTM (d t−1 , a t−1 ), where a t−1 ∈ R n is the word vector of the previously predicted token.", "Additionally, we use an attention mechanism (Luong et al., 2015a) to utilize relevant encoder-side context.", "For the current time step t of the decoder, we compute its attention score with the k-th hidden state in the encoder as: r t,k ∝ exp{d t · e k } (2) where |q| j=1 r t,j = 1.", "The probability of generating a t is computed via: c t = |q| k=1 r t,k e k (3) d att t = tanh (W 1 d t + W 2 c t ) (4) p (a t |a <t , q) = softmax at W o d att t (5) where W 1 , W 2 ∈ R n×n and W o ∈ R |Va|×n are three parameter matrices.", "The training objective is to maximize the likelihood of the generated meaning representation a given input q, i.e., maximize (q,a)∈D log p (a|q), where D represents training pairs.", "At test time, the model's prediction for input q is obtained viâ a = arg max a p (a |q), where a represents candidate outputs.", "Because p (a|q) is factorized as shown in Equation (1), we can use beam search to generate tokens one by one rather than iterating over all possible results.", "Confidence Estimation Given input q and its predicted meaning representation a, the confidence model estimates Algorithm 1 Dropout Perturbation Input: q, a: Input and its prediction M: Model parameters 1: for i ← 1, · · · , F do 2:M i ← Apply dropout layers to M Figure 1 3: Run forward pass and computep(a|q;M i ) 4: Compute variance of {p(a|q;M i )} F i=1 Equation (6) score s (q, a) ∈ (0, 1).", "A large score indicates the model is confident that its prediction is correct.", "In order to gauge confidence, we need to estimate \"what we do not know\".", "To this end, we identify three causes of uncertainty, and design various metrics characterizing each one of them.", "We then feed these metrics into a regression model in order to 
predict s (q, a).", "Model Uncertainty The model's parameters or structures contain uncertainty, which makes the model less confident about the values of p (a|q).", "For example, noise in the training data and the stochastic learning algorithm itself can result in model uncertainty.", "We describe metrics for capturing uncertainty below: Dropout Perturbation Our first metric uses dropout (Srivastava et al., 2014) as approximate Bayesian inference to estimate model uncertainty (Gal and Ghahramani, 2016) .", "Dropout is a widely used regularization technique during training, which relieves overfitting by randomly masking some input neurons to zero according to a Bernoulli distribution.", "In our work, we use dropout at test time, instead.", "As shown in Algorithm 1, we perform F forward passes through the network, and collect the results {p(a|q; M i )} F i=1 whereM i represents the perturbed parameters.", "Then, the uncertainty metric is computed by the variance of results.", "We define the metric on the sequence level as: var{p(a|q;M i )} F i=1 .", "(6) In addition, we compute uncertainty u at at the token-level a t via: u at = var{p(a t |a <t , q;M i )} F i=1 (7) wherep(a t |a <t , q;M i ) is the probability of generating token a t (Equation (5) ) using perturbed modelM i .", "We operationalize tokenlevel uncertainty in two ways, as the average score avg{u at } |a| t=1 and the maximum score max{u at } |a| t=1 (since the uncertainty of a sequence is often determined by the most uncertain token).", "As shown in Figure 1 , we add dropout layers in i) the word vectors of the encoder and decoder q t , a t ; ii) the output vectors of the encoder e t ; iii) bridge vectors e |q| used to initialize the hidden states of the first time step in the decoder; and iv) decoding vectors d att t (Equation (4) ).", "Gaussian Noise Standard dropout can be viewed as applying noise sampled from a Bernoulli distribution to the network parameters.", "We instead use Gaussian noise, and apply the metrics in the same way discussed above.", "Let v denote a vector perturbed by noise, and g a vector sampled from the Gaussian distribution N (0, σ 2 ).", "We usev = v + g andv = v + v g as two noise injection methods.", "Intuitively, if the model is more confident in an example, it should be more robust to perturbations.", "Posterior Probability Our last class of metrics is based on posterior probability.", "We use the log probability log p(a|q) as a sequence-level metric.", "The token-level metric min{p(a t |a <t , q)} |a| t=1 can identify the most uncertain predicted token.", "The perplexity per token − 1 |a| |a| t=1 log p (a t |a <t , q) is also employed.", "Data Uncertainty The coverage of training data also affects the uncertainty of predictions.", "If the input q does not match the training distribution or contains unknown words, it is difficult to predict p (a|q) reliably.", "We define two metrics: Probability of Input We train a language model on the training data, and use it to estimate the probability of input p(q|D) where D represents the training data.", "Number of Unknown Tokens Tokens that do not appear in the training data harm robustness, and lead to uncertainty.", "So, we use the number of unknown tokens in the input q as a metric.", "Input Uncertainty Even if the model can estimate p (a|q) reliably, the input itself may be ambiguous.", "For instance, the input the flight is at 9 o'clock can be interpreted as either flight time(9am) or flight time(9pm).", "Selecting between these predictions is difficult, 
especially if they are both highly likely.", "We use the following metrics to measure uncertainty caused by ambiguous inputs.", "Variance of Top Candidates We use the variance of the probability of the top candidates to indicate whether these are similar.", "The sequencelevel metric is computed by: var{p(a i |q)} K i=1 where a 1 .", ".", ".", "a K are the K-best predictions obtained by the beam search during inference (Section 3).", "Entropy of Decoding The sequence-level entropy of the decoding process is computed via: H[a|q] = − a p(a |q) log p(a |q) which we approximate by Monte Carlo sampling rather than iterating over all candidate predictions.", "The token-level metrics of decoding entropy are computed by avg{H[a t |a <t , q]} |a| t=1 and max{H[a t |a <t , q]} |a| t=1 .", "Confidence Scoring The sentence-and token-level confidence metrics defined in Section 4 are fed into a gradient tree boosting model (Chen and Guestrin, 2016) in order to predict the overall confidence score s (q, a).", "The model is wrapped with a logistic function so that confidence scores are in the range of (0, 1).", "Because the confidence score indicates whether the prediction is likely to be correct, we can use the prediction's F1 (see Section 6.2) as target value.", "The training loss is defined as: (q,a)∈D ln(1+e −ŝ(q,a) ) yq,a + ln(1+eŝ (q,a) ) (1−yq,a) where D represents the data, y q,a is the target F1 score, andŝ(q, a) the predicted confidence score.", "We refer readers to Chen and Guestrin (2016) for mathematical details of how the gradient tree boosting model is trained.", "Notice that we learn the confidence scoring model on the held-out set (rather than on the training data of the semantic parser) to avoid overfitting.", "Uncertainty Interpretation Confidence scores are useful in so far they can be traced back to the inputs causing the uncertainty in the first place.", "For semantic parsing, identifying = v c 1 m u c 1 + v c 2 m u c 2 .", "The score u m is then redistributed to its parent neurons p 1 and p 2 , which satisfies v m p 1 + v m p 2 = 1. which input words contribute to uncertainty would be of value, e.g., these could be treated explicitly as special cases or refined if they represent noise.", "In this section, we introduce an algorithm that backpropagates token-level uncertainty scores (see Equation (7) ) from predictions to input tokens, following the ideas of Bach et al.", "(2015) and Zhang et al.", "(2016) .", "Let u m denote neuron m's uncertainty score, which indicates the degree to which it contributes to uncertainty.", "As shown in Figure 2 , u m is computed by the summation of the scores backpropagated from its child neurons: u m = c∈Child(m) v c m u c where Child(m) is the set of m's child neurons, and the non-negative contribution ratio v c m indicates how much we backpropagate u c to neuron m. 
Intuitively, if neuron m contributes more to c's value, ratio v c m should be larger.", "After obtaining score u m , we redistribute it to its parent neurons in the same way.", "Contribution ratios from m to its parent neurons are normalized to 1: p∈Parent(m) v m p = 1 where Parent(m) is the set of m's parent neurons.", "Given the above constraints, we now define different backpropagation rules for the operators used in neural networks.", "We first describe the rules used for fully-connected layers.", "Let x denote the input.", "The output is computed by z = σ(Wx+b), where σ is a nonlinear function, W ∈ R |z| * |x| is the weight matrix, b ∈ R |z| is the bias, and neuron z i is computed via z i = σ( |x| j=1 W i,j x j + b i ).", "Neuron x k 's uncertainty score u x k is gath-Algorithm 2 Uncertainty Interpretation Input: q, a: Input and its prediction Output: {ûq t } |q| t=1 : Interpretation scores for input tokens Function: TokenUnc: Get token-level uncertainty 1: Get token-level uncertainty for predicted tokens 2: {ua t } |a| t=1 ← TokenUnc(q, a) 3: Initialize uncertainty scores for backpropagation 4: for t ← 1, · · · , |a| do 5: Decoder classifier's output neuron ← ua t 6: Run backpropagation 7: for m ← neuron in backward topological order do 8: Gather scores from child neurons 9: um ← c∈Child(m) v c m uc 10: Summarize scores for input words 11: for t ← 1, · · · , |q| do 12: uq t ← c∈q t uc 13: {ûq t } |q| t=1 ← normalize {uq t } |q| t=1 ered from the next layer: u x k = |z| i=1 v z i x k u z i = |z| i=1 |W i,k x k | |x| j=1 |W i,j x j | u z i ignoring the nonlinear function σ and the bias b.", "The ratio v z i x k is proportional to the contribution of x k to the value of z i .", "We define backpropagation rules for elementwise vector operators.", "For z = x ± y, these are: u x k = |x k | |x k |+|y k | u z k u y k = |y k | |x k |+|y k | u z k where the contribution ratios v z k x k and v z k y k are determined by |x k | and |y k |.", "For multiplication, the contribution of two elements in 1 3 * 3 should be the same.", "So, the propagation rules for z = x y are: u x k = | log |x k || | log |x k ||+| log |y k || u z k u y k = | log |y k || | log |x k ||+| log |y k || u z k where the contribution ratios are determined by | log |x k || and | log |y k ||.", "For scalar multiplication, z = λx where λ denotes a constant.", "We directly assign z's uncertainty scores to x and the backpropagation rule is u x k = u z k .", "As shown in Algorithm 2, we first initialize uncertainty backpropagation in the decoder (lines 1-5).", "For each predicted token a t , we compute its uncertainty score u at as in Equation (7) .", "Next, we find the dimension of a t in the decoder's softmax classifier (Equation (5) ), and initialize the neuron with the uncertainty score u at .", "We then backpropagate these uncertainty scores through Dataset Example IFTTT turn android phone to full volume at 7am monday to friday date time−every day of the week at−((time of day (07)(:)(00)) (days of the week (1)(2)(3)(4)(5))) THEN android device−set ringtone volume−(volume ({' volume level':1.0,'name':'100%'})) DJANGO for every key in sorted list of user settings for key in sorted(user settings): the network (lines 6-9), and finally into the neurons of the input words.", "We summarize them and compute the token-level scores for interpreting the results (line 10-13).", "For input word vector q t , we use the summation of its neuron-level scores as the token-level score:û qt ∝ c∈qt u c where c ∈ q t represents the neurons of word 
vector q t , and |q| t=1û qt = 1.", "We use the normalized scoreû qt to indicate token q t 's contribution to prediction uncertainty.", "Experiments In this section we describe the datasets used in our experiments and various details concerning our models.", "We present our experimental results and analysis of model behavior.", "Our code is publicly available at https://github.com/ donglixp/confidence.", "Datasets We trained the neural semantic parser introduced in Section 3 on two datasets covering different domains and meaning representations.", "Examples are shown in Table 1 .", "IFTTT This dataset (Quirk et al., 2015) contains a large number of if-this-then-that programs crawled from the IFTTT website.", "The programs are written for various applications, such as home security (e.g., \"email me if the window opens\"), and task automation (e.g., \"save instagram photos to dropbox\").", "Whenever a program's trigger is satisfied, an action is performed.", "Triggers and actions represent functions with arguments; they are selected from different channels (160 in total) representing various services (e.g., Android).", "There are 552 trigger functions and 229 action functions.", "The original split contains 77, 495 training, 5, 171 development, and 4, 294 test instances.", "The subset that removes non-English descriptions was used in our experiments.", "DJANGO This dataset (Oda et al., 2015) is built upon the code of the Django web framework.", "Each line of Python code has a manually annotated natural language description.", "Our goal is to map the English pseudo-code to Python statements.", "This dataset contains diverse use cases, such as iteration, exception handling, and string manipulation.", "The original split has 16, 000 training, 1, 000 development, and 1, 805 test examples.", "Settings We followed the data preprocessing used in previous work (Dong and Lapata, 2016; Yin and Neubig, 2017) .", "Input sentences were tokenized using NLTK (Bird et al., 2009) and lowercased.", "We filtered words that appeared less than four times in the training set.", "Numbers and URLs in IFTTT and quoted strings in DJANGO were replaced with place holders.", "Hyperparameters of the semantic parsers were validated on the development set.", "The learning rate and the smoothing constant of RMSProp (Tieleman and Hinton, 2012) were 0.002 and 0.95, respectively.", "The dropout rate was 0.25.", "A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for DJANGO.", "Dimensions for the word embedding and hidden vector were selected from {150, 250}.", "The beam size during decoding was 5.", "For IFTTT, we view the predicted trees as a set of productions, and use balanced F1 as evaluation metric (Quirk et al., 2015) .", "We do not measure accuracy because the dataset is very noisy and there rarely is an exact match between the predicted output and the gold standard.", "The F1 score of our neural semantic parser is 50.1%, which is comparable to Dong and Lapata (2016) .", "For DJANGO, we measure the fraction of exact matches, where F1 score is equal to accuracy.", "Because there are unseen variable names at test time, we use attention scores as alignments to replace unknown to- Table 2 : Spearman ρ correlation between confidence scores and F1.", "Best results are shown in bold.", "All correlations are significant at p < 0.01. 
kens in the prediction with the input words they align to (Luong et al., 2015b) .", "The accuracy of our parser is 53.7%, which is better than the result (45.1%) of the sequence-to-sequence model reported in Yin and Neubig (2017) .", "To estimate model uncertainty, we set dropout rate to 0.1, and performed 30 inference passes.", "The standard deviation of Gaussian noise was 0.05.", "The language model was estimated using KenLM (Heafield et al., 2013) .", "For input uncertainty, we computed variance for the 10-best candidates.", "The confidence metrics were implemented in batch mode, to take full advantage of GPUs.", "Hyperparameters of the confidence scoring model were cross-validated.", "The number of boosted trees was selected from {20, 50}.", "The maximum tree depth was selected from {3, 4, 5}.", "We set the subsample ratio to 0.8.", "All other hyperparameters in XGBoost (Chen and Guestrin, 2016) were left with their default values.", "Results Confidence Estimation We compare our approach (CONF) against confidence scores based on posterior probability p(a|q) (POSTERIOR).", "We also report the results of three ablation variants (−MODEL, −DATA, −INPUT) by removing each group of confidence metrics described in Section 4.", "We measure the relationship between confidence scores and F1 using Spearman's ρ correlation coefficient which varies between −1 and 1 (0 implies there is no correlation).", "High ρ indicates that the confidence scores are high for correct predictions and low otherwise.", "As shown in Table 2 , our method CONF outperforms POSTERIOR by a large margin.", "The ablation results indicate that model uncertainty plays the most important role among the confidence metrics.", "In contrast, removing the metrics of data uncertainty affects performance less, because most examples in the datasets are in-domain.", "Improve- Table 3 .", "ments for each group of metrics are significant with p < 0.05 according to bootstrap hypothesis testing (Efron and Tibshirani, 1994) .", "Tables 3 and 4 show the correlation matrix for F1 and individual confidence metrics on the IFTTT and DJANGO datasets, respectively.", "As can be seen, metrics representing model uncertainty and input uncertainty are more correlated to each other compared with metrics capturing data uncertainty.", "Perhaps unsurprisingly metrics of the same group are highly inter-correlated since they model the same type of uncertainty.", "Table 5 shows the relative importance of individual metrics in the regression model.", "As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016) .", "The results indicate that model uncertainty (Noise/Dropout/Posterior/Perplexity) plays Table 5 : Importance scores of confidence metrics (normalized by maximum value on each dataset).", "Best results are shown in bold.", "Same shorthands apply as in Table 3. 
the most important role.", "On IFTTT, the number of unknown tokens (#UNK) and the variance of the top candidates (var(K-best)) are also very helpful, because this dataset is relatively noisy and contains many ambiguous inputs.", "Finally, in real-world applications, confidence scores are often used as a threshold to trade off precision for coverage.", "Figure 3 shows how the F1 score varies as we increase the confidence threshold, i.e., reduce the proportion of examples that we return answers for.", "The F1 score improves monotonically for both POSTERIOR and our method; however, our method achieves better performance at the same coverage.", "Uncertainty Interpretation We next evaluate how our backpropagation method (see Section 5) allows us to identify input tokens contributing to uncertainty.", "We compare against a method that interprets uncertainty based on the attention mechanism (ATTENTION).", "As shown in Equation (2), attention scores $r_{t,k}$ can be used as soft alignments between time step t of the decoder and the k-th input token.", "We compute the normalized uncertainty score $\hat{u}_{q_k}$ for a token $q_k$ via Equation (8): $\hat{u}_{q_k} \propto \sum_{t=1}^{|a|} r_{t,k} u_{a_t}$, where $u_{a_t}$ is the uncertainty score of the predicted token $a_t$ (Equation (7)), and $\sum_{k=1}^{|q|} \hat{u}_{q_k} = 1$.", "Unfortunately, the evaluation of uncertainty interpretation methods is problematic.", "For our semantic parsing task, we do not know a priori which tokens in the natural language input contribute to uncertainty, and these may vary depending on the architecture used, model parameters, and so on.", "We work around this problem by creating a proxy gold standard.", "We inject noise into the vectors representing tokens in the encoder (see Section 4.1) and then estimate the uncertainty caused by each token $q_k$ (Equation (6)); the addition of noise should only affect genuinely uncertain tokens.", "Notice that here we inject noise into one token at a time instead of into all parameters (see Figure 1).", "Tokens identified as uncertain by the above procedure are considered the gold standard and compared to those identified by our method.", "We use Gaussian noise to perturb the vectors in our experiments (dropout obtained similar results).", "We define an evaluation metric based on the overlap (overlap@K) between tokens identified as uncertain by the model and the gold standard.", "Given an example, we first compute the interpretation scores of the input tokens according to our method and obtain a list $\tau_1$ of the K tokens with the highest scores.", "We also obtain a list $\tau_2$ of the K tokens with the highest ground-truth scores and measure the degree of overlap between these two lists: $\mathrm{overlap@}K = |\tau_1 \cap \tau_2| / K$, where K ∈ {2, 4} in our experiments.", "For example, the overlap@4 metric of the lists $\tau_1 = [q_7, q_8, q_2, q_3]$ and $\tau_2 = [q_7, q_8, q_3, q_4]$ is 3/4, because there are three overlapping tokens.", "Table 6 reports results with overlap@2 and overlap@4.", "Table 6: Uncertainty interpretation against inferred ground truth; we compute the overlap between tokens identified as contributing to uncertainty by our method and those found in the gold standard.", "Overlap is shown for the top 2 and top 4 tokens: on IFTTT, ATTENTION obtains 0.525 (overlap@2) and 0.737 (overlap@4) versus 0.608 and 0.791 for BACKPROP; on DJANGO, ATTENTION obtains 0.637 and 0.684 versus 0.770 and 0.788 for BACKPROP.", "Best results are in bold.", "Overall, BACKPROP achieves better interpretation quality than the attention mechanism.", "On both datasets, about 80% of the top-4 tokens identified as uncertain agree with the ground truth.",
"Table 7 shows examples where our method has identified input tokens contributing to the uncertainty of the output.", "We highlight a predicted token $a_t$ if its uncertainty score $u_{a_t}$ is greater than $0.5 \cdot \mathrm{avg}\{u_{a_{t'}}\}_{t'=1}^{|a|}$.", "Table 7: Uncertainty interpretation for ATTENTION (ATT) and BACKPROP (BP); the first line in each group is the model prediction, and predicted tokens and input words with large scores are shown in red and blue, respectively.", "Prediction: google calendar−any event starts THEN facebook−create a status message−(status message ({description})); ATT: post calendar event to facebook; BP: post calendar event to facebook.", "Prediction: feed−new feed item−(feed url (url sports.espn.go.com)) THEN ...; ATT: espn mlb headline to readability; BP: espn mlb headline to readability.", "Prediction: weather−tomorrow's low drops below−((temperature (0)) (degrees in (c))) THEN ...; ATT: warn me when it's going to be freezing tomorrow; BP: warn me when it's going to be freezing tomorrow.", "Prediction: if str number[0] == ' STR ':; ATT: if first element of str number equals a string STR .; BP: if first element of str number equals a string STR .", "Prediction: start = 0; ATT: start is an integer 0 .; BP: start is an integer 0 .", "Prediction: if name.startswith(' STR '):; ATT: if name starts with an string STR ,; BP: if name starts with an string STR ,", "The results illustrate that the parser tends to be uncertain about tokens which are function arguments (e.g., URLs and message content) and about ambiguous inputs.", "The examples show that BACKPROP is qualitatively better than ATTENTION; attention scores often produce inaccurate alignments, while BACKPROP can utilize information flowing through the LSTMs rather than relying only on the attention mechanism.", "Conclusions In this paper we presented a confidence estimation model and an uncertainty interpretation method for neural semantic parsing.", "Experimental results show that our method achieves better performance than competitive baselines on two datasets.", "Directions for future work are many and varied.", "The proposed framework could be applied to a variety of tasks (Bahdanau et al., 2015; Schmaltz et al., 2017) employing sequence-to-sequence architectures.", "We could also utilize the confidence estimation model within an active learning framework for neural semantic parsing." ] }
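The settings above (30 stochastic passes with dropout 0.1, Gaussian noise with standard deviation 0.05, variance over the 10-best candidates, and an XGBoost scorer evaluated with Spearman's ρ) can be sketched as follows. This is a minimal illustration of the recipe, not the authors' code: the parser interface, the feature ordering, and the stand-in data are assumptions made only for the example.

```python
# Minimal sketch of the confidence-scoring recipe described above.
# Assumptions: `run_parser` is a hypothetical callable that returns a prediction
# score with dropout kept active; the feature matrix and F1 labels are random stand-ins.
import numpy as np
import xgboost as xgb
from scipy.stats import spearmanr

def dropout_uncertainty(run_parser, utterance, passes=30, rate=0.1):
    """Model uncertainty: variance of the score over stochastic forward passes."""
    return float(np.var([run_parser(utterance, dropout=rate) for _ in range(passes)]))

def kbest_variance(candidate_scores, k=10):
    """Input uncertainty: variance of the scores of the k-best candidates."""
    return float(np.var(sorted(candidate_scores, reverse=True)[:k]))

rng = np.random.RandomState(0)
X = rng.rand(200, 6)   # stand-in metrics: dropout var, noise var, posterior, LM perplexity, #UNK, k-best var
y = rng.rand(200)      # stand-in per-example F1 labels

# Gradient-boosted regression trees, using values from the hyperparameter ranges reported above.
scorer = xgb.XGBRegressor(n_estimators=50, max_depth=3, subsample=0.8)
scorer.fit(X, y)
confidence = scorer.predict(X)

# Evaluation as in the text: Spearman's rho between confidence scores and F1.
rho, _ = spearmanr(confidence, y)
print(f"Spearman rho: {rho:.3f}")
```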
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5", "6", "6.1", "6.2", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Neural Semantic Parsing Model", "Confidence Estimation", "Model Uncertainty", "Data Uncertainty", "Input Uncertainty", "Confidence Scoring", "Uncertainty Interpretation", "Experiments", "Datasets", "Settings", "Conclusions" ] }
GEM-SciDuet-train-114#paper-1307#slide-4
Confidence Estimation Overview
Indicate whether the prediction is likely to be correct Confidence Metrics Characterize causes of uncertainty Archive your missed calls from Android to Google Drive LSTM LSTM Android_phone_call, Any_phone_call_missed <then> Google_drive, Add_row_to_spreadsheet, ((Spreadsheet_name missed) (Formatted_row )) (Drivefolder_path IFTTT/Android)) Input Utterance Sequence Encoder Sequence Decoder Logical Form
Indicate whether the prediction is likely to be correct Confidence Metrics Characterize causes of uncertainty Archive your missed calls from Android to Google Drive LSTM LSTM Android_phone_call, Any_phone_call_missed <then> Google_drive, Add_row_to_spreadsheet, ((Spreadsheet_name missed) (Formatted_row )) (Drivefolder_path IFTTT/Android)) Input Utterance Sequence Encoder Sequence Decoder Logical Form
[]
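As a companion to the uncertainty-interpretation discussion above, the sketch below implements the attention-based attribution of Equation (8) and the overlap@K comparison, including the worked example with τ1 = [q7, q8, q2, q3] and τ2 = [q7, q8, q3, q4]. Array shapes and function names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of attention-based uncertainty attribution (Equation (8)) and the overlap@K metric.
import numpy as np

def attention_attribution(attention, token_uncertainty):
    """attention: |a| x |q| matrix of scores r_{t,k};
    token_uncertainty: length-|a| vector of u_{a_t}.
    Returns normalized uncertainty scores over the |q| input tokens."""
    raw = attention.T @ token_uncertainty   # for each input token k: sum_t r_{t,k} * u_{a_t}
    return raw / raw.sum()                  # normalize so the scores sum to 1

def overlap_at_k(pred_scores, gold_scores, k):
    """|top-k(pred) ∩ top-k(gold)| / k."""
    top_pred = set(np.argsort(pred_scores)[-k:])
    top_gold = set(np.argsort(gold_scores)[-k:])
    return len(top_pred & top_gold) / k

# Worked example from the text: overlap@4 of [q7, q8, q2, q3] and [q7, q8, q3, q4] is 3/4.
tau1, tau2 = {7, 8, 2, 3}, {7, 8, 3, 4}
print(len(tau1 & tau2) / 4)   # 0.75
```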