<s>[INST] Named Entity Recognition on Twitter for Turkish using Semi-supervised Learning with Word Embeddings [/INST] Microblogging environments which allow users to post short messages have gained increased popularity in the last decade Twitter which is one of the most popular microblogging platforms has become an interesting platform for exchanging ideas following recent developments and trends or discussing any possible topic Since Twitter has an enormously wide range of users with varying interests and sharing preferences a significant amount of content is being created rapidly Therefore mining such platforms can extract valuable information As a consequence extracting information from Twitter has become a hot topic of research For Twitter text mining one popular research area is opinion mining or sentiment analysis which is surely useful for companies or political parties to gather information about their services and products BIBREF0 Another popular research area is content analysis or more specifically topic modeling which is useful for text classification and filtering applications on Twitter BIBREF1 Moreover event monitoring and trend analysis are also other examples of useful application areas on microblog texts BIBREF2 In order to build successful social media analysis applications it is necessary to employ successful processing tools for Natural Language Processing NLP tasks such as Named Entity Recognition NER NER is a critical stage for various NLP applications including machine translation question answering and opinion mining The aim of NER is to classify and locate atomic elements in a given text into predefined categories like the names of the persons locations and organizations PLOs NER on wellwritten texts is accepted as a solved problem for wellstudied languages like English However it still needs further work for morphologically rich languages like Turkish due to their complex structure and relatively scarce language processing tools and data sets BIBREF3 In addition most of the NER systems are designed for formal texts The performance of such systems drops significantly when applied on informal texts To illustrate the stateoftheart Turkish NER system has CoNLL Fscore of 9194 on news data but the performance drops to Fscore of 1928 when this system is adopted to Twitter data BIBREF4 There are several challenges for NER on tweets which are also summarized in Kucuk20141 due to the very short text length and informal structure of the language used Missing proper grammar rules and punctuation lack of capitalization and apostrophes usage of hashtags abbreviations and slang words are some of those challenges In Twitter using contracted forms and metonymic expressions instead of full organization or location names is very common as well The usage of nondiacritic characters and the limited annotated data bring additional challenges for processing Turkish tweets Due to the dynamic language used in Twitter heavy feature engineering is not feasible for Twitter NER Demir2014 developed a semisupervised approach for Turkish NER on formal newswire text using word embeddings obtained from unlabeled data They obtained promising results without using any gazetteers and language dependent features We adopted this approach for informal texts and evaluated it on Turkish tweets where we achieved the stateoftheart Fscore performance Our results show that using word embeddings for Twitter NER in Turkish can result in better Fscore performance compared to using text normalization as a 
preprocessing step In addition utilizing indomain word embeddings can be a promising approach for Twitter NER There are various important studies of NER on Twitter for English Ritter2011 presented a twophase NER system for tweets TNER using Conditional Random Fields CRF and including tweetspecific features Liu2011 proposed a hybrid NER approach based on KNearest Neighbors and linear CRF Liu2012 presented a factor graphbased method for NER on Twitter Li2012 described an unsupervised approach for tweets called TwiNER Bontcheva2013 described an NLP pipeline for tweets called TwitIE Very recently Cherry2015 have shown the effectiveness of Brown clusters and word vectors on Twitter NER for English For Turkish NER on formal texts Tur2003 presented the first study with a Hidden Markov Model based approach Tatar2011 presented an automatic rule learning system Yeniterzi2011 used CRF for Turkish NER and Kucuk2012 proposed a hybrid approach A CRFbased model by Seker2012 is the stateoftheart Turkish NER system with CoNLL Fscore of 9194 using gazetteers Demir2014 achieved a similar Fscore of 9185 without gazetteers and language dependent features using a semisupervised model with word embeddings For Turkish NER on Twitter Celikkaya2013 presented the first study by adopting the CRFbased NER of Seker2012 with a text normalizer Kucuk20141 adopted a multilingual rulebased NER by extending the resources for Turkish Kucuk20142 adopted a rulebased approach for Turkish tweets where diacriticsbased expansion to lexical resources and relaxing the capitalization yielded an Fscore of 48 with strict CoNLLlike metric To build a NER model with a semisupervised learning approach on Turkish tweets we used a neural network based architecture consisting of unsupervised and supervised stages In the unsupervised stage our aim is to learn distributed word representations or word embeddings in continuous vector space where semantically similar words are expected to be close to each other Word vectors trained on large unlabeled Turkish corpus can provide additional knowledge base for NER systems trained with limited amount of labeled data in the supervised stage A word representation is usually a vector associated with each word where each dimension represents a feature The value of each dimension is defined to be representing the amount of activity for that specific feature A distributed representation represents each word as a dense vector of continuous values By having lower dimensional dense vectors and by having real values at each dimension distributed word representations are helpful to solve the sparsity problem Distributed word representations are trained with a huge unlabeled corpus using unsupervised learning If this unlabeled corpus is large enough then we expect that the distributed word representations will capture the syntactic and semantic properties of each word and this will provide a mechanism to obtain similar representations for semantically and syntactically close words Vector space distributed representations of words are helpful for learning algorithms to reach better results in many NLP tasks since they provide a method for grouping similar words together The idea of using distributed word representations in vector space is applied to statistical language modeling for the first time by using a neural network based approach with a significant success by Bengio2003 The approach is based on learning a distributed representation of each word where each dimension of such a word embedding represents a 
hidden feature of this word and is used to capture the words semantic and grammatical properties Later on Collobert2011 proposed to use distributed word representations together with the supervised neural networks and achieved stateofthe art results in different NLP tasks including NER for English We used the public tool word2vec released by Mikolov2013 to obtain the word embeddings Their neural network approach is similar to the feedforward neural networks BIBREF5 BIBREF6 To be more precise the previous words to the current word are encoded in the input layer and then projected to the projection layer with a shared projection matrix After that the projection is given to the nonlinear hidden layer and then the output is given to softmax in order to receive a probability distribution over all the words in the vocabulary However as suggested by Mikolov2013 removing the nonlinear hidden layer and making the projection layer shared by all words is much faster which allowed us to use a larger unlabeled corpus and obtain better word embeddings Among the methods presented in Mikolov2013 we used the continuous Skipgram model to obtain semantic representations of Turkish words The Skipgram model uses the current word as an input to the projection layer with a loglinear classifier and attempts to predict the representation of neighboring words within a certain range In the Skipgram model architecture we used we have chosen 200 as the dimension of the obtained word vectors The range of surrounding words is chosen to be 5 so that we will predict the distributed representations of the previous 2 words and the next 2 words using the current word Our vector size and range decisions are aligned with the choices made in the previous study for Turkish NER by Demir2014 The Skipgram model architecture we used is shown in Figure FIGREF3 In this stage a comparably smaller amount of labeled data is used for training the final NER models We used the publicly available neural network implementation by Turian2010 which actually follows the study by Ratinov2009 where a regularized averaged multiclass perceptron is used Note that although nonlocal features are proven to be useful for the NER task on formal text types such as news articles their usage and benefit is questionable for informal and short text types Due to the fact that each tweet is treated as a single document with only 140 characters it is difficult to make use of nonlocal features such as context aggregation and prediction history for the NER task on tweets On the other hand local features are mostly related to the previous and next tokens of the current token With this motivation we explored both local and nonlocal features but observed that we achieve better results without nonlocal features As a result to construct our NER model on tweets we used the following local features Context All tokens in the current window of size two Capitalization Boolean feature indicating whether the first character of a token is uppercase or not This feature is generated for all the tokens in the current window Previous tags Named entity tag predictions of the previous two tokens Word type information Type information of tokens in the current window ie allcapitalized iscapitalized alldigits containsapostrophe and isalphanumeric Token prefixes First characters with length three and four if exists of current token Token suffixes Last characters with length one to four if exists of current token Word embeddings Vector representations of words in the current window In 
addition to tailoring the features used by Ratinov2009 for tweets there are other Twitterspecific aspects of our NER system such as using word embeddings trained on an unlabeled tweet corpus applying normalization on labeled tweets and extracting Twitterspecific keywords like hashtags mentions smileys and URLs from both labeled and unlabeled Turkish tweets For text normalization as a preprocessing step of our system we used the Turkish normalization interface developed for social media text with ill formed word detection and candidate word generation BIBREF8 Along with the features used the representation scheme for named entities is also important in terms of performance for a NER system Two popular such encoding schemes are BIO and BILOU The BIO scheme identifies the Beginning the Inside and the Outside of the named entities whereas the BILOU scheme identifies the Beginning the Inside and the Last tokens of multitoken named entities plus the Outside if it is not a named entity and the Unit length if the entity has single token Since it is shown by Ratinov2009 that BILOU representation scheme significantly outperforms the BIO encoding scheme we make use of BILOU encoding for tagging named entities in our study Furthermore we applied normalization to numerical expressions as described in Turian2010 which helps to achieve a degree of abstraction to numerical expressions In the unsupervised stage we used two types of unlabeled data to obtain Turkish word embeddings The first one is a Turkish newsweb corpus containing 423M words and 491M tokens namely the BOUN Web Corpus BIBREF9 BIBREF10 The second one is composed of 21M Turkish tweets with 241M words and 293M tokens where we combined 1M tweets from TS TweetS by Sezer2013 and 20M Turkish Tweets by Bolat and Amasyal We applied tokenization on both Turkish newsweb corpus and Turkish tweets corpus using the publicly available Zemberek tool developed for Turkish We have also applied lowercasing on both corpora in order to limit the number of unique words Since our combined tweets corpus is composed of Twitterspecific texts we applied what we call Twitter processing where we replaced mentions hashtags smileys and URLs with certain keywords In the supervised stage we used two types of labeled data to train and test our NER models The first one is Turkish news data annotated with ENAMEXtype named entities or PLOs BIBREF11 It includes 14481 person 9409 location and 9034 organization names in the training partition of 450K words This data set is popularly used for performance evaluation of NER systems for Turkish including the ones presented by Seker2012 by Yeniterzi2011 and by Demir2014 The second type of labeled data is annotated Turkish tweets where we used two different sets The first set TwitterDS1 has around 5K tweets with 54K tokens and 1336 annotated PLOs BIBREF4 The second set TwitterDS2 which is publicly available has 2320 tweets with around 21K tokens and 980 PLOs in total BIBREF12 The counts for each of the ENAMEXtype named entities for these Turkish Twitter data sets are provided in Table TABREF21 We designed a number of experimental settings to investigate their effects on Turkish Twitter NER These settings are as follows the text type of annotated data used for training the text type of unlabeled data used to learn the word embeddings using the capitalization feature or not and applying text normalization We evaluated all models on ENAMEX types with the CoNLL metric and reported phraselevel overall Fscore performance results To be more 
precise the Fscore values presented in Table TABREF23 Table TABREF26 and Table TABREF27 are microaveraged over the classes using the strict metric Most of our NER models are trained on annotated Turkish news data by Tur2003 and tested on tweets due to the limited amount of annotated Turkish tweets In addition to using TwitterDS1 and TwitterDS2 as test sets we detected 291 completely nonTurkish tweets out of 5040 in TwitterDS1 and filtered them out using the isTurkish tool BIBREF13 to obtain TwitterDS1FT We also used the normalized versions of these data sets As shown in Table TABREF23 turning off the capitalization feature is better when text normalization is not applied bold entries but the best results are achieved when normalization is applied and the capitalization feature is used underlined bold entries To observe the effects of the type of the source text used to learn the word embeddings we have three models as Web Twt and WebTwt where we used the Turkish web corpus tweet corpus and their combination respectively to learn the word embeddings Including indomain data from a relatively smaller tweet corpus together with a larger web corpus yields in better Twitter NER performance We examined the effects of word embeddings on the performance of our NER models and compared them to the improvements achieved by applying normalization on Turkish tweets The baseline NER model is built by using the features explained in section 32 except the capitalization and word embeddings features Using word embeddings obtained with unsupervised learning from a large corpus of web articles and tweets results in better NER performance than applying a Twitterspecific text normalizer as shown in Table TABREF26 This is crucial since Turkish text normalization for unstructured data is a challenging task and requires successful morphological analysis whereas extracting word embeddings for any language or domain is much easier yet more effective Although an ideal Turkish NER model for Twitter should be trained on similar informal texts all previous Turkish Twitter NER systems are trained on news data due to the limited amount of annotated Turkish tweets We also experimented training NER models on relatively smaller labeled Twitter data with 10fold crossvalidation Our best phraselevel Fscore of 4661 achieved on TwitterDS1FT is increased to 4896 when trained on the much smaller tweets data TwitterDS2 instead of news data The best Fscores of the previously published Turkish Twitter NER systems BIBREF4 BIBREF12 BIBREF14 as well as our proposed NER system are shown in Table TABREF27 We used the same training set with the first system BIBREF4 in our study but the second NER system BIBREF12 uses a different multilingual news data and the third system BIBREF14 which is rule based does not have a training phase at all All of these previous NER systems use gazetteer lists for named entities which are manually constructed and highly languagedependent whereas our system does not Note that there is no publicly available gazetteer lists in Turkish Kucuk20142 achieved the stateoftheart performance results for Turkish Twitter NER with their best model settings shown in italic These settings are namely using gazetteers list with capitalization feature turned off and with no normalization together by expanding their gazetteer lists of named entities with diacritics variations Our proposed system outperforms the stateoftheart results on both Turkish Twitter data sets even without using gazetteers shown in bold We achieved our best 
performance results with Turkish word embeddings obtained from our Web+Tweets corpus, when we apply normalization on tweets and keep capitalization as a feature. We adopted a neural-network-based semi-supervised approach using word embeddings for the NER task on Turkish tweets. At the first stage, we attained distributed representations of words by employing a fast unsupervised learning method on a large unlabeled corpus. At the second stage, we exploited these word embeddings together with language-independent features in order to train our neural network on labeled data. We compared our results on two different Turkish Twitter data sets with the state-of-the-art NER systems proposed for Twitter data in Turkish, and showed that our system outperforms the state-of-the-art results on both data sets. Our results also show that using word embeddings from an unlabeled corpus can lead to better performance than applying Twitter-specific text normalization. We discussed the promising benefits of using in-domain data to learn word embeddings at the unsupervised stage as well. Since the only language-dependent part of our Turkish Twitter NER system is text normalization, and since even without text normalization it outperforms the previous state-of-the-art results, we believe that our approach can be adapted to other morphologically rich languages. Our Turkish Twitter NER system, namely TTNER, is publicly available. We believe that there is still room for improvement for NLP tasks on Turkish social media data. As future work, we aim to construct a much larger in-domain resource, i.e., an unlabeled Turkish tweets corpus, and investigate the full benefits of attaining word embeddings from in-domain data on Twitter NER. This research is partially supported by Boğaziçi University Research Fund Grant Number 11170. We would also like to thank The Scientific and Technological Research Council of Turkey (TÜBİTAK), The Science Fellowships and Grant Programmes Department (BİDEB), for providing financial support with the 2210 National Scholarship Programme for MSc Students. </s>
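To make the unsupervised stage described above concrete, the sketch below shows how Twitter-specific preprocessing (replacing mentions, hashtags, URLs and smileys with keywords, then lowercasing) and skip-gram training with 200-dimensional vectors and a two-word context on each side could be wired together. The paper used Mikolov's word2vec tool and the Zemberek tokenizer; gensim, the regex patterns, the placeholder strings and the whitespace tokenization here are illustrative stand-ins, not the authors' pipeline.

```python
import re
from gensim.models import Word2Vec  # assumes gensim >= 4.0

# Twitter-specific preprocessing described above: mentions, hashtags, URLs
# and smileys are replaced with placeholder keywords, then text is lowercased.
# The placeholder strings and regex patterns are illustrative assumptions.
PATTERNS = [
    (re.compile(r"https?://\S+"), "<url>"),
    (re.compile(r"@\w+"), "<mention>"),
    (re.compile(r"#\w+"), "<hashtag>"),
    (re.compile(r"[:;=][-']?[()DPp]"), "<smiley>"),
]

def preprocess(tweet):
    for pattern, keyword in PATTERNS:
        tweet = pattern.sub(keyword, tweet)
    # The paper tokenizes with Zemberek; a whitespace split stands in here.
    return tweet.lower().split()

def train_embeddings(tweets):
    sentences = [preprocess(t) for t in tweets]
    # Skip-gram (sg=1), 200-dimensional vectors, 2 context words on each side,
    # mirroring the settings reported for the unsupervised stage.
    return Word2Vec(sentences, vector_size=200, window=2, sg=1,
                    min_count=5, workers=4)

# Example usage:
# model = train_embeddings(raw_tweet_list)
# model.wv.most_similar("istanbul")
```

The resulting vectors would then be looked up for the tokens in the current window and appended to the supervised model's local features, as in the feature list given earlier.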
<s>[INST] BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization [/INST] Abstractive summarization aims to shorten a source article or paragraph by rewriting while preserving the main idea Due to the difficulties in rewriting long documents a large body of research on this topic has focused on paragraphlevel article summarization Among them sequencetosequence models have become the mainstream and some have achieved stateoftheart performance BIBREF0 BIBREF1 BIBREF2 In general the only available information for these models during decoding is simply the source article representations from the encoder and the generated words from the previous time steps BIBREF2 BIBREF3 BIBREF4 while the previous words are also generated based on the article representations Since natural language text is complicated and verbose in nature and training data is insufficient in size to help the models distinguish important article information from noise sequencetosequence models tend to deteriorate with the accumulation of word generation eg they generate irrelevant and repeated words frequently BIBREF5 Templatebased summarization BIBREF6 is an effective approach to traditional abstractive summarization in which a number of hard templates are manually created by domain experts and key snippets are then extracted and populated into the templates to form the final summaries The advantage of such approach is it can guarantee concise and coherent summaries in no need of any training data However it is unrealistic to create all the templates manually since this work requires considerable domain knowledge and is also laborintensive Fortunately the summaries of some specific training articles can provide similar guidance to the summarization as hard templates Accordingly these summaries are referred to as soft templates or templates for simplicity in this paper Despite their potential in relieving the verbosity and insufficiency problems of natural language data templates have not been exploited to full advantage For example cao2018retrieve simply concatenated template encoding after the source article in their summarization work To this end we propose a Bidirectional Selective Encoding with Template BiSET model for abstractive sentence summarization Our model involves a novel bidirectional selective layer with two gates to mutually select key information from an article and its template to assist with summary generation Due to the limitations in obtaining handcrafted templates we further propose a multistage process for automatic retrieval of highquality templates from training corpus Extensive experiments were conducted on the Gigaword dataset BIBREF0 a public dataset widely used for abstractive sentence summarization and the results appear to be quite promising Merely using the templates selected by our approach as the final summaries our model can already achieve superior performance to some baseline models demonstrating the effect of our templates This may also indicate the availability of many quality templates in the corpus Secondly the templateequipped summarization model BiSET outperforms all the stateoftheart models significantly To evaluate the importance of the bidirectional selective layer and the two gates we conducted an ablation study by discarding them respectively and the results show that while both of the gates are necessary the templatetoarticle T2A gate tends to be more important than the articletotemplate A2T gate A human evaluation further validates the effectiveness 
of our model in generating informative concise and readable summaries 10 The contributions of this work include Our framework includes three key modules Retrieve Fast Rerank and BiSET For each source article Retrieve aims to return a few candidate templates from the training corpus Then the Fast Rerank module quickly identifies a best template from the candidates Finally BiSET mutually selects important information from the source article and the template to generate an enhanced article representation for summarization This module starts with a standard information retrieval library to retrieve a small set of candidates for finegrained filtering as cao2018retrieve To do that all nonalphabetic characters eg dates are removed to eliminate their influence on article matching The retrieval process starts by querying the training corpus with a source article to find a few 5 to 30 related articles the summaries of which will be treated as candidate templates The above retrieval process is essentially based on superficial word matching and cannot measure the deep semantic relationship between two articles Therefore the Fast Rerank module is developed to identify a best template from the candidates based on their deep semantic relevance with the source article We regard the candidate with highest relevance as the template As illustrated in Figure FIGREF6 this module consists of a Convolution Encoder Block a Similarity Matrix and a Pooling Layer Convolution Encoder Block This block maps the input article and its candidate templates into highlevel representations The popular ways to this are either by using recurrent neural network RNN or a stack of convolutional neural network CNN while none of them are suitable for our problem This is because a source article is usually much longer than a template and both RNN and CNN may lead to semantic irrelevance after encodings Instead we implement a new convolution encoder block which includes a word embedding layer a 1D convolution followed by a nonlinearity function and residual connections BIBREF7 Formally given word embeddings INLINEFORM0 of an article we use a 1D convolution with kernel INLINEFORM1 and bias INLINEFORM2 to extract the ngram features DISPLAYFORM0 where INLINEFORM0 We pad both sides of an articletemplate with zeros to keep fixed length After that we employ the gated linear unit GLU BIBREF8 as our activation function to control the proportion of information to pass through GLU takes half the dimension of INLINEFORM1 as input and reduces the input dimension to INLINEFORM2 Let INLINEFORM3 where INLINEFORM4 we have DISPLAYFORM0 where INLINEFORM0 INLINEFORM1 is the sigmoid function and INLINEFORM2 means elementwise multiplication To retain the original information we add residual connections from the input of the convolution layer to the output of this block INLINEFORM3 Similarity Matrix The above encoder block generates a highlevel representation for each source articlecandidate template Then a similarity matrix INLINEFORM0 is calculated for a given article representation INLINEFORM1 and a template representation INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 is the similarity function and the common options for INLINEFORM1 include DISPLAYFORM0 Most previous work uses dot product or bilinear function BIBREF9 for the similarity yet we find the family of Euclidean distance perform much better for our task Therefore we define the similarity function as DISPLAYFORM0 Pooling Layer This layer is intended to filter out unnecessary information in the 
matrix INLINEFORM0 Before applying such pooling operations as maxpooling and kmax pooling BIBREF10 over the similarity matrix we note there are repeated words in the source article which we only want to count once For this reason we first identify some salient weights from INLINEFORM1 DISPLAYFORM0 where INLINEFORM0 is a columnwise maximum function We then apply kmax pooling over INLINEFORM1 to select INLINEFORM2 most important weights INLINEFORM3 Finally we apply a twolayer feedforward network to output a similarity score for the source article and the candidate template DISPLAYFORM0 As mentioned before the role of Fast Rerank is to rerank the initial search results and return a best template for summarization To examine the effect of this module we studied its ranking quality under different ranges as in Section SECREF38 The original rankings by Retrieve are presented for comparison with the NDCG metric We regard the ROUGE2 score of each candidate template with the reference summary as the ground truth As shown in Figure FIGREF42 Fast Rerank consistently provides enhanced rankings over the original In this section we explore three traditional approaches to taking advantage of the templates for summarization They share the same encoder and decoder layers but own different interaction layers for combination of a source article and template The encoder layer uses a standard bidirectional RNN BiRNN to separately encode the source article and the template into hidden states INLINEFORM0 and INLINEFORM1 Concatenation This approach directly concatenates the hidden state INLINEFORM0 of a template after the article representation INLINEFORM1 to form a new article representation INLINEFORM2 This approach is similar to INLINEFORM3 BIBREF11 but uses our Fast Rerank and summary generation modules ConcatenationSelfAttention This approach adds a multihead selfattention BIBREF12 layer with 4 heads on the basis of the above direct concatenation DCN Attention Initially introduced for machine reading comprehension BIBREF13 this interaction approach is employed here to create templateaware article representations First we compute a similarity matrix INLINEFORM0 for each pair of article and template words by INLINEFORM1 where is the concatenation operation We then normalize each row and column of INLINEFORM2 by softmax giving rise to two new matrices INLINEFORM3 and INLINEFORM4 After that the Dynamic Coattention Network DCN attention is applied to compute the bidirectional attention INLINEFORM5 and INLINEFORM6 where INLINEFORM7 denotes articletotemplate attention and INLINEFORM8 is templatetoarticle attention Finally we obtain the templateaware article representation INLINEFORM9 DISPLAYFORM0 Inspired by the research in machine reading comprehension BIBREF13 and selective mechanism BIBREF14 we propose a novel Bidirectional Selective Encoding with Template BiSET model for abstractive sentence summarization The core idea behind BiSET is to involve templates to assist with article representation and summary generation As shown in Figure FIGREF17 BiSET contains two selective gates TemplatetoArticle T2A gate and ArticletoTemplate A2T gate The role of T2A is to use a template to filter the source article representation DISPLAYFORM0 where INLINEFORM0 is the concatenation of the last forward hidden state INLINEFORM1 and the first backward hidden state INLINEFORM2 of the template On the other hand the purpose of A2T is to control the proportion of INLINEFORM0 in the final article representation We assume the source 
article is credible and use its representation INLINEFORM1 together with INLINEFORM2 to calculate a confidence degree where INLINEFORM3 is obtained in a similar way as INLINEFORM4 The confidence degree INLINEFORM5 is computed by DISPLAYFORM0 The final source article representation is calculated as the weighted sum of INLINEFORM0 and INLINEFORM1 DISPLAYFORM0 which allows a flexible manner for template incorporation and helps to resist errors when lowquality templates are given The decoder layer This layer includes an ordinary RNN decoder BIBREF15 At each time step INLINEFORM0 the decoder reads the word INLINEFORM1 and hidden state INLINEFORM2 generated in the previous step and gives a new hidden state for the current step DISPLAYFORM0 where the hidden state is initialized with the original source article representation INLINEFORM0 We then compute the attention between INLINEFORM1 and the final article representation INLINEFORM2 to obtain a context vector INLINEFORM3 DISPLAYFORM0 After that a simple concatenation layer is used to combine the hidden state INLINEFORM0 and the context vector INLINEFORM1 into a new hidden state INLINEFORM2 DISPLAYFORM0 which will be mapped to a new representation of vocabulary size and fed through a softmax layer to output the target word distribution DISPLAYFORM0 The overall performance of all the studied models is shown in Table TABREF46 The results show that our model significantly outperforms all the baseline models and sets a new state of the art for abstractive sentence summarization To evaluate the impact of templates on our model we also implemented BiSET with two other types of templates randomlyselected templates and best templates identified by Fast Rank under different ranges As shown in Table TABREF47 the performance of our model improves constantly with the improvement of template quality larger ranges lead to better chances for good templates Even with randomlyselected templates our model still works with stable performance demonstrating its robustness The Retrieve module involves an unsupervised process with traditional indexing and retrieval techniques For Fast Rerank since there is no ground truth available we use ROUGE1 BIBREF16 to evaluate the saliency of a candidate template with respect to the gold summary of current source article Therefore the loss function is defined as DISPLAYFORM0 where INLINEFORM0 is a score predicted by Equation EQREF16 and INLINEFORM1 is the product of the training set size INLINEFORM2 and the number of retrieved templates for each article For the BiSET module the loss function is chosen as the negative loglikelihood between the generated summary INLINEFORM0 and the true summary INLINEFORM1 DISPLAYFORM0 where INLINEFORM0 is the length of the true summary INLINEFORM1 contains all the trainable variables and INLINEFORM2 and INLINEFORM3 denote the source article and the template respectively In this section we introduce our evaluations on a standard dataset The dataset used for evaluation is Annotated English Gigaword BIBREF17 a parallel corpus formed by pairing the first sentence of an article with its headline For a fair comparison we use the version preprocessed by Rush2015A as previous work During training both the Fast Rerank and BiSET modules have a batch size of 64 with the Adam optimizer BIBREF18 We also apply grad clipping BIBREF19 with a range of 55 The differences of the two modules in settings are listed below Fast Rerank We set the size of word embeddings to 300 the convolution encoder block number to 1 
and the kernel size of CNN to 3 The weights are shared between the article and template encoders The INLINEFORM0 of kmax pooling is set to 10 L2 weight decay with INLINEFORM1 is performed over all trainable variables The initial learning rate is 0001 and multiplied by 01 every 10K steps Dropout between layers is applied BiSET A twolayer BiLSTM is used as the encoder and another twolayer LSTM as the decoder The sizes of word embeddings and LSTM hidden states are both set to 500 We only apply dropout in the LSTM stack with a rate of 03 The learning rate is set to 0001 for the first 50K steps and halved every 10K steps Beam search with size 5 is applied to search for optimal answers Following previous work BIBREF2 BIBREF14 BIBREF11 we use the standard F1 scores of ROUGE1 ROUGE2 and ROUGEL BIBREF16 to evaluate the selected templates and generated summaries where the official ROUGE script is applied We employ the normalized discounted cumulative gain NDCG BIBREF20 from information retrieval to evaluate the Fast Rerank module In this section we report our experimental results with thorough analysis and discussions The Retrieve module is intended to narrow down the search range for a best template We evaluated this module by considering three types of templates a Random means a randomly selected summary from the training corpus b Retrievetop is the highestranked summary by Retrieve c NOptimal means among the INLINEFORM0 top search results the template is specified as the summary with largest ROUGE score with gold summary As the results show in Table TABREF40 randomly selected templates are totally irrelevant and unhelpful When they are replaced by the Retrievetop templates the results improve apparently demonstrating the relatedness of topranked summaries to gold summaries Furthermore when the NOptimal templates are used additional improvements can be observed as INLINEFORM0 grows This trend is also confirmed by Figure FIGREF39 in which the ROUGE scores increase before 30 and stabilize afterwards These results suggest that the ranges given by Retrieve indeed help to find quality templates In Section SECREF20 we also explored three alternative approaches to integrating an article with its template The results are shown in Table TABREF44 from which we can note that none of these approaches help yield satisfactory performance Even though DCN Attention works impressively in machine reading comprehension it performs even worse in this task than the simple concatenation We conjecture the reason is that the DCN Attention attempts to fuse the template information into an article as in machine reading comprehension rather than selects key information from the two to form an enhanced article representation Our model is designed for both accuracy and efficiency Due to the parallelizable nature of CNN the Fast Rerank module only takes about 30 minutes for training and 3 seconds for inference on the whole test set The BiSET model takes about 8 hours for training GPUGTX 1080 6 times faster than INLINEFORM0 BIBREF11 The purpose of this study is to examine the roles of the bidirectional selective layer and its two gates Firstly we removed the selective layer and replaced it with the direct concatenation of an article with its template representation As the results show in Table TABREF51 the model performs even worse than some ordinary sequencetosequence models in Table TABREF46 The reason might be that templates would overwhelm the original article representations and become noise after concatenation Then we 
removed the TemplatetoArticle T2A gate and as a result the model shows a great decline in performance indicating the importance of templates in article representations Finally when we removed the ArticletoTemplate A2T gate whose role is to control the weight of T2A in article representations only a small performance decline is observed This may suggest that the T2A gate alone can already capture most of the important article information while A2T plays some supplemental role We then carried out a human evaluation to evaluate the generated summaries from another perspective Our evaluators include 8 graduate students and 4 senior undergraduates while the dataset is 100 randomlyselected articles from the test set Each sample in this dataset also includes 1 reference summary 5 summaries generated by OpenNMT BIBREF21 INLINEFORM0 BIBREF11 and BiSET under three settings respectively and 3 randomlyselected summaries for trapping We asked the evaluators to independently rate each summary on a scale of 1 to 5 with respect to its quality in informativity conciseness and readability While collecting the results we rejected the samples in which more than half evaluators rate the informativity of the reference summary below 3 We also rejected the samples in which the informativity of a randomlyselected summary is scored higher than 3 Finally we obtained 43 remaining samples and calculated an average score for each aspect As the results show in Table TABREF55 our model not only performs much better than the baselines it also shows quite comparable performance with the reference summaries In Table TABREF56 we present two real examples which show the templates found by our model are indeed related to the source articles and with their aid our model succeeds to keep the main content of the source articles for summarization while discarding unrelated words like US and Olympic Games Abstractive sentence summarization a task analogous to headline generation or sentence compression aims to generate a brief summary given a short source article Early studies in this problem mainly focus on statistical or linguisticrulebased methods including those based on extractive and compression BIBREF23 BIBREF24 BIBREF25 templates BIBREF6 and statistical machine translation BIBREF26 The advent of largescale summarization corpora accelerates the development of various neural network methods Rush2015A first applied an attentionbased sequencetosequence model for abstractive summarization which includes a convolutional neural network CNN encoder and a feedforward network decoder Chopra2016Abstractive replaced the decoder with a recurrent neural network RNN Nallapati2016Abstractive further changed the sequencetosequence model to a fully RNNbased model Besides Gu2016Incorporating found that this task benefits from copying words from the source articles and proposed the CopyNet correspondingly With a similar purpose Gulcehre2016Pointing proposed to use a switch gate to control when to copy from the source article and when to generate from the vocabulary Zhou2017Selective employed a selective gate to filter out unimportant information when encoding Some other work attempts to incorporate external knowledge for abstractive summarization For example Nallapati2016Abstractive proposed to enrich their encoder with handcrafted features such as named entities and partofspeech POS tags guu2018generating also attempted to encode humanwritten sentences to improve neural text generation Similar to our work cao2018retrieve proposed to retrieve 
a related summary from the training set as a soft template to assist with the summarization. However, their approach tends to oversimplify the role of the template by directly concatenating a template after the source article encoding. In contrast, our bi-directional selective mechanism exhibits a novel attempt to select key information from the article and the template in a mutual manner, offering greater flexibility in using the template. In this paper, we presented a novel Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. To counteract the verbosity and insufficiency of training data, we proposed to retrieve high-quality existing summaries as templates to assist with source article representations through an ingenious bi-directional selective layer. The enhanced article representations are expected to contribute towards better summarization eventually. We also developed the corresponding retrieval and reranking modules for obtaining quality templates. Extensive evaluations were conducted on a standard benchmark dataset, and experimental results show that our model can quickly pick out high-quality templates from the training corpus, laying a key foundation for effective article representations and summary generation. The results also show that our model outperforms all the baseline models and sets a new state of the art. An ablation study validates the role of the bi-directional selective layer, and a human evaluation further proves that our model can generate informative, concise and readable summaries. The paper was partially supported by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355) and the Key R&D Program of Guangdong Province (2019B010120001). </s>
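As a concrete reading of the bi-directional selective layer described above, the following PyTorch sketch implements the template-to-article (T2A) gate and the article-to-template (A2T) confidence degree over BiRNN states, followed by the weighted sum of filtered and original article representations. It follows the textual description only; the projection shapes, bias placement and module layout are assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class BiSelectiveLayer(nn.Module):
    """Sketch of the bi-directional selective layer.

    h_art: article BiRNN states, shape (batch, src_len, 2*hidden)
    h_tpl: template BiRNN states, shape (batch, tpl_len, 2*hidden)
    """
    def __init__(self, hidden_size):
        super().__init__()
        d = 2 * hidden_size                       # BiRNN output size
        # Template-to-Article (T2A) gate parameters
        self.w_art = nn.Linear(d, d, bias=False)
        self.w_tpl = nn.Linear(d, d, bias=True)
        # Article-to-Template (A2T) confidence-degree parameters
        self.w_conf = nn.Linear(2 * d, 1, bias=True)

    @staticmethod
    def _summary(states):
        # Concatenate the last forward and first backward hidden states.
        hidden = states.size(-1) // 2
        fwd_last = states[:, -1, :hidden]
        bwd_first = states[:, 0, hidden:]
        return torch.cat([fwd_last, bwd_first], dim=-1)

    def forward(self, h_art, h_tpl):
        s_tpl = self._summary(h_tpl).unsqueeze(1)            # (batch, 1, d)
        s_art = self._summary(h_art)                          # (batch, d)

        # T2A gate: the template filters the article representation.
        gate = torch.sigmoid(self.w_art(h_art) + self.w_tpl(s_tpl))
        h_filtered = h_art * gate                              # (batch, src_len, d)

        # A2T gate: confidence degree controlling how much filtering to trust.
        conf = torch.sigmoid(
            self.w_conf(torch.cat([s_art, s_tpl.squeeze(1)], dim=-1)))
        conf = conf.unsqueeze(1)                               # (batch, 1, 1)

        # Weighted sum of filtered and original article representations.
        return conf * h_filtered + (1.0 - conf) * h_art
```

Keeping the original representation in the weighted sum is what lets the model fall back on the source article when a low-quality template is retrieved.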
<s>[INST] Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task [/INST] The Transformer network BIBREF3 is a neural sequencetosequence model that has achieved stateoftheart results in machine translation However Transformer models tend to be very large typically consisting of hundreds of millions of parameters As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference using Transformer networks may be prohibitively expensive in scenarios with constrained resources For the 2019 Workshop on Neural Generation of Text WNGT Efficiency shared task BIBREF0 the Notre Dame Natural Language Processing NDNLP group looked at a method of inducing sparsity in parameters called autosizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance Autosizing first introduced by BIBREF1 uses group regularizers to encourage parameter sparsity When applied over neurons it can delete neurons in a network and shrink the total number of parameters A nice advantage of autosizing is that it is independent of model architecture although we apply it to the Transformer network in this task it can easily be applied to any other neural architecture NDNLPs submission to the 2019 WNGT Efficiency shared task uses a standard recommended baseline Transformer network Following BIBREF2 we investigate the application of autosizing to various portions of the network Differing from their work the shared task used a significantly larger training dataset from WMT 2014 BIBREF4 as well as the goal of reducing model size even if it impacted translation performance Our best system was able to prune over 25 of the parameters yet had a BLEU drop of only 11 points This translates to over 25 million parameters pruned and saves almost 100 megabytes of disk space to store the model Autosizing is a method that encourages sparsity through use of a group regularizer Whereas the most common applications of regularization will act over parameters individually a group regularizer works over groupings of parameters For instance applying a sparsity inducing regularizer to a twodimensional parameter tensor will encourage individual values to be driven to 00 A sparsityinducing group regularizer will act over defined substructures such as entire rows or columns driving the entire groups to zero Depending on model specifications one row or column of a tensor in a neural network can correspond to one neuron in the model Following the discussion of BIBREF1 and BIBREF2 autosizing works by training a neural network while using a regularizer to prune units from the network minimizing W are the parameters of the model and R is a regularizer Here as with the previous work we experiment with two regularizers The optimization is done using proximal gradient descent BIBREF5 which alternates between stochastic gradient descent steps and proximal steps The Transformer network BIBREF3 is a sequencetosequence model in which both the encoder and the decoder consist of stacked selfattention layers The multihead attention uses two affine transformations followed by a softmax layer Each layer has a positionwise feedforward neural network FFN with a hidden layer of rectified linear units Both the multihead attention and the feedforward neural network have residual connections that allow information to bypass those layers In addition there are also word and position embeddings Figure FIGREF1 taken from the 
original paper shows the architecture NDNLPs submission focuses on the N stacked encoder and decoder layers The Transformer has demonstrated remarkable success on a variety of datasets but it is highly overparameterized For example the baseline Transformer model has more than 98 million parameters but the English portion of the training data in this shared task has only 116 million tokens and 816 thousand types Early NMT models such as BIBREF6 have most of their parameters in the embedding layers but the transformer has a larger percentage of the model in the actual encoder and decoder layers Though the group regularizers of autosizing can be applied to any parameter matrix here we focus on the parameter matrices within the encoder and decoder layers We note that there has been some work recently on shrinking networks through pruning However these differ from autosizing as they frequently require an arbitrary threshold and are not included during the training process For instance BIBREF7 prunes networks based off a variety of thresholds and then retrains a model BIBREF8 also look at pruning but of attention heads specifically They do this through a relaxation of an ell 0 regularizer in order to make it differentiable This allows them to not need to use a proximal step This method too starts with pretrained model and then continues training BIBREF9 also look at pruning attention heads in the transformer However they too use thresholding but only apply it at test time Autosizing does not require a thresholding value nor does it require a pretrained model Of particular interest are the large positionwise feedforward networks in each encoder and decoder layer W1 and W2 are two large affine transformations that take inputs from D dimensions to 4D then project them back to D again These layers make use of rectified linear unit activations which were the focus of autosizing in the work of BIBREF1 No theory or intuition is given as to why this value of 4D should be used Following BIBREF2 we apply the autosizing method to the Transformer network focusing on the two largest components the feedforward layers and the multihead attentions blue and orange rectangles in Figure FIGREF1 Remember that since there are residual connections allowing information to bypass the layers we are autosizing information can still flow through the network even if the regularizer drives all the neurons in a layer to zero effectively pruning out an entire layer All of our models are trained using the fairseq implementation of the Transformer BIBREF10 For the regularizers used in autosizing we make use of an opensource proximal gradient toolkit implemented in PyTorch BIBREF2 For each minibatch update the stochastic gradient descent step is handled with a standard PyTorch forwardbackward call Then the proximal step is applied to parameter matrices We used the originally proposed transformer architecture with six encoder and six decoder layers Our model dimension was 512 and we used 8 attention heads The feedforward network subcomponents were of size 2048 All of our systems were run using subword units BPE with 32000 merge operations on concatenated source and target training data BIBREF11 We clip norms at 01 use label smoothed crossentropy with value 01 and an early stopping criterion when the learning rate is smaller than 105 We used the Adam optimizer BIBREF12 a learning rate of 104 and dropout of 01 Following recommendations in the fairseq and tensor2tensor BIBREF13 code bases we apply layer normalization before a 
subcomponent as opposed to after At test time we decoded using a beam of 5 with length normalization BIBREF14 and evaluate using casesensitive tokenized BLEU BIBREF15 For the autosizing experiments we looked at both ell 21 and ell infty 1 regularizers We experimented over a range of regularizer coefficient strengths lambda that control how large the proximal gradient step will be Similar to BIBREF1 but differing from BIBREF16 we use one value of lambda for all parameter matrices in the network We note that different regularization coefficient values are suited for different types or regularizers Additionally all of our experiments use the same batch size which is also related to lambda We applied autosizing to the subcomponents of the encoder and decoder layers without touching the word or positional embeddings Recall from Figure FIGREF1 that each layer has multihead attention and feedforward network subcomponents In turn each multihead attention subcomponent is comprised of two parameter matrices Similarly each feedforward network has two parameter matrices W1 and W2 We looked at three main experimental configurations All Autosizing is applied to every multihead attention and feedforward network subcomponent in every layer of the encoder and decoder Encoder As with All autosizing is applied to both multihead attention and feedforward network subcomponents but only in the encoder layers The decoder remains the same FFN Autosizing applied only to the feedforward network subcomponents W1 and W2 but not to the multihead portions This too is applied to both the encoder and decoder Our results are presented in Table TABREF6 The baseline system has 982 million parameters and a BLEU score of 279 on newstest2015 It takes up 375 megabytes on disk Our systems that applied autosizing only to the feedforward network subcomponents of the transformer network maintained the best BLEU scores while also pruning out the most parameters of the model Overall our best system used ell 2110 regularization for autosizing and left 731 million parameters remaining On disk the model takes 279 megabytes to store roughly 100 megabytes less than the baseline The performance drop compared to the baseline is 11 BLEU points but the model is over 25 smaller Applying autosizing to the multihead attention and feedforward network subcomponents of only the encoder also pruned a substantial amount of parameters Though this too resulted in a smaller model on disk the BLEU scores were worse than autosizing just the feedforward subcomponents Autosizing the multihead attention and feedforward network subcomponents of both the encoder and decoder actually resulted in a larger model than the encoder only but with a lower BLEU score Overall our results suggest that the attention portion of the transformer network is more important for model performance than the feedforward networks in each layer In this paper we have investigated the impact of using autosizing on the transformer network of the 2019 WNGT efficiency task We were able to delete more than 25 of the parameters in the model while only suffering a modest BLEU drop In particular focusing on the parameter matrices of the feedforward networks in every layer of the encoder and decoder yielded the smallest models that still performed well A nice aspect of our proposed method is that the proximal gradient step of autosizing can be applied to a wide variety of parameter matrices Whereas for the transformer the largest impact was on feedforward networks within a layer should a new 
architecture emerge in the future, auto-sizing can be easily adapted to the trainable parameters. Overall, NDNLP's submission has shown that auto-sizing is a flexible framework for pruning parameters in a large NMT system. With an aggressive regularization scheme, large portions of the model can be deleted with only a modest impact on BLEU scores. This in turn yields a much smaller model on disk and at runtime. This research was supported in part by University of Southern California subcontract 67108176 under DARPA contract HR001115C0115. </s>
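The alternating optimization behind auto-sizing, a stochastic gradient step followed by a proximal step, can be illustrated with the row-wise block soft-thresholding that the l2,1 group regularizer induces. This is a minimal sketch rather than the open-source proximal gradient toolkit used in the submission; treating matrix rows as the neuron groups, and the parameter filter in the usage note, are assumptions.

```python
import torch

def prox_l21_(weight, reg_strength, lr):
    """In-place proximal step for a row-wise l2,1 group regularizer.

    Applied after the ordinary gradient update: each row (assumed here to
    correspond to one neuron) is shrunk toward zero, and rows whose l2 norm
    falls below lr * reg_strength are zeroed out entirely, which is what
    prunes units from the network.
    """
    with torch.no_grad():
        threshold = lr * reg_strength
        row_norms = weight.norm(p=2, dim=1, keepdim=True)           # (rows, 1)
        scale = torch.clamp(1.0 - threshold / (row_norms + 1e-12), min=0.0)
        weight.mul_(scale)

# Usage sketch inside a training loop (the name filter is a placeholder):
# loss.backward()
# optimizer.step()                          # stochastic gradient step
# for name, p in model.named_parameters():
#     if "fc" in name and p.dim() == 2:     # e.g. feed-forward matrices only
#         prox_l21_(p, reg_strength=LAMBDA, lr=LR)
```

Because residual connections bypass each sub-component, an entire feed-forward layer can be driven to zero by this step without cutting off information flow through the network.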
<s>[INST] Neural Network Translation Models for Grammatical Error Correction [/INST] Grammatical error correction GEC is a challenging task due to the variability of the type of errors and the syntactic and semantic dependencies of the errors on the surrounding context Most of the grammatical error correction systems use classification and rulebased approaches for correcting specific error types However these systems use several linguistic cues as features The standard linguistic analysis tools like partofspeech POS taggers and parsers are often trained on wellformed text and perform poorly on ungrammatical text This introduces further errors and limits the performance of rulebased and classification approaches to GEC As a consequence the phrasebased statistical machine translation SMT approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from errorcorrected parallel corpora without any additional linguistic information They are also not limited to specific error types Currently many stateoftheart GEC systems are based on SMT or use SMT components for error correction BIBREF0 BIBREF1 BIBREF2 In this paper grammatical error correction includes correcting errors of all types including word choice errors and collocation errors which constitute a large class of learners errors We model our GEC system based on the phrasebased SMT approach However traditional phrasebased SMT systems treat words and phrases as discrete entities We take advantage of continuous space representation by adding two neural network components that have been shown to improve SMT systems BIBREF3 BIBREF4 These neural networks are able to capture nonlinear relationships between source and target sentences and can encode contextual information more effectively Our experiments show that the addition of these two neural networks leads to significant improvements over a strong baseline and outperforms the current state of the art In the past decade there has been increasing attention on grammatical error correction in English mainly due to the growing number of English as Second Language ESL learners around the world The popularity of this problem in natural language processing research grew further through Helping Our Own HOO and the CoNLL shared tasks BIBREF5 BIBREF6 BIBREF7 BIBREF8 Most published work in GEC aimed at building specific classifiers for different error types and then use them to build hybrid systems BIBREF9 BIBREF10 One of the first approaches of using SMT for GEC focused on correction of countability errors of mass nouns eg many informations INLINEFORM0 much information BIBREF11 They had to use an artificially constructed parallel corpus for training their SMT system Later the availability of largescale error corrected data BIBREF12 further improved SMTbased GEC systems Recently continuous space representations of words and phrases have been incorporated into SMT systems via neural networks Specifically addition of monolingual neural network language models BIBREF13 BIBREF14 neural network joint models NNJM BIBREF4 and neural network global lexicon models NNGLM BIBREF3 have been shown to be useful for SMT Neural networks have been previously used for GEC as a language model feature in the classification approach BIBREF15 and as a classifier for article error correction BIBREF16 Recently a neural machine translation approach has been proposed for GEC BIBREF17 This method uses a recurrent neural network to perform sequencetosequence mapping 
from erroneous to wellformed sentences Additionally it relies on a postprocessing step based on statistical wordbased translation models to replace outofvocabulary words In this paper we investigate the effectiveness of two neural network models NNGLM and NNJM in SMTbased GEC To the best of our knowledge there is no prior work that uses these two neural network models for SMTbased GEC In this paper the task of grammatical error correction is formulated as a translation task from the language of bad English to the language of good English That is the source sentence is written by a second language learner and potentially contains grammatical errors whereas the target sentence is the corrected fluent sentence We use a phrasebased machine translation framework BIBREF18 for translation which employs a loglinear model to find the best translation INLINEFORM0 given a source sentence INLINEFORM1 The best translation is selected according to the following equation INLINEFORM2 where INLINEFORM0 is the number of features INLINEFORM1 and INLINEFORM2 are the INLINEFORM3 th feature function and feature weight respectively We make use of the standard features used in phrasebased translation without any reordering leading to monotone translations The features can be broadly categorized as translation model and language model features The translation model in the phrasebased machine translation framework is trained using parallel data ie sentencealigned erroneous source text and corrected target text The translation model is responsible for finding the best transformation of the source sentence to produce the corrected sentence On the other hand the language model is trained on wellformed English text and this ensures the fluency of the corrected text To find the optimal feature weights INLINEFORM4 we use minimum error rate training MERT maximizing the INLINEFORM5 measure on the development set BIBREF2 The INLINEFORM6 measure BIBREF19 which weights precision twice as much as recall is the evaluation metric widely used for GEC and was the official evaluation metric adopted in the CoNLL 2014 shared task BIBREF8 Additionally we augment the feature set by adding two neural network translation models namely a neural network global lexicon model BIBREF3 and a neural network joint model BIBREF4 These models are described in detail in Sections SECREF4 and SECREF5 A global lexicon model is used to predict the presence of words in the corrected output The model estimates the overall probability of a target hypothesis ie a candidate corrected sentence given the source sentence by making use of the probability computed for each word in the hypothesis The individual word probabilities can be computed by training density estimation models such as maximum entropy BIBREF20 or probabilistic neural networks BIBREF3 Following BIBREF3 we formulate our global lexicon model using a feedforward neural network The model and the training algorithm are described below The probability of a target hypothesis is computed using the following equation DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the source sentence and the target hypothesis respectively and INLINEFORM2 denotes the number of words in the target hypothesis INLINEFORM3 is the probability of the target word INLINEFORM4 given the source sentence INLINEFORM5 INLINEFORM6 is the output of the neural network The architecture of the neural network is shown in Figure FIGREF3 INLINEFORM7 is calculated by where INLINEFORM0 is the hidden layer output and INLINEFORM1 and 
INLINEFORM2 are the output layer weights and biases respectively INLINEFORM3 is the elementwise sigmoid function which scales the output to INLINEFORM4 INLINEFORM0 is computed by the following equation INLINEFORM1 where INLINEFORM0 is the activation function and INLINEFORM1 and INLINEFORM2 are the hidden layer weights and biases applied on a binary bagofwords representation of the input sentence denoted by INLINEFORM3 The size of INLINEFORM4 is equal to the size of the source vocabulary INLINEFORM5 and each element indicates the presence or absence denoted by 1 or 0 respectively of a given source word The probability of the target hypothesis INLINEFORM0 given the source sentence INLINEFORM1 is estimated by the following equation DISPLAYFORM0 where INLINEFORM0 is the number of words in the target sentence INLINEFORM1 is the INLINEFORM2 th target word and INLINEFORM3 is the context history for the target word INLINEFORM4 The context INLINEFORM5 consists of a set of INLINEFORM6 source words represented by INLINEFORM7 and INLINEFORM8 words preceding INLINEFORM9 from the target sentence represented by INLINEFORM10 The context words from the source side are the words in the window of size INLINEFORM11 surrounding the source word INLINEFORM12 that is aligned to the target word INLINEFORM13 The output of the neural network INLINEFORM14 is the output of the final softmax layer which is given by the following equation DISPLAYFORM0 where INLINEFORM0 is the output of the neural network before applying softmax and INLINEFORM1 is given by following equation INLINEFORM2 The output of the neural network before softmax is computed by applying output layer weights INLINEFORM0 and biases INLINEFORM1 to the hidden layer output INLINEFORM2 INLINEFORM3 INLINEFORM0 is computed by applying weights INLINEFORM1 and biases INLINEFORM2 on the hidden layer input INLINEFORM3 and using a nonlinear activation function INLINEFORM4 INLINEFORM5 The input to the hidden layer INLINEFORM0 is a concatenated vector of context word embeddings INLINEFORM1 where INLINEFORM0 and INLINEFORM1 are the onehot representations of the source word INLINEFORM2 and the target word INLINEFORM3 respectively Similarly INLINEFORM4 and INLINEFORM5 are the word embeddings matrices for the source words and the target words As we use log probabilities instead of raw probabilities in our GEC system Equation EQREF11 can be rewritten as the following DISPLAYFORM0 Finally since the network is trained by Noise Contrastive Estimation NCE described in Section SECREF14 it becomes selfnormalized This means that INLINEFORM0 will be approximately 1 and hence the raw output of the neural network INLINEFORM1 can be directly used as the log probabilities during decoding The model is trained using minibatch gradient descent with backpropagation We use binary cross entropy Equation EQREF5 as the cost function DISPLAYFORM0 where INLINEFORM0 refers to the binary bagofwords representation of the reference target sentence and INLINEFORM1 is the target vocabulary Each minibatch is composed of a fixed number of sentence pairs INLINEFORM2 The training algorithm repeatedly minimizes the cost function calculated for a given minibatch by updating the parameters according to the gradients To avoid the costly softmax layer and thereby speed up both training and decoding we use Noise Contrastive Estimation NCE following BIBREF14 During training the negative log likelihood cost function is modified to a probabilistic binary classifier which learns to discriminate between the 
actual target word and INLINEFORM0 random words noisy samples per training instance selected from a noise distribution INLINEFORM1 The two classes are INLINEFORM2 indicating that the word is the target word and INLINEFORM3 indicating that the word is a noisy sample The conditional probabilities for INLINEFORM4 and INLINEFORM5 given a target word and context is given by INLINEFORM6 INLINEFORM7 where INLINEFORM0 is the model probability given in Equation EQREF12 The negative log likelihood cost function is replaced by the following function INLINEFORM1 where INLINEFORM0 refers to the INLINEFORM1 th noise sample for the target word INLINEFORM2 INLINEFORM3 is required for the computation of the neural network output INLINEFORM4 However setting the term INLINEFORM5 to 1 during training forces the output of the neural network to be selfnormalized Hence Equation EQREF13 reduces to DISPLAYFORM0 Using Equation EQREF15 avoids the expensive softmax computation in the final layer and consequently speeds up decoding Since the prior probability of observing a particular word in a sentence is usually a small number the probabilistic output of NNGLM can be biased towards zero This bias can hurt the performance of our system and therefore we try to alleviate this problem by rescaling the output after training NNGLM Our solution is to map the output probabilities to a new probability space by fitting a logistic function on the output Formally we use Equation EQREF7 as the mapping function DISPLAYFORM0 where INLINEFORM0 is the rescaled probability and INLINEFORM1 and INLINEFORM2 are the parameters For each sentence pair INLINEFORM3 in the development set we collect training instances of the form INLINEFORM4 for every word INLINEFORM5 in the target vocabulary where INLINEFORM6 and INLINEFORM7 The value of INLINEFORM8 is set according to the presence INLINEFORM9 or absence INLINEFORM10 of the word INLINEFORM11 in the target sentence INLINEFORM12 We use weighted cross entropy loss function with INLINEFORM13 regularization to train INLINEFORM14 and INLINEFORM15 on the development set INLINEFORM16 Here INLINEFORM0 is the number of training samples INLINEFORM1 is the probability of INLINEFORM2 computed by INLINEFORM3 and INLINEFORM4 and INLINEFORM5 are the weights assigned to the two classes INLINEFORM6 and INLINEFORM7 respectively In order to balance the two classes we weight each class inversely proportional to class frequencies in the training data Equation EQREF8 to put more weight on the less frequent class DISPLAYFORM0 In Equation EQREF8 INLINEFORM0 and INLINEFORM1 are the number of samples in each class After training the rescaling model we use INLINEFORM2 and INLINEFORM3 to calculate INLINEFORM4 according to Equation EQREF7 Finally we use INLINEFORM5 instead of INLINEFORM6 in Equation EQREF2 Joint models in translation augment the context information in language models with words from the source sentence A neural network joint model NNJM BIBREF4 uses a neural network to model the word probabilities given a context composed of source and target words NNJM can scale up to large order of ngrams and still perform well because of its ability to capture semantic information through continuous space representations of words and to learn nonlinear relationship between source and target words Unlike the global lexicon model NNJM uses a fixed window from the source side and take sequence information of words into consideration in order to estimate the probability of the target word The model and the training method 
are described below We describe our experimental setup including the description of the data we used the configuration of our baseline system and the neural network components and the evaluation method in Section SECREF16 followed by the results and discussion in Section SECREF23 We use the popular phrasebased machine translation toolkit Moses as our baseline SMT system NUCLE BIBREF21 which is the official training data for the CoNLL 2013 and 2014 shared tasks is used as the parallel text for training Additionally we obtain parallel corpora from Lang8 Corpus of Learner English v10 BIBREF12 which consists of texts written by ESL English as Second Language learners on the language learning platform Lang8 We use the test data for the CoNLL 2013 shared task as our development data The statistics of the training and development data are given in Table TABREF19 Source side refers to the original text written by the ESL learners and target side refers to the corresponding corrected text handcorrected by humans The source side and the target side are sentencealigned and tokenized We train the translation model for our SMT system using a concatenation of NUCLE and Lang8 v10 parallel data The training data is cleaned up by removing sentence pairs in which either the source or the target sentence is empty or is too long greater than 80 tokens or violate a 91 sentence ratio limit The translation model uses the default features in Moses which include the forward and inverse phrase translation probabilities forward and inverse lexical weights word penalty and phrase penalty We compute the phrase alignments using standard tools in Moses We use two language model features a 5gram language model trained using the target side of NUCLE used for training the translation model and a 5gram language model trained using English Wikipedia INLINEFORM0 178 billion tokens Both language models are estimated with KenLM using modified KneserNey smoothing We use MERT for tuning the feature weights by optimizing the INLINEFORM1 measure which weights precision twice as much as recall This system constitutes our baseline system in Table TABREF24 Our baseline system uses exactly the same training data as BIBREF0 for training the translation model and the language model The difference between our baseline system and the SMT components of BIBREF0 is that we tune with INLINEFORM2 instead of BLEU and we use the standard Moses configuration without the Levenshtein distance feature On top of our baseline system described above we incorporate the two neural network components neural network global lexicon model NNGLM and neural network joint model NNJM as features Both NNGLM and NNJM are trained using the parallel data used to train the translation model of our baseline system We implement NNGLM using the Theano library in Python in order to make use of parallelization with GPUs thus speeding up training significantly We use a source and target vocabulary of 10000 most frequent words on both sides We use a single hidden layer neural network with 2000 hidden nodes We use INLINEFORM0 as the activation function for the hidden layer We optimize the model weights by stochastic gradient descent using a minibatch size of 100 and a learning rate of 10 We train the model for 45 epochs The logistic regression function for rescaling is trained using the probabilities obtained from this model on the development set To speed up tuning and decoding we precompute the probabilities of target words using the source side sentences of the development 
and the test sets respectively We implement a feature function in Moses to compute the probability of a target hypothesis given the source sentence using the precomputed probabilities To train NNJM we use the publicly available implementation Neural Probabilistic Language Model NPLM BIBREF14 The latest version of Moses can incorporate NNJM trained using NPLM as a feature while decoding Similar to NNGLM we use the parallel text used for training the translation model in order to train NNJM We use a source context window size of 5 and a target context window size of 4 We select a source context vocabulary of 16000 most frequent words from the source side The target context vocabulary and output vocabulary is set to the 32000 most frequent words We use a single hidden layer to speed up training and decoding with an input embedding dimension of 192 and 512 hidden layer nodes We use rectified linear units ReLU as the activation function We train NNJM with noise contrastive estimation with 100 noise samples per training instance which are obtained from a unigram distribution The neural network is trained for 30 epochs using stochastic gradient descent optimization with a minibatch size of 128 and learning rate of 01 We conduct experiments by incorporating NNGLM and NNJM both independently and jointly into our baseline system The results of our experiments are described in Section SECREF23 The evaluation is performed similar to the CoNLL 2014 shared task setting using the the official test data of the CoNLL 2014 shared task with annotations from two annotators without considering alternative annotations suggested by the participating teams The test dataset consists of 1312 errorannotated sentences with 30144 tokens on the source side We make use of the official scorer for the shared task M INLINEFORM0 Scorer v32 BIBREF19 for evaluation We perform statistical significance test using onetailed sign test with bootstrap resampling on 100 samples Table TABREF24 presents the results of our experiments with neural network global lexicon model NNGLM and neural network joint model NNJM We see that the addition of both NNGLM and NNJM to our baseline individually improves INLINEFORM0 measure on the CoNLL 2014 test set by 043 and 080 respectively Although both improvements over the baseline are statistically significant with INLINEFORM1 we observe that the improvement of NNGLM is slightly lower than that of NNJM NNGLM encodes the entire lexical information from the source sentence without word ordering information Hence it focuses mostly on the choice of words appearing in the output Many of the words in the source context may not be necessary for ensuring the quality of corrected output On the other hand NNJM looks at a smaller window of words in the source side NNJM can act as a language model and can ensure a fluent translation output compared to NNGLM We also found rescaling to be important for NNGLM because of imbalanced training data While the most frequent words in the data I and to appear in 43 and 27 of the training sentences respectively most words occur in very few sentences only For example the word set appears in 015 of the sentences and the word enterprise appears in 0003 of the sentences By incorporating both components together we obtain an improvement of 117 in terms of INLINEFORM0 measure This indicates that both components are beneficial and complement each other to improve the performance of the baseline system While NNGLM looks at the entire source sentence and ensures the appropriate 
choice of words to appear in the output sentence NNJM encourages the system to choose appropriate corrections that give a fluent output We compare our system to the top 3 systems in the CoNLL 2014 shared task and to the best published results BIBREF17 BIBREF0 on the test data of the CoNLL 2014 shared task The results are summarized in Table TABREF26 Our final system including both neural network models outperforms the best system BIBREF17 by 185 in F INLINEFORM0 measure It should be noted that this is despite the fact that the system proposed in BIBREF17 uses much larger training data than our system We qualitatively analyze the output of our neural networkenhanced systems against the outputs produced by our baseline system We have included some examples in Table TABREF25 and the corresponding outputs of the baseline system and the reference sentences The selected examples show that NNGLM and NNJM choose appropriate words by making use of the surrounding context effectively Note that our neural networks which rely on fixed source and target vocabulary map the rare words and misspelled words to the UNK token Therefore phrases with the UNK token may get a higher probability than they actually should due to the large number of UNK tokens seen during training This leads to fewer spelling error corrections compared to the baseline system which does not employ these neural networks Consider the following example from the test data numerous profitdriven companies realize the hugh huge human traffic on such social media sites The spelling error hugh INLINEFORM0 huge is corrected by the baseline system but not by our final system with the neural networks This is because the misspelled word hugh is not in the neural network vocabulary and so it is mapped to the UNK token The sentence with the UNK token gets a higher score and hence the system chooses this output over the correct one From our experiments and analysis we see that NNGLM and NNJM capture contextual information better than regular translation models and language models This is because they make use of larger source sentence contexts and continuous space representation of words This enables them to make better predictions compared to traditional translation models and language models We also observed that our system has an edge over the baseline for correction of word choice and collocation errors Our experiments show that using the two neural network translation models improves the performance of a phrasebased SMT approach to GEC To the best of our knowledge this is the first work that uses these two neural network models for SMTbased GEC The ability of neural networks to model words and phrases in continuous space and capture nonlinear relationships enables them to generalize better and make more accurate grammatical error correction We have achieved stateoftheart results on the CoNLL 2014 shared task test dataset This has been done without using any additional training data compared to the best performing systems evaluated on the same dataset This research is supported by Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013T21150 </s>
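To make the hypothesis scoring described above more concrete, the following minimal numpy sketch shows how a global-lexicon-style feature and the F0.5 tuning metric could be computed. This is not the paper's Theano implementation: the helper names, parameter shapes, and the smoothing constant are illustrative assumptions, and trained weights (W_h, b_h, W_o, b_o) plus fitted rescaling parameters (a, b) are assumed to be given.

```python
import numpy as np

def bag_of_words(tokens, vocab):
    """Binary bag-of-words vector over the source vocabulary (unknown words ignored)."""
    x = np.zeros(len(vocab))
    for w in tokens:
        if w in vocab:
            x[vocab[w]] = 1.0
    return x

def nnglm_word_probs(x, W_h, b_h, W_o, b_o):
    """Per-target-word probabilities p(t_j | s): tanh hidden layer, element-wise sigmoid output."""
    h = np.tanh(W_h @ x + b_h)                       # hidden layer (e.g., 2000 units)
    return 1.0 / (1.0 + np.exp(-(W_o @ h + b_o)))    # one probability per target-vocabulary word

def rescale(p, a, b):
    """Logistic rescaling fitted on the development set to counteract the bias towards zero."""
    return 1.0 / (1.0 + np.exp(-(a * p + b)))

def nnglm_feature(src_tokens, hyp_tokens, src_vocab, tgt_vocab, params, a, b):
    """Log-probability of a target hypothesis: sum of log p'(t_j | s) over the hypothesis words."""
    x = bag_of_words(src_tokens, src_vocab)
    p = rescale(nnglm_word_probs(x, *params), a, b)
    score = 0.0
    for w in hyp_tokens:
        if w in tgt_vocab:                           # rare/misspelled words map to UNK in the real system
            score += np.log(p[tgt_vocab[w]] + 1e-12)
    return score

def f_beta(tp, fp, fn, beta=0.5):
    """F0.5 weights precision twice as much as recall, as in the CoNLL-2014 evaluation."""
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    if prec + rec == 0.0:
        return 0.0
    return (1 + beta ** 2) * prec * rec / (beta ** 2 * prec + rec)
```

In the full system, the resulting log-probability would simply enter the log-linear model as one more feature alongside the standard Moses features, with its weight tuned by MERT against the F0.5 measure.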
<s>[INST] Context-aware Deep Model for Entity Recommendation in Search Engine at Alibaba [/INST] Over the past few years major commercial search engines have enriched and improved the user experience by proactively presenting related entities for a query along with the regular web search results Figure FIGREF3 shows an example of Alibaba ShenMa search engines entity recommendation results presented on the panel of its mobile search result page Existing studies BIBREF0 BIBREF1 in entity recommendation typically consider the query containing explicit entities while ignoring those queries without entities A main common drawback of these approaches is that they cannot handle well the complex queries because they do not have informative evidence other than the entity itself for retrieving related entities with the same surface form Therefore existing entity recommendation systems tend to recommend entities with regard to the explicitly asked meaning ignoring those queries with implicit user needs Through analyzing hundreds of million unique queries from search logs with named entity recognition technology we have found that more than 50 of the queries do not have explicit entities In our opinion those queries without explicit entities are valuable for entity recommendation The queries convey insights into a users current information need which enable us to provide the user with more relevant entity recommendations and improve user experience For example a users search intent behind the query what food is good for cold weather could be a kind of food suitable to eat in cold weather However most of the entities recommended for the query are mainly based on entities existed in the query such as given the query cake and recommended those entities cupcakes chocolate and so on and there is no explicit entity called good food for cold weather at all It is very likely that the user is interested in the search engine that is able to recommend entities with arbitrary queries However recommending entities with such complex queries is extremely challenging At first many existing recommendation algorithms proven to work well on small problems but fail to operate on a large scale Highly specialized distributed learning algorithms and efficient serving systems are essential for handling search engines massive queries and candidate entities Secondly user queries are extremely complex and diverse and it is quite challenging to understand the users true intention Furthermore historical user behavior on the search engine is inherently difficult to predict due to sparsity and a variety of unobservable external factors We rarely obtain the ground truth of user satisfaction and instead model noisy implicit feedback signals In this paper we study the problem of contextaware entity recommendation and investigate how to utilize the queries without explicit entities to improve the entity recommendation quality Our approach is based on neural networks which maps both queries and candidate entities into vector space via largescale distributed training We evaluate our approach using largescale realworld search logs of a widely used commercial Chinese search engine Our system has been deployed in ShenMa Search Engine and you can experience this feature in UC Browser of Alibaba Results from online AB test involving a large number of real users suggest that the impression efficiency of clickthrough rate CTR increased by 51 and page view PV increased by 55 The main contributions of our paper are summarized as follows To the 
best of our knowledge we are the first approach to recommend entities for arbitrary queries in largescale Chinese search engine Our approach is flexible capable of recommending entities for billions of queries We conduct extensive experiments on largescale realworld search logs which shows the effectiveness of our approach in both offline evaluation and online AB test Previous work that is closest to our work is the task of entity recommendation Entity recommendation can be categorized into the following two categories First for query assistance for knowledge graphs BIBREF2 BIBREF3 GQBE BIBREF4 and Exemplar Queries BIBREF5 studied how to retrieve entities from a knowledge base by specifying example entities For example the input entity pair Jerry Yang Yahoo would help retrieve answer pairs such as Sergey Brin Google Both of them projected the example entities onto the RDF knowledge graph to discover result entities as well as the relationships around them They used an edgeweighted graph as the underlying model and subgraph isomorphism as the basic matching scheme which in general is costly Second to recommend related entities for search assistance BIBREF0 proposed a recommendation engine called Spark to link a users query word to an entity within a knowledge base and recommend a ranked list of the related entities To guide user exploration of recommended entities they also proposed a series of features to characterize the relatedness between the query entity and the related entities BIBREF6 proposed a similar entity search considering diversity BIBREF7 proposed to enhance the understandability of entity recommendations by captioning the results BIBREF8 proposed a number of memorybased methods that exploit user behaviors in search logs to recommend related entities for a users full search session BIBREF1 propose a model in a multitask learning setting where the query representation is shared across entity recommendation and contextaware ranking However none of those approaches take into account queries without entities Our objective is to infer entities given diverse and complex queries for search assistance Actually there are little research papers that focus on this issue In industry there are three simple approaches to handle those complex queries One is tagging the query and then recommend the relevant entities based on those tags However the tagging space is so huge that it is difficult to cover all domains The second method is to use the query recommendation algorithm to convert and disambiguate the queries into entities ignoring effect of error transmission from query recommendation The last approach is to recall entities from the clicked documents However not all queries have clicked documents To the best of our knowledge we are the first endtoend method that makes it possible to recommend entities with arbitrary queries in large scale Chinese search engine The overall structure of our entity recommendation system is illustrated in Figure FIGREF7 The system is composed of three modules query processing candidate generation and ranking The query processing module at first preprocesses the queries extract entities cannot extract any entities for complex queries and then conceptualize queries The candidate generation module takes the output of query processing module as input and retrieves a subset hundreds of entities from the knowledge graph For a simple query with entities we utilize heterogeneous graph embedding BIBREF9 to retrieve relative entities For those complex queries with 
little entities we propose a deep collaborative matching model to get relative entities These candidates are intended to be generally relevant to the query with high recall The candidate generation module only provides broad relativity via multicriteria matching The similarity between entities is expressed in terms of coarse features Presenting a few best recommendations in a list requires a finelevel representation to distinguish relative importance among candidates with high precision The ranking module accomplishes this task by type filtering learning to rank and clickthrough rate estimation We also utilize online learning algorithm including Thompson sampling to balance the exploitation and exploration in entity ranking In the final product representation of entity recommendation we utilize the concept of entities to cluster the different entities with the same concept in the same group to represent a better visual display and provide a better user experience In this paper we mainly focus on candidate generation the first stage of entity recommendation and present our approach red part in Figure FIGREF7 which can handle complex queries In this section we describe the large knowledge graph that we use to retrieve candidate entities and cognitive concept graph that we use to conceptualize queries and entities Shenma knowledge graph is a semantic network that contains ten million of entities thousand types and billions of triples It has a wide range of fields such as people education film tv music sports technology book app foodplant animal and so on It is rich enough to cover a large proportion of entities about worldly facts Entities in the knowledge graph are connected by a variety of relationships Based on Shenma knowledge graph we also construct a cognitive concept graph which contains millions of instances and concepts Different from Shenma knowledge graph cognitive concept graph is a probabilistic graph mainly focus on the IsA relationship For example robin isa bird and penguin isa bird Cognitive concept graph is helpful in entity conceptualization and query understanding In this section we first introduce the basics of the deep collaborative match and then elaborate on how we design the deep model architecture Traditionally major search engines recommend related entities based on their similarities to the main entity that the user searched BIBREF1 have detailed explained the procedure of entity recommendation in the search engine including entity linking related entity discovery and so on Unlike traditional methods we regard recommendation as largescale multiclassification where the prediction problem becomes how to accurately classify a specific entity ei among millions of entities from a knowledge graph V based on a users input query Q where qin mathbb RN is a highdimensional embedding of the users input query ujin mathbb RN represents each entity embedding and V is the entities from knowledge graph In this setting we map the sparse entity or query into a dense vector in mathbb RN Our deep neural model tries to learn the query embedding via the users history behavior which is useful for discriminating among entities with a softmax classier Through joint learning of entity embeddings and query embeddings the entity recommendation becomes the calculation of cosine similarity between entity vectors and query vectors Inspired by skipgram language models BIBREF10 we map the users input query to a dense vector representation and learn high dimensional embedding for each entity in a 
knowledge graph Figure FIGREF13 shows the architecture of the base deep match model Input Layer Input layer mainly contains the features from the input query we first use word segmentation tool to segment queries then fetch basic level tokens and semantic level tokens and finally combine all the input features via the embedding technique as shown below word embedding averaging the embedding of both the basic level tokens and semantic level tokens and the final embedding dimension is 128 ngram embedding inspired by fasttext BIBREF11 we add ngram n23 features to the input layer to import some local temporal information The dimension of ngram embedding is also 128 FullyConnected Layer Following the input layer we utilize three fully connected layers 512256128 with tanh activation function In order to speed up the training we add batch normalization to each layer Softmax Layer To efficiently train such a model with millions of classes we apply sampled softmax BIBREF12 in our model For each example the crossentropy loss is minimized for the true label and the sampled negative classes In practice we sample 5000 negatives instances Online Serving At the serving time we need to compute the most likely K classes entities in order to choose the top K to present to the user In order to recall the given number of entities within ten milliseconds we deploy the vector search engine under the offline building index In practice our model can generate query embedding within 5ms and recall related entities within 3ms The above base model also remains two problems of on the semantic representation of the input query 1 ignoring the global temporal information which is important for learning querys sentencelevel representation 2 different query tokens contribute equally to the final input embedding which is not a good hypnosis For example the entity token should be more important than other tokens such as stop words To address the first issue we adopt the Bidirectional LSTM model to encode the global and local temporal information At the same time with the attention mechanism our model can automatically learn the weights of different query tokens Figure FIGREF20 shows the enhanced deep match model architecture The proposed model consists of two parts The first is a Bidirectional LSTM and the second is the selfattention mechanism which provides weight vectors for the LSTM hidden states The weight vectors are dotted with the LSTM hidden states and the weighted LSTM hidden states are considered as an embedding for the input query Suppose the input query has n tokens represented with a sequence of word embeddings where wiin mathbb Rd is the word embedding for the ith token in the query Qin mathbb Rn times d is thus represented as a 2D matrix which concatenates all the word embeddings together To utilize the dependency between adjacent words within a single sentence we use the Bidirectional LSTM to represent the sentence and concatenate hif with hib to obtain the hidden state hi The number of LSTMs hidden unit is m For simplicity we concatenate all the hidden state hi as H in mathbb Rn times 2m Hh1h2cdots hn1hn With the selfattention mechanism we encode a variable length sentence into a fixed size embedding The attention mechanism takes the whole LSTM hidden states H as input and outputs the weights alpha in mathbb R1 times k where Win mathbb Rk times 2mUin mathbb R1 times kbin mathbb Rk Then we sum up the LSTM hidden states H according to the weight provided by alpha to get the final representation of the input 
query Note that the query embeddings and entity embeddings are all random initialized and trained from scratch We have huge amounts of training data which is capable of modeling the relativity between queries and entities In this section we illustrate how to generate the training samples to learn the queryentity match model Training samples are generated from query logs and knowledge graph which can be divided into four parts as shown below QueryClickEntity given a query choose the clicked entities with relatively high CTR In practice we collect thousand millions of data from the query logs in the past two months QueryDocEntity we assume that high clicked doc is well matched to the query and the entities in title or summary are also related to the query The procedure is 1 we first fetch the clicked documents with title and summary from the query log 2 extract entities from title and summary via name entity recognition 3 keep those highquality entities At last we collect millions of unique queries QueryQueryEntity given the text recommendations well results we utilize the entity linking method to extract entities from those results We also collect millions of unique queries QueryTagEntity as to some specific queries we will tag entity label to them and generate queryentity pairs Here we define hundreds of entity tags in advance After generating of queryentity pairs we adopt the following data prepossessing procedures lowquality filter We filter lowquality entities via some basic rules such as blacklist authority hotness importance and so on lowfrequency filter We filter lowfrequency entities highfrequency subsampling We make subsampling to those highfrequency entities shuffle We shuffle all samples Apart from user clicked data we construct millions of queryentity relevant pairs at the semantic level which are very important for the model to learn the querys semantic representation Finally we generate billions of queryentity pairs and about one thousand billion unique queries To evaluate the effectiveness of different methods we use PrecisionM following BIBREF13 Derive the recalled set of entities for a query u as Pu Pu M and the querys ground truth set as Gu PrecisionM are To evaluate the performance of our model we compare its performance with various baseline models From unseen and real online search click log we collect millions of queryentity pairs as our test set ground truth set The evaluation results are shown in Table TABREF31 DNN BIBREF14 is the base method with a DNN encoder ngram is method adding ngram features attBiLSTM is our method with BiLSTM encoder with attention mechanism The DNN BIBREF14 is a very famous recommendation baseline and we reimplement the algorithm and modify the model for entity recommendation setting Note that there are no other baselines of entity recommendation for complex queries with no entities at all attBiLSTM is slightly better than ngram The reasons are mainly that a certain percentage of queries is without order and ngram is enough to provide useful information Our approach achieves the comparable results in the offline evaluation These results indicate that our method benefits a lot from joint representation learning in queries and entities Note that we learn the embedding of queries and entities with random initialization We believe the performance can be further improved by adopting more complex sentence encoder such as BERTBIBREF15 and XLNetBIBREF16 and inductive bias from structure knowledgeBIBREF17 to enhance the entity representation which we 
plan to address in future work We perform largescale online AB test to show how our approach on entity recommendation helps with improving the performance of recommendation in realworld applications We first retrieve candidate entities by matching queries then we rank candidate entities by a clickthrough rate CTR prediction model and Thompson sampling The ranked entities are pushed to users in the search results of Alibaba UC Browser For online AB test we split users into buckets We observe and record the activities of each bucket for seven days We select two buckets with highly similar activities For one bucket we perform recommendation without the deep collaborative match model For another one the deep collaborative match model is utilized for the recommendation We run our AB test for seven days and compare the result The page view PV and clickthrough rate CTR are the two most critical metrics in realworld application because they show how many contents users read and how much time they spend on an application In the online experiment we observe a statistically significant CTR gain 51 and PV 55 These observations prove that the deep collaborative match for entity recommendation greatly benefits the understanding of queries and helps to match users with their potential interested entities better With the help of a deep collaborative match we can better capture the contained implicit users need in a query even if it does not explicitly have an entity Given more matched entities users spend more times and reading more articles in our search engine We make a qualitative analysis of the entity embeddings learned from scratch Interestingly we find that our approach is able to capture the restiveness of similar entities As Figure FIGREF26 shows the entities Beijing University Fudan University are similar to the entity Tsinghua University Those results demonstrate that our approachs impressive power of representation learning of entities It also indicates that the text is really helpful in representation learning in knowledge graph We also make a qualitative analysis of the query embeddings We find that our approach generates more discriminate query embedding for entity recommendation due to the attention mechanisms Specifically we randomly selected six queries from the search log and then visualize the attention weights as shown in Figure FIGREF37 Our approach is capable of emphasizing those relative words and deemphasizing those noisy terms in queries which boost the performance We give some examples of how our deep collaborative matching takes effect in entity recommendation for those complex queries In Figure FIGREF36 we display the most relative entities that are retrieved from the given queries We observe that 1 given the interrogative query what food is good for cold weather our model is able to understand the meaning of query and get the most relative entities Grain nutrition powder Almond milk 2 our model is able to handle short queries such as e52640 and i73770s which usually do not have the syntax of a written language or contain little signals for statistical inference 3 our model is able to infer some queries such as multiply six by the largest single digit greater than fourth that need commonsense number is mathematical terms which demonstrate the generalization of our approach 4 our approach can also handle multimodal queries the picture of baby walking feet outside and get promising results although in recent version of our model we do not consider the image representation in 
entity recommendation which indicates that our approach can model the presentation of queries which reveal the implicit need of users We believe the multimodal information images will further boost the performance which will be left for our future work In the entity recommendation system each entity may have different views For example when recommending entities relative to apple it may represent both fruits and technology products as the Figure FIGREF42 shows Actually different users have different intentions To give a better user experience we develop the conceptualized multidimensional recommendation shown in Figure FIGREF43 To be specific we utilize the concepts of candidate entities to cluster the entities in the same group to give a better visual display Those concepts are retrieved from our cognitive concept graph Online evaluation shows that conceptualized multidimensional recommendation has the total coverage of 498 in entity recommendation and also achieve more than 41 gain of CTR In this paper we study the problem of context modeling for improving entity recommendation To this end we develop a deep collaborative match model that learns representations from complex and diverse queries and entities We evaluate our approach using largescale realworld search logs of a widely used commercial search engine The experiments demonstrate that our approach can significantly improve the performance of entity recommendation Generally speaking the knowledge graph and cognitive concept graph can provide more prior knowledge in query understanding and entity recommendation In the future we plan to explore the following directions 1 we may combine our method with structure knowledge from knowledge graph and cognitive concept graph 2 we may combine rule mining and knowledge graph reasoning technologies to enhance the interpretability of entity recommendation 3 it will be promising to apply our method to other industry applications and further adapt to other NLP scenarios We would like to thank colleagues of our team Xiangzhi Wang Yulin Wang Liang Dong Kangping Yin Zhenxin Ma Yongjin Wang Qiteng Yang Wei Shen Liansheng Sun Kui Xiong Weixing Zhang and Feng Gao for useful discussions and supports on this work We are grateful to our cooperative team search engineering team We also thank the anonymous reviewers for their valuable comments and suggestions that help improve the quality of this manuscript </s>
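As an illustration of the enhanced deep match model described above, the sketch below pools Bi-LSTM hidden states with self-attention and retrieves the closest entities by cosine similarity. It is a simplified stand-in rather than the production system: the softmax normalization of the attention weights, the matching embedding dimensions, and all variable names are assumptions made for the example, and the brute-force similarity scan replaces the offline vector-search index used for online serving.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_query_embedding(H, W, U, b):
    """Self-attention pooling over Bi-LSTM hidden states.

    H: (n, 2m) hidden states for the n query tokens.
    W: (k, 2m), U: (1, k), b: (k,) attention parameters.
    Returns a fixed-size query embedding of dimension 2m.
    """
    scores = U @ np.tanh(W @ H.T + b[:, None])   # (1, n) unnormalized token scores
    alpha = softmax(scores.ravel())              # attention weights over query tokens
    return alpha @ H                             # weighted sum of hidden states, shape (2m,)

def top_k_entities(q, entity_matrix, entity_ids, k=10):
    """Return the k entities whose embeddings have the highest cosine similarity to the query."""
    q_norm = q / (np.linalg.norm(q) + 1e-12)
    E_norm = entity_matrix / (np.linalg.norm(entity_matrix, axis=1, keepdims=True) + 1e-12)
    sims = E_norm @ q_norm
    best = np.argsort(-sims)[:k]
    return [(entity_ids[i], float(sims[i])) for i in best]

# Toy usage with random values standing in for trained parameters and embeddings.
rng = np.random.default_rng(0)
n, m, k_att, num_entities = 6, 64, 32, 1000
H = rng.normal(size=(n, 2 * m))
W, U, b = rng.normal(size=(k_att, 2 * m)), rng.normal(size=(1, k_att)), rng.normal(size=k_att)
E = rng.normal(size=(num_entities, 2 * m))
q = attentive_query_embedding(H, W, U, b)
print(top_k_entities(q, E, list(range(num_entities)), k=5))
```

In production, the entity embeddings would be precomputed and indexed offline so that the nearest entities can be returned within a few milliseconds, as described above; the exhaustive cosine scan here is only for illustration.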
<s>[INST] Rapid Classification of Crisis-Related Data on Social Networks using Convolutional Neural Networks [/INST] Timecritical analysis of social media data streams is important for many application areas For instance responders to humanitarian disasters eg earthquake flood need information about the disasters to determine what help is needed and where This information usually breaks out on social media before other sources During the onset of a crisis situation rapid analysis of messages posted on microblogging platforms such as Twitter can help humanitarian organizations like the United Nations gain situational awareness learn about urgent needs of affected people at different locations and decide on actions accordingly BIBREF0 BIBREF1 Artificial Intelligence for Disaster Response AIDR is an online platform to support this cause BIBREF2 During a disaster any person or organization can use it to collect tweets related to the event The total volume of all tweets can be huge about 350 thousand tweets per minute Filtering them using keywords helps cut down this volume to some extent But identifying different kinds of useful tweets that responders can act upon cannot be achieved using only keywords because a large number of tweets may contain the keywords but are of limited utility for the responders The bestknown solution to address this problem is to use supervised classifiers that would separate useful tweets from the rest Classifying tweets to identify their usefulness is difficult because tweets are short only 140 characters and therefore hard to understand without enough context they often contain abbreviations informal language and are ambiguous and finally determining whether the tweet is useful in a disaster situation and identifying required actions for relief operations is a hard task because of its subjectivity Individuals differ on their judgement about whether a tweet is useful or not and sometimes whether they belong to one topical class or another especially when there is information in a tweet that would be classified into multiple topical classes Given this ambiguity a computer cannot agree with annotators at a rate that is higher than the rate at which the annotators agree with each other Despite advances in natural language processing NLP interpreting the semantics of the short informal texts automatically remains a hard problem To classify disasterrelated tweets traditional classification approaches use batch learning with discrete representation of words This approach has three major limitations First in the beginning of a disaster situation there is no event labeled data available for training Later the labeled data arrives in small batches depending on the availability of geographically dispersed volunteers These learning algorithms are dependent on the labeled data of the event for training Due to the discrete word representations they perform poor when trained on the data from previous events outofevent data The second limitation is the offline learning style that inputs the complete labeled data and train a model This is computational expensive in a disaster situation where labeled data is coming in batches One would need to train a classifier from scratch every time a new batch of labeled data arrives Thirdly these approaches require to manually engineered features like cue words and TFIDF vectors BIBREF3 for learning Deep neural networks DNNs are based on online learning mechanism and have the flexibility to adaptively learn from new batches of labeled data 
without requiring to retrain from scratch Due to their distributed word representation they generalize well and make better use of the previously labeled data from other events to speed up the classification process in the beginning of a disaster DNNs automatically learn latent features as distributed dense vectors which generalize well and have shown to benefit various NLP tasks BIBREF4 BIBREF5 BIBREF6 BIBREF7 In this paper we propose a convolutional neural network CNN for the classification task CNN captures the most salient n gram information by means of its convolution and maxpooling operations On top of the typical CNN we propose an extension that combines multilayer perceptron with a CNN We present a series of experiments using different variations of the training data event data only outofevent data only and a concatenation of both Experiments are conducted for binary and multiclass classification tasks For the event only binary classification task the CNN model outperformed in four out of five tasks with an accuracy gain of up to 45 absolute points In the scenario of no event data the CNN model shows substantial improvement of up to 18 absolute points over the several nonneural models This makes the neural network model an ideal choice in early hours of a disaster for tweet classification When combined the event data with outofevent data we see similar results as in the case of event only training For multiclass classification the CNN model outperformed in similar fashion as in the case of binary classification Our variation of the CNN model with multilayer perceptron MLPCNN performed better than its CNN counter part In some cases adding outofevent data drops the performance To reduce the effect of large outofevent data and to make the most out of it we apply a simple event selection technique based on TFIDF and select only those events that are most similar to the event under consideration We then train the classifiers on the concatenation of the event plus selected outofevent data The performance improves only for the event with small event data To summarize we show that neural network models can be used reliably with the already available outofevent data for binary and multiclass classification The automatic feature learning capabilities brings an additional value on top of nonneural classification methods The MLPCNN results show that there is still a roam for improvement on top of the best accuracy achieved The rest of the paper is organized as follows We summarize related work in Section Related Work Section Convolutional Neural Network presents the convolutional neural model In Section Experimental Settings we describe the dataset and training settings of the models In Section Results presents our results and analysis We conclude and discuss future work in Section Conclusion and Future Work Studies have analyzed how Twitter can be useful during major disasters so as to gain insight into the situation as it unfolds BIBREF0 BIBREF8 BIBREF9 A number of systems have been developed to classify extract and summarize crisisrelevant information from social media for a detailed survey see BIBREF3 Cameron et al describe a platform for emergency situation awareness BIBREF10 They classify interesting tweets using an SVM classifier Verma et al use Naive Bayes and MaxEnt classifiers to find situational awareness tweets from several crises BIBREF11 Imran et al implemented AIDR to classify a Twitter data stream during crises BIBREF2 They use a random forest classifier in an offline setting 
After receiving every mini-batch of 50 training examples, they replace the older model with a new one.

In recent years there has been growing interest in DNNs and word embeddings, with applications to a myriad of NLP problems. The emergence of tools such as word2vec BIBREF6 and GloVe BIBREF12 has enabled NLP researchers to learn word embeddings efficiently and use them to train better models. Collobert et al. BIBREF4 presented a unified DNN architecture for solving various NLP tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. They showed that DNNs outperform traditional models in most of these tasks. They also proposed a multi-task learning framework for solving the tasks jointly. Kim BIBREF13 and Kalchbrenner et al. BIBREF14 used convolutional neural networks (CNNs) for sentence-level classification tasks (e.g., sentiment/polarity classification, question classification) and showed that CNNs outperform traditional methods (e.g., SVMs, MaxEnts). Despite these recent advancements, the application of CNNs to disaster response is novel to the best of our knowledge.

In order to classify short and noisy Twitter messages effectively, possibly in the absence of any in-event training data, a classification model should use a distributed representation of words, which results in improved generalization, and should learn the key features at different levels of abstraction automatically. To this end, we use a Convolutional Neural Network (CNN) as proposed by Kim BIBREF13. Figure 1 demonstrates how a CNN works with an example tweet: "guys if know any medical emergency around balaju area you can reach umesh HTTP doctor at HTTP HTTP".

Each word in the vocabulary $V$ is represented by a $D$-dimensional vector in a shared lookup table $L \in \mathbb{R}^{|V| \times D}$. $L$ is considered a model parameter to be learned. We can initialize $L$ randomly or using pretrained word embedding vectors like word2vec BIBREF15. Given an input tweet $\mathbf{s} = (w_1, \cdots, w_T)$, we first transform it into a feature sequence by mapping each word token $w_t \in \mathbf{s}$ to an index in $L$. The lookup layer then creates an input vector $\mathbf{x}_t \in \mathbb{R}^{D}$ for each token $w_t$, which are passed through a sequence of convolution and pooling operations to learn high-level feature representations.

A convolution operation involves applying a filter $\mathbf{u} \in \mathbb{R}^{L \cdot D}$ to a window of $L$ words to produce a new feature:

$h_t = f(\mathbf{u} \cdot \mathbf{x}_{t:t+L-1} + b_t)$ (Eq. 4)

where $\mathbf{x}_{t:t+L-1}$ denotes the concatenation of $L$ input vectors, $b_t$ is a bias term, and $f$ is a nonlinear activation function (e.g., tanh). We apply this filter to each possible $L$-word window in the tweet to generate a feature map $\mathbf{h}^i = [h_1, \cdots, h_{T+L-1}]$. We repeat this process $N$ times with $N$ different filters to get $N$ different feature maps. We use a wide convolution BIBREF14 (as opposed to narrow), which ensures that the filters reach the entire sentence, including the boundary words. This is done by performing zero-padding, where out-of-range vectors (i.e., those with index $t < 1$ or $t > T$) are assumed to be zero.

After the convolution, we apply a max-pooling operation to each feature map:

$\mathbf{m} = [\mu_p(\mathbf{h}^1), \cdots, \mu_p(\mathbf{h}^N)]$ (Eq. 5)

where $\mu_p(\mathbf{h}^i)$ refers to the max operation applied to each window of $p$ features in the feature map $\mathbf{h}^i$. For instance, with $p = 2$, this pooling gives the same number of features as in the feature map (because of the zero-padding). Intuitively, the filters compose local n-grams into higher-level representations in the feature maps, and max-pooling reduces the output dimensionality while keeping the most important aspects from each feature map.

Since each convolution-pooling operation is performed independently, the features extracted become invariant in location (i.e., where they occur in the tweet), thus acting like a bag of n-grams. However, keeping the order information could be important for modeling sentences. In order to model interactions between the features picked up by the filters and the pooling, we include a dense layer of hidden nodes on top of the pooling layer:

$\mathbf{z} = f(V \mathbf{m} + \mathbf{b}_h)$ (Eq. 6)

where $V$ is the weight matrix, $\mathbf{b}_h$ is a bias vector, and $f$ is a nonlinear activation. The dense layer naturally deals with variable sentence lengths by producing fixed-size output vectors $\mathbf{z}$, which are fed to the final output layer for classification. Formally, the output layer defines a Bernoulli distribution:

$p(y|\mathbf{s}, \theta) = \mathrm{Ber}\big(y \,\big|\, \sigma(\mathbf{w}^T \mathbf{z} + b)\big)$ (Eq. 7)

where $\sigma$ refers to the sigmoid function, $\mathbf{w}$ are the weights from the dense layer to the output layer, and $b$ is a bias term. We fit the models by minimizing the cross-entropy between the predicted distributions $\hat{y}_n(\theta) = p(y_n|\mathbf{s}_n, \theta)$ and the target distributions $y_n$ (i.e., the gold labels):

$J(\theta) = -\sum_n \big[ y_n \log \hat{y}_n(\theta) + (1 - y_n) \log \big(1 - \hat{y}_n(\theta)\big) \big]$ (Eq. 9)

In our CNN model, we intend to avoid manual feature engineering efforts by using word embeddings as the only features. As mentioned before, we can initialize the embeddings $L$ randomly and learn them as part of the model parameters by backpropagating the errors to the lookup layer. One issue with random initialization is that it may lead the training algorithm to get stuck in local minima. On the other hand, one can plug the readily available embeddings from external sources (e.g., Google embeddings BIBREF15) into the CNN model and use them as features without tuning them further for the task, as is done in any other machine learning model. However, this approach does not exploit the automatic feature learning capability of NN models, which is one of the main motivations for using them. In our work, we use the pretrained word embeddings to better initialize our models and we fine-tune them for our task during training, which turns out to be beneficial. More specifically, we initialize the word vectors in $L$ in two different ways.

1) Google Embedding: Mikolov et al. BIBREF15 propose two log-linear models for computing word embeddings from a large unlabeled corpus efficiently: a bag-of-words model (CBOW) that predicts the current word based on the context words, and a skip-gram model that predicts surrounding words given the current word. They released their pretrained 300-dimensional word embeddings (vocabulary size 3 million) trained by the skip-gram model on part of the Google News dataset containing about 100 billion words.

2) Crisis Embedding: Since we work on disaster-related tweets, which are quite different from news, we have also trained domain-specific embeddings (vocabulary size 20 million) using the skip-gram model of the word2vec tool BIBREF6 on a large corpus of disaster-related tweets. The corpus contains 57,908 tweets and 94 million tokens. For comparison with Google, we learn word embeddings of 300 dimensions.

Although CNNs learn word features (i.e., embeddings) automatically, we may still be interested in incorporating other sources of information (e.g., a TF-IDF vector representation of tweets) to build a more effective model. Additional features can also guide the training to learn a better model. However, unlike word embeddings, we want these features to be fixed during training. This can be done in our CNN model by creating another channel, which feeds these additional features directly to the dense
layer In that case the dense layer in Equation 6 can be redefined as $\mathbf{z} = f(V^{\prime} \mathbf{m}^{\prime} + \mathbf{b}_h)$ (Eq 13) where $\mathbf{m}^{\prime} = [\mathbf{m}; \mathbf{y}]$ is a concatenated column vector of the feature maps $\mathbf{m}$ and the additional features $\mathbf{y}$ and $V^{\prime}$ is the associated weight matrix Notice that by including this additional channel this network combines a multilayer perceptron MLP with a CNN In this section we first describe the dataset that is used for the classification task We then present the TFIDF based features which are used to train the nonneural classification algorithms In the end we describe the model settings and training settings of the nonneural and neural classification models We use data from multiple sources 1 CrisisLex BIBREF16 2 CrisisNLP BIBREF17 and 3 AIDR BIBREF2 The first two sources have tweets posted during several humanitarian crises and labeled by paid workers The AIDR data consists of tweets from several crisis events labeled by volunteers The dataset consists of various event types such as earthquakes floods typhoons etc In all the datasets the tweets are labeled into various informative classes eg urgent needs donation offers infrastructure damage dead or injured people and one notrelated or irrelevant class Table 1 provides a one line description of each class and also the total number of labels from all the sources Other useful information and Not related or irrelevant are the most frequent classes in the dataset Table 2 shows statistics about the events we use for our experiments In order to assess the difficulty of the classification task we calculate the interannotator agreement IAA scores of the datasets obtained from CrisisNLP The California Earthquake has the highest IAA of 0.85 and Typhoon Hagupit has the lowest IAA of 0.70 among the events under consideration The IAA of the remaining three events is around 0.75 We aim to reach these levels of accuracy Data Preprocessing We normalize all characters to their lowercased forms truncate elongations to two characters spell out every digit to D all Twitter usernames to userID and all URLs to HTTP We remove all punctuation marks except periods semicolons question and exclamation marks We further tokenize the tweets using the CMU TweetNLP tool BIBREF18 Data Settings For a particular event such as the Nepal earthquake data from all other events under consideration plus All others see Table 2 are referred to as outofevent data We divide each event dataset into train 70% validation 10% and test 20% sets using the scikitlearn toolkits module BIBREF19 which ensured that the class distribution remains reasonably balanced in each subset Feature Extraction We extracted unigram bigram and trigram features from the tweets The features are converted to TFIDF vectors by considering each tweet as a document Note that these features are used only in the nonneural models The neural models take tweets and their labels as input For the SVM classifier we implemented feature selection using a chisquared test to improve the estimators accuracy To compare our neural models with the traditional approaches we experimented with a number of existing models including Support Vector Machine SVM a discriminative maxmargin model Logistic Regression LR a discriminative probabilistic model and Random Forest RF an ensemble model of decision trees We use the implementation from the scikitlearn toolkit BIBREF19 All algorithms use the default values of their parameters We train CNN models by optimizing the cross entropy in Equation 7 using the gradientbased online learning
algorithm ADADELTA BIBREF20 The learning rate and related parameters were set to the values suggested by the authors The maximum number of epochs was set to 25 To avoid overfitting we use dropout BIBREF21 of hidden units and early stopping based on the accuracy on the validation set We experimented with dropout rates of {0.0, 0.2, 0.4, 0.5} and minibatch sizes of {32, 64, 128} We limit the vocabulary V to the most frequent P% of words in the training corpus with P in {80, 85, 90} The word vectors in L were initialized with the pretrained embeddings See Section Word Embedding and Finetuning We use rectified linear units ReLU for the activation function f with {100, 150, 200} filters each having a window size L of {2, 3, 4} a pooling length p of {2, 3, 4} and {100, 150, 200} dense layer units All the hyperparameters are tuned on the development set For each event under consideration we train classifiers on the event data only on the outofevent data only and on a combination of both We conduct experiments for the binary and multiclass classification tasks For the former we merge all informative classes to create one general Informative class We initialized the CNN model using two types of pretrained word embeddings Crisis Embeddings CNN I trained on all tweet crisis data and Google Embeddings CNN II trained on the Google News dataset The CNN model then finetuned the embeddings using the training data Table 3 left presents the results of binary classification comparing several nonneural classifiers with the CNNbased classifier For the scenario of training on the event only data CNNs performed better than all nonneural classifiers in every event The CNN performed substantially better than the nonneural model SVM by a margin of up to 4 when trained on the outofevent data only This shows the robustness of the CNN model in learning from the outofevent data when no event data is available This can be very helpful during the early hours of a crisis when no eventspecific labeled data is available When event data was combined with outofevent data the CNN also performed better than the nonneural models However comparing different data settings for the CNN we saw mixed results In most of the cases the performance dropped in comparison to the event only training The large size of the outofevent data downweights the benefits of the event data and skews the probability distribution towards the outofevent data Table 3 right presents the confusion matrices of the SVM and CNN I classifiers trained and evaluated on the Nepal earthquake data The SVM prediction is inclined towards the Informative class whereas the CNN predicted more instances as noninformative than informative In the case of outofevent training the SVM predicted most of the instances as informative Thus it achieved high recall but very low precision The CNN on the other hand achieved quite balanced precision and recall To summarize the neural network based classifier outperformed the nonneural classifiers in all data settings The performance of the models trained on outofevent data is as expected lower than that in the other two training settings However in the case of the CNN models the results are reasonable to the extent that outofevent data can be used to predict the informativeness of tweets when no event data is available It is also worth mentioning here that aside from outperforming nonneural classifiers neural network models do not require any feature engineering and learn features automatically from the data Comparing CNN I with CNN II we did not see any system consistently better than
other For further experimentation and comparison we only consider CNN I trained on the crisis embeddings For the purpose of multiclass classification we mainly compare the performance of two variations of the CNNbased classifier CNN I and MLPCNN I which combines a multilayer perceptron and a CNN against an SVM classifier All labeled data from other events may not be useful for the event under consideration Based on this intuition we apply a data selection technique to select the outofevent data that best matches the event To achieve this we trained a classifier on the event data only and predicted the label of each tweet of the outofevent data We selected the tweets that were correctly predicted by the event classifier and added them denoted M eventadapt to the event data for training the classifier for the multiclass classification task Table 4 summarizes the accuracy and macro F1 scores of the multiclass classification task Similar to the results of the binary classification task the CNN model outperformed the SVM in almost all data settings The most promising result is that combining the MLP and CNN really improves the performance of our system Training with event plus outofevent data did not show a clear improvement over the event only model The results dropped slightly in some cases In addition when using the simple domain adaptation technique we got rid of the noisy tweets from the outofevent data and improved the performance of our models The M eventadapt system in Table 4 shows the results The domain adapted system consistently performed better than its respective baseline and than the system trained on event data only The macro F1 scores show the per class performance of each classifier The Nepal earthquake has the lowest per class classification performance This could be due to the imbalance in the training data as can be seen in Table 2 In Table 5 we show the confusion matrix of the CNNMLP model tested on the Nepal Earthquake data The class Other useful information is the second largest class in the data The classifier got biased towards it and learned to predict tweets from other classes into this class Not related or irrelevant is the most confused class among all classes The precisionrecall curve in Table 5 shows that Not related or irrelevant and Donation and Volunteering are easier to predict than the other classes Their AUC scores are 0.87 and 0.70 respectively Meanwhile the AUC scores of the Other useful information and Infrastructure and utilities classes are lower than the random 0.5 level We presented a deep neural network model for binary and multiclass classification tasks and showed that one can reliably use outofevent data for the classification of a new event when no inevent data is available A convolutional neural network model performed consistently for all five events under consideration and worked better than all nonneural models in most of the cases The performance of the classifiers degraded when outofevent training samples were added to training samples from the event data Thus we recommend using outofevent training data only during the first few hours of a disaster after which the training data related to the event should be used In the future we would like to improve the performance of the system using domain adaptation either by model weighting separately building models for event and outofevent data and combining them or by data selection intelligently selecting the outofevent data that best suits the event data </s>
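The following is a minimal sketch of the Kim-style CNN classifier described above, written in Python with Keras; it is not the authors' released implementation, and the vocabulary size, maximum tweet length and specific hyperparameter values are illustrative assumptions drawn from the ranges reported in the paper. The embedding layer plays the role of the lookup table L, which in practice would be initialized with the pretrained crisis or Google word2vec vectors and finetuned during training, and global max pooling is used as a simplification of the pooling length p.

```python
from tensorflow.keras import layers, models, optimizers

VOCAB_SIZE = 20000   # assumed vocabulary size after the frequency cutoff
EMB_DIM = 300        # matches the 300-dimensional embeddings used in the paper
MAX_LEN = 40         # assumed maximum tweet length in tokens

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
# Lookup table L; in practice initialized with pretrained word2vec vectors
# (crisis or Google embeddings) and finetuned during training
x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)

# 150 filters for each window size L in {2, 3, 4}; "same" padding approximates
# the wide convolution and max pooling keeps the strongest feature per filter
pooled = []
for window in (2, 3, 4):
    h = layers.Conv1D(filters=150, kernel_size=window,
                      padding="same", activation="relu")(x)
    pooled.append(layers.GlobalMaxPooling1D()(h))
m = layers.Concatenate()(pooled)

# Dense layer on top of the pooled feature maps, with dropout for regularization
z = layers.Dense(150, activation="relu")(m)
z = layers.Dropout(0.5)(z)
# Sigmoid output for the binary informative vs not informative setting;
# a softmax layer over the classes would replace it in the multiclass setting
outputs = layers.Dense(1, activation="sigmoid")(z)

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.Adadelta(),
              loss="binary_crossentropy", metrics=["accuracy"])
```

The additional MLP channel of Eq 13 would correspond to concatenating a second fixed feature input, for example the TFIDF vector of the tweet, with m before the dense layer.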
<s>[INST] Information-Theoretic Probing for Linguistic Structure [/INST] Neural networks are the backbone of modern stateoftheart Natural Language Processing NLP systems One inherent byproduct of training a neural network is the production of realvalued representations Many speculate that these representations encode a continuous analogue of discrete linguistic properties eg partofspeech tags due to the networks impressive performance on many NLP tasks BIBREF0 As a result of this speculation one common thread of research focuses on the construction of probes ie supervised models that are trained to extract the linguistic properties directly BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 A syntactic probe then is a model for extracting syntactic properties such as partofspeech from the representations BIBREF6 In this work we question what the goal of probing for linguistic properties ought to be Informally probing is often described as an attempt to discern how much information representations encode about a specific linguistic property We make this statement more formal We assert that the goal of probing ought to be estimating the mutual information BIBREF7 between a representationvalued random variable and a linguistic propertyvalued random variable This formulation gives probing a clean informationtheoretic foundation and allows us to consider what probing actually means Our analysis also provides insight into how to choose a probe family We show that choosing the highestperforming probe independent of its complexity is optimal for achieving the best estimate of mutual information MI This contradicts the received wisdom that one should always select simple probes over more complex ones BIBREF8 BIBREF9 BIBREF10 In this context we also discuss the recent work of hewittliang2019designing who propose selectivity as a criterion for choosing families of probes hewittliang2019designing define selectivity as the performance difference between a probe on the target task and a control task writing the selectivity of a probe puts linguistic task accuracy in context with the probes capacity to memorize from word types They further ponder when a probe achieves high accuracy on a linguistic task using a representation can we conclude that the representation encodes linguistic structure or has the probe just learned the task Informationtheoretically there is no difference between learning the task and probing for linguistic structure as we will show thus it follows that one should always employ the best possible probe for the task without resorting to artificial constraints In support of our discussion we empirically analyze wordlevel partofspeech labeling a common syntactic probing task BIBREF6 BIBREF11 within our framework Working on a typologically diverse set of languages Basque Czech English Finnish Tamil and Turkish we show that the representations from BERT a common contextualized embedder only account for at most 5 more of the partofspeech tag entropy than a control These modest improvements suggest that most of the information needed to tag partofspeech well is encoded at the lexical level and does not require the sentential context of the word Put more simply words are not very ambiguous with respect to part of speech a result known to practitioners of NLP BIBREF12 We interpret this to mean that partofspeech labeling is not a very informative probing task We also remark that formulating probing informationtheoretically gives us a simple but stunning result contextual word embeddings eg BERT 
BIBREF13 and ELMo BIBREF14 contain the same amount of information about the linguistic property of interest as the original sentence This follows naturally from the dataprocessing inequality under a very mild assumption What this suggests is that in a certain sense probing for linguistic properties in representations may not be a well grounded enterprise at all Following hewittliang2019designing we consider probes that examine syntactic knowledge in contextualized embeddings These probes only consider a single tokens embedding and try to perform the task using only that information Specifically in this work we consider partofspeech POS labeling determining a words part of speech in a given sentence For example we wish to determine whether the word love is a noun or a verb This task requires the sentential context for success As an example consider the utterance love is blind where only with the context is it clear that love is a noun Thus to do well on this task the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS Let S be a random variable ranging over all possible sequences of words For the sake of this paper we assume the vocabulary mathcal V is finite and thus the values S can take are in mathcal V We write mathbf sin S as mathbf s w1 cdots wmathbf s for a specific sentence where each wi in mathcal V is a specific word in the sentence and the position i in mathbb N We also define the random variable W that ranges over the vocabulary mathcal V We define both a sentencelevel random variable S and a wordlevel random variable W since each will be useful in different contexts during our exposition Next let T be a random variable whose possible values are the analyses t that we want to consider for word wi in its sentential context mathbf s w1 cdots wi cdots wmathbf s In this work we will focus on predicting the partofspeech tag of the itextth word wi We denote the set of values T can take as the set mathcal T Finally let R be a representationvalued random variable for the itextth word wi in a sentence derived from the entire sentence mathbf s We write mathbf rin mathbb Rd for a value of R While any given value mathbf r is a continuous vector there are only a countable number of values R can take To see this note there are only a countable number of sentences in mathcal V Next we assume there exists a true distribution pt mathbf s i over analyses t elements of mathcal T sentences mathbf s elements of mathcal V and positions i elements of mathbb N Note that the conditional distribution pt mid mathbf s i gives us the true distribution over analyses t for the itextth word in the sentence mathbf s We will augment this distribution such that p is additionally a distribution over mathbf r ie where we define the augmentation as a Diracs delta function Since contextual embeddings are a deterministic function of a sentence mathbf s the augmented distribution in eqtrue has no more randomness than the originalits entropy is the same We assume the values of the random variables defined above are distributed according to this unknown p While we do not have access to p we assume the data in our corpus were drawn according to it Note that Wthe random variable over possible wordsis distributed according to the marginal distribution where we define the deterministic distribution The task of supervised probing is an attempt to ascertain how much information a specific representation mathbf r tells us about the value of t This is naturally expressed as 
the mutual information a quantity from information theory where we define the entropy which is constant with respect to the representations as and where we define the conditional entropy as the pointwise conditional entropy inside the sum is defined as Again we will not know any of the distributions required to compute these quantities the distributions in the formulae are marginals and conditionals of the true distribution discussed in eqtrue The desired conditional entropy mathrm HT mid R is not readily available but with a model qtheta mathbf tmid mathbf r in hand we can upperbound it by measuring their empirical cross entropy where mathrm Hqtheta T mid R is the crossentropy we obtain by using qtheta to get this estimate Since the KL divergence is always positive we may lowerbound the desired mutual information This bound gets tighter the more similar in the sense of the KL divergence qtheta cdot mid mathbf r is to the true distribution pcdot mid mathbf r If we accept mutual information as a natural measure for how much representations encode a target linguistic task SECREF6 then the best estimate of that mutual information is the one where the probe qtheta t mid mathbf r is best at the target task In other words we want the best probe qtheta t mid mathbf r such that we get the tightest bound to the actual distribution ptmid mathbf r This paints the question posed by hewittliang2019designing who write when a probe achieves high accuracy on a linguistic task using a representation can we conclude that the representation encodes linguistic structure or has the probe just learned the task as a false dichotomy From an informationtheoretic view we will always prefer the probe that does better at the target task since there is no difference between learning a task and the representations encoding the linguistic structure To place the performance of a probe in perspective hewittliang2019designing develop the notion of a control task Inspired by this we develop an analogue we term control functions which are functions of the representationvalued random variable R Similar to hewittliang2019designings control tasks the goal of a control function mathbf ccdot is to place the mutual information mathrm IT R in the context of a baseline that the control function encodes Control functions have their root in the dataprocessing inequality BIBREF7 which states that for any function mathbf ccdot we have In other words information can only be lost by processing data A common adage associated with this inequality is garbage in garbage out We will focus on typelevel control functions in this paper these functions have the effect of decontextualizing the embeddings Such functions allow us to inquire how much the contextual aspect of the contextual embeddings help the probe perform the target task To show that we may map from contextual embeddings to the identity of the word type we need the following assumption about the embeddings Assumption 1 Every contextualized embedding is unique ie for any pair of sentences mathbf s mathbf sprime in mathcal V we have mathbf sne mathbf sprime mid mid i ne j Rightarrow textsc bert mathbf si ne textsc bert mathbf sprime j for all i in lbrace 1 ldots mathbf srbrace and j in lbrace 1 ldots mathbf sprime rbrace We note that assone is mild Contextualized word embeddings map words in their context to mathbb Rd which is an uncountably infinite space However there are only a countable number of sentences which implies only a countable number of sequences of real vectors in mathbb 
Rd that a contextualized embedder may produce The event that any two embeddings would be the same across two distinct sentences is infinitesimally small assone yields the following corollary Corollary 1 There exists a function emph texttt id mathbb Rd rightarrow V that maps a contextualized embedding to its word type The function emph texttt id is not a bijection since multiple embeddings will map to the same type Using corone we can show that any noncontextualized word embedding will contain no more information than a contextualized word embedding More formally we do this by constructing a lookup function mathbf e V rightarrow mathbb Rd that maps a word to a word embedding This embedding may be onehot randomly generated ahead of time or the output of a datadriven embedding method eg fastText BIBREF15 We can then construct a control function as the composition of the lookup function mathbf e and the id function texttt id Using the dataprocessing inequality we can prove that in a wordlevel prediction task any noncontextual type level wordembedding will contain no more information than a contextualized token level one such as BERT and ELMo Specifically we have This result is intuitive and perhaps trivialcontext matters informationtheoretically However it gives us a principled foundation by which to measure the effectiveness of probes as we will show in secgain We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function mathbf ccdot We term how much more information the contextualized embeddings have about a task than a control variable the gain which we define as The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline encoded as the function mathbf c We will empirically estimate this value in secexperiments Interestingly enough the gain has a straightforward interpretation Proposition 1 The gain function is equal to the following conditional mutual information The jump from the first to the second equality follows since R encodes all the information about T provided by mathbf cR by construction propinterpretation gives us a clear understanding of the quantity we wish to estimate It is how much information about a task is encoded in the representations given some control knowledge If properly designed this control transformation will remove information from the probed representations The gain as defined in eqgain is intractable to compute In this section we derive a pair of variational bounds on mathcal GT R mathbf eone upper and one lower To approximate the gain we will simultaneously minimize an upper and a lowerbound on eqgain We begin by approximating the gain in the following manner these crossentropies can be empirically estimated We will assume access to a corpus lbrace ti mathbf rirbrace i1N that is humanannotated for the target linguistic property we further assume that these are samples ti mathbf ri sim pcdot cdot from the true distribution This yields a second approximation that is tractable This approximation is exact in the limit N rightarrow infty by the law of large numbers We note the approximation given in eqapprox may be either positive or negative and its estimation error follows from eqentestimate where we abuse the KL notation to simplify the equation This is an undesired behavior since we know the gain itself is nonnegative by the dataprocessing inequality but we have yet to devise a remedy We justify the approximation in 
eqapprox with a pair of variational bounds The following two corollaries are a result of thmvariationalbounds in appendixa Corollary 2 We have the following upperbound on the gain Corollary 3 We have the following lowerbound on the gain The conjunction of corupper and corlower suggest a simple procedure for finding a good approximation We choose qtheta 1cdot mid r and qtheta 2cdot mid r so as to minimize equpper and maximize eqlower respectively These distributions contain no overlapping parameters by construction so these two optimization routines may be performed independently We will optimize both with a gradientbased procedure discussed in secexperiments In seccontrolfunctions we developed an informationtheoretic framework for thinking about probing contextual word embeddings for linguistic structure However we now cast doubt on whether probing makes sense as a scientific endeavour We prove in seccontext that contextualized word embeddings by construction contain no more information about a wordlevel syntactic task than the original sentence itself Nevertheless we do find a meaningful scientific interpretation of control functions We expound upon this in seccontrolfunctionsmeaning arguing that control functions are useful not for understanding representations but rather for understanding the influence of sentential context on wordlevel syntactic tasks eg labeling words with their part of speech To start we note the following corollary Corollary 4 It directly follows from assone that textsc bert is a bijection between sentences mathbf s and sequences of embeddings langle mathbf r1 ldots mathbf rmathbf s rangle As textsc bert is a bijection it has an inverse which we will denote as textsc bert1 Theorem 1 The function textsc bert S cannot provide more information about T than the sentence S itself This implies mathrm IT S mathrm IT textsc bert S We remark this is not a BERTspecific resultit rests on the fact that the dataprocessing inequality is tight for bijections While thmbert is a straightforward application of the dataprocessing inequality it has deeper ramifications for probing It means that if we search for syntax in the contextualized word embeddings of a sentence we should not expect to find any more syntax than is present in the original sentence In a sense thmbert is a cynical statement the endeavour of finding syntax in contextualized embeddings sentences is nonsensical This is because under assone we know the answer a priorithe contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself Informationtheoretically the interpretation of control functions is also interesting As previously noted our interpretation of control functions in this work does not provide information about the representations themselves Actually the same reasoning used in corone could be used to devise a function texttt id smathbf r which led from a single representation back to the whole sentence For a typelevel control function mathbf c by the dataprocessing inequality we have that mathrm IT W ge mathrm IT mathbf cR Consequently we can get an upperbound on how much information we can get out of a decontextualized representation If we assume we have perfect probes then we get that the true gain function is mathrm IT S mathrm IT W mathrm IT S mid W This quantity is interpreted as the amount of knowledge we gain about the wordlevel task T by knowing S ie the sentence in addition to W ie the word Therefore a perfect probe would provide 
insights about language and not about the actual representations which are no more than a means to an end We do acknowledge another interpretation of the work of hewittliang2019designing inter alia BERT makes the syntactic information present in an ordered sequence of words more easily extractable However ease of extraction is not a trivial notion to formalize and indeed we know of no attempt to do so it is certainly more complex to determine than the number of layers in a multilayer perceptron MLP Indeed a MLP with a single hidden layer can represent any function over the unit cube with the caveat that we may need a very large number of hidden units BIBREF16 Although for perfect probes the above results should hold in practice texttt id cdot and mathbf ccdot may be hard to approximate Furthermore if these functions were to be learned they might require an unreasonably large dataset A random embedding control function for example would require an infinitely large dataset to be learnedor at least one that contained all words in the vocabulary V Better representations should make their respective probes more easily learnableand consequently their encoded information more accessible We suggest that future work on probing should focus on operationalizing ease of extraction more rigorouslyeven though we do not attempt this ourselves The advantage of simple probes is that they may reveal something about the structure of the encoded informationie is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings We suspect that many researchers who are interested in less complex probes have implicitly had this in mind While this paper builds on the work of hewittliang2019designing and we agree with them that we should have control tasks when probing for linguistic properties we disagree with parts of the methodology for the control task construction We present these disagreements here hewittliang2019designing introduce control tasks to evaluate the effectiveness of probes We draw inspiration from this technique as evidenced by our introduction of control functions However we take issue with the suggestion that controls should have structure and randomness to use the terminology from hewittliang2019designing They define structure as the output for a word token is a deterministic function of the word type This means that they are stripping the language of ambiguity with respect to the target task In the case of partofspeech labeling love would either be a noun or a verb in a control task never both this is a problem The second feature of control tasks is randomness ie the output for each word type is sampled independently at random In conjunction structure and randomness may yield a relatively trivial task that does not look at all like natural language What is more there is a closedform solution for an optimal retrievalbased probe that has zero parameters If a word type appears in the training set return the label with which it was annotated there otherwise return the most frequently occurring label across all words in the training set This probe will achieve an accuracy that is 1 minus the outofvocabulary rate the number of tokens in the test set that correspond to novel types divided by the number of tokens times the percentage of tags in the test set that do not correspond to the most frequent tag the error rate of the guessthemostfrequenttag classifier In short the best model for a control task is a pure memorizer that guesses the 
most frequent tag for outofvocabulary words hewittliang2019designing propose that probes should be optimised to maximise accuracy and selectivity Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture Given their characterization of control tasks maximising selectivity leads to a selection of a model that is bad at memorization But why should we punish memorization Much of linguistic competence is about generalization however memorization also plays a key role BIBREF17 BIBREF18 BIBREF19 with word learning BIBREF20 being an obvious example Indeed maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property hewittliang2019designing acknowledge that for the more complex task of dependency edge prediction a MLP probe is more accurate and therefore preferable despite its low selectivity However they offer two counterexamples where the less selective neural probe exhibits drawbacks when compared to its more selective linear counterpart We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection First 36hewittliang2019designing point out that in their experiments the MLP1 model frequently mislabels the word with suffix s as NNPS on the POS labeling task They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information has the model learned Our analysis leads us to believe that on contrary this shows that one should be using the best possible probe to minimize the chance of misrepresentation Since more complex probes achieve higher accuracy on the task as evidence by the findings of hewittliang2019designing we believe that the overall trend of misrepresentation is higher for the probes with higher selectivity The same applies for the second example discussed in section 42hewittliang2019designing where a less selective probe appears to be less faithful The authors show that the representations on ELMos second layer fail to outperform its word type ones layer zero on the POS labeling task when using the MLP1 probe While they argue this is evidence for selectivity being a useful metric in choosing appropriate probes we argue that this demonstrates yet again that one needs to use a more complex probe to minimize the chances of misrepresenting what the model has learned The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo not that it is not present see seceaseextract We consider the task of POS labeling and use the universal POS tag information BIBREF21 from the Universal Dependencies 24 BIBREF22 We probe the multilingual release of BERT on six typologically diverse languages Basque Czech English Finnish Tamil and Turkish and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word as tokenized in the treebank We will consider three different control functions Each is defined as the composition mathbf c mathbf ecirc texttt id with a different lookup function These lookup functions are mathbf etextit fastText returns a language specific fastText embedding BIBREF15 mathbf etextit onehot returns a onehot embedding mathbf etextit random returns a fixed random embedding All of these functions are type level in that they remove the influence of the context on the word 
As expounded upon above our purpose is to achieve the best bound on mutual information we can To this end we employ a deep MLP as our probe We define the probe as an mlayer neural network with the nonlinearity sigma cdot mathrm ReLUcdot The initial projection matrix is W1 in mathbb Rr1 times d and the final projection matrix is Wm in mathbb Rmathcal T times rm1 where rifracr2i1 The remaining matrices are Wi in mathbb Rri times ri1 so we half the number of hidden states in each layer We optimize over the hyperparametersnumber of layers hidden size onehot embedding size and dropoutby using random search For each estimate we train 50 models and choose the one with the best validation crossentropy The crossentropy in the test set is then used as our entropy estimate We know textsc bert can generate text in many languages here we assess how much does it actually know about syntax in those languages And how much more does it know than simple typelevel baselines tabresultsfull presents this results showing how much information textsc bert fastText and onehot embeddings encode about POS tagging We see thatin all analysed languagestype level embeddings can already capture most of the uncertainty in POS tagging We also see that BERT only shares a small amount of extra information with the task having small or even negative gains in all languages textsc bert presents negative gains in some of the analysed languages Although this may seem to contradict the information processing inequality it is actually caused by the difficulty of approximating texttt id and mathbf ccdot with a finite training setcausing mathrm KLqtheta 1T mid R to be larger than mathrm KLqtheta 2T mid mathbf cR We believe this highlights the need to formalize ease of extraction as discussed in seceaseextract Finally when put into perspective multilingual textsc bert s representations do not seem to encode much more information about syntax than a trivial baseline textsc bert only improves upon fastText in three of the six analysed languagesand even in those it encodes at most in English 5 additional information We proposed an informationtheoretic formulation of probing we define probing as the task of estimating conditional mutual information We introduce control functions which allows us to put the amount of information encoded in contextual representations in the context of knowledge judged to be trivial We further explored this formalization and showed that given perfect probes probing can only yield insights into the language itself and tells us nothing about the representations under investigation Keeping this in mind we suggested a change of focusinstead of focusing on probe size or information we should look at ease of extraction going forward On another note we apply our formalization to evaluate multilingual textsc bert s syntax knowledge on a set of six typologically diverse languages Although it does encode a large amount of information about syntax more than 81 in all languages it only encodes at most 5 more information than some trivial baseline knowledge a typelevel representation This indicates that the task of POS labeling wordlevel POS tagging is not an ideal task for contemplating the syntactic understanding of contextual word embeddings Theorem 2 The estimation error between mathcal Gqtheta T R mathbf e and the true gain can be upper and lowerbounded by two distinct KullbackLeibler divergences We first find the error given by our estimate Making use of this error we trivially find an upperbound on the estimation 
error as which follows since KL divergences are never negative Analogously we find a lowerbound as In this section we present accuracies for the models trained using textsc bert fastText and onehot embeddings and the full results on random embeddings tabresultsextra shows that both BERT and fastText present high accuracies in all languages except Tamil Onehot and random results are considerably worse as expected since they could not do more than take random guesses eg guessing the most frequent label in the training test in any word which was not seen during training </s>
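As a rough illustration of the estimation procedure described above, the sketch below trains two independent probes, one on the contextual representations R and one on the control representations c(R), and takes the difference of their held-out cross-entropies as an estimate of the gain. It is a simplification under stated assumptions, not the authors' setup: scikit-learn's MLPClassifier with fixed hidden sizes stands in for the deep MLP probe tuned by random search, and the representation and POS-label arrays are assumed to have been extracted beforehand.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss

def conditional_entropy_bits(train_X, train_y, test_X, test_y):
    # Train a probe q_theta(t | r) and return its held-out cross-entropy in
    # bits, an empirical upper bound on the conditional entropy H(T | R)
    probe = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=300)
    probe.fit(train_X, train_y)
    probs = probe.predict_proba(test_X)
    return log_loss(test_y, probs, labels=probe.classes_) / np.log(2)

def estimated_gain(R_train, C_train, y_train, R_test, C_test, y_test):
    # R_*: contextual (e.g. BERT) vectors per token; C_*: control vectors
    # c(R), e.g. fastText embeddings looked up by word type; y_*: POS tags.
    # The two probes share no parameters and are optimized independently,
    # mirroring the two sides of the variational bounds
    h_t_given_r = conditional_entropy_bits(R_train, y_train, R_test, y_test)
    h_t_given_c = conditional_entropy_bits(C_train, y_train, C_test, y_test)
    # Approximates I(T; R) - I(T; c(R)); with imperfect probes the estimate
    # can come out negative, as observed for some languages in the paper
    return h_t_given_c - h_t_given_r
```

Entropy estimates in bits can be related to the percentage figures reported above by normalizing with an estimate of the unconditional tag entropy H(T).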
<s>[INST] Detecting and Extracting Events from Text Documents [/INST] Among the several senses that The Oxford English Dictionary the most venerable dictionary of English provides for the word event are the following Although an event may refer to anything that happens we are usually interested in occurrences that are of some importance We want to extract such events from textual documents In order to extract important events or events of a specific type it is likely that we have to identify all events in a document to start with Consider the first paragraphs of the article on the Battle of Fredericksburg in the English Wikipedia accessed on May 5 2012 We have highlighted the events in the paragraph The Battle of Fredericksburg was fought December 1115 1862 in and around Fredericksburg Virginia between General Robert E Lees Confederate Army of Northern Virginia and the Union Army of the Potomac commanded by Maj Gen Ambrose E Burnside The Union armys futile frontal assaults on December 13 against entrenched Confederate defenders on the heights behind the city is remembered as one of the most onesided battles of the American Civil War with Union casualties more than twice as heavy as those suffered by the Confederates The paragraph contains two fairly long sentences with several events mentioned using the following words fought commanded assaults entrenched remembered casualties and suffered Some of these events are described in terms of verbs whereas the others are in terms of nouns Here fought commanded assaults battles definitely seem to be events that have durations or are durative Entrenched seems to talk about a state whereas it is possible that suffered talks about something punctual ie takes a moment or point of time or can be durative ie takes a longer period of time as well The act of remembering by an individual is usually considered to happen momentarily ie forgotten things come back to mind at an instant of time But in this paragraph it is given in passive voice and hence it is unclear who the actor is possibly a lot different people at different points of time Thus depending on who is asked the events picked out may be slightly different but the essence is that there are several events mentioned in the paragraph and the objective in event extraction is to extract as many of them as possible in an automated fashion For example someone may not pick out remembered as an event that took place Some others may not want to say that entrenched is an event In addition if one is asked to pick an important event responses may vary from person to person Finally if one is asked to summarize the paragraph depending on the person asked the summary may vary A summary prepared by the author of this article is given below The Battle of Fredericksburg fought December 1112 1862 was one of the most onesided battles of the American Civil War with heavy Union casualties Obviously there are many other possibilities for summarization However the idea is that identification of events and their participants may play a significant role in summarizing a document This paper discusses the extraction of events and their attributes from unstructured English text It is an survey of research in extracting event descriptions from textual documents In addition we discuss how the idea of event extraction can be used in application domains such as summarization of a document We also discuss application of event extraction in the biomedical domain and in the context of Twitter messages The rest of the paper is 
organized in the following manner Section Events in Linguistics and Philosophy provides a description of research in linguistics and philosophy The author believes that such a background at least at a basic level is necessary to understand and develop the approaches and algorithms for automatic computational detection and extraction of events and their participants from textual documents Section Extracting Events from Textual Documents discusses approaches used in extracting events from textual documents Most approaches these days use machine learning techniques Real world events are things that take place or happen In this section we present an overview of how real events are represented in terms of language In particular we discuss classification of events and features necessary for such classification We follow this by presenting the preferred way among philosophers to represent events in terms of logic We bring this section to an end by presenting some of the structures ascribed to events by linguists or philosophers working at an abstract level The reason for the inclusion of this section in the paper is to set the context for the discussions in the following sections on the practical task of extracting events Practical systems do not usually follow linguistic niceties although they draw inspiration from linguistics or philosophy There have been many attempts at classifying linguistic events Below we briefly discuss a few The primary focus when linguists discuss events is on the verb present in a sentence Nouns adjectives and other elements present in a sentence provide arguments for the verb Aristotle as presented in BIBREF0 classified verbs that denote something happening into three classes actuality movement and action An actuality represents the existence of a thing or things this is called state by others eg BIBREF1 An examples of actuality can be seen in the sentence Jon is ill A movement is an incomplete process or something that takes time but doesnt have an inherent end An example of movement is seen in the sentence Jon is running An action is something that takes time and has an inherent end An example of an action is present in the sentence Jon is building a house In other words Aristotle distinguished between states and events and then events BIBREF2 lists verbs that belong to the three Aristotelian classes and develops membership criteria for the classes Kenny renamed the classes as states activities actions without inherent end and performances actions with inherent ends Kennys membership criteria are based on semantic entailments about whether the event can be considered to have taken place when it is still in progress For example during any point when we say Jon is running we can consider that the activity of running has taken place In other words Jon is running entails Jon has run Thus run is an activity In contrast when we say Joh is taking the final we cannot say that Jon has taken the final In other words the first does not entail the second Thus the main difference between an activity and a performance is what is called delimitation A delimited event has a natural end BIBREF3 developed a 4way classification scheme for linguistic events and BIBREF4 developed a set of criteria for membership in the classes The classes enumerated by Dowty are states activities achievements and accomplishments The definitions are given below Activities Events that take place over a certain period of time but do not necessarily have a fixed termination point Examples Jon walked for an 
hour and Jon is driving the car Accomplishments Events that happen over a certain period of time and then end Examples Jon built a house in a month and Jon is taking the final Achievements These are events that occur instantaneously and lack continuous tenses Examples Jon finished the final in 45 minutes and The vase broke States These are nonactions that hold for a certain period of time but lack continuous tenses Examples Jon knows the answer and Jon likes Mary BIBREF5 adopts the same classification as Vendler and Dowty but divides achievements into two classes The first one is still called achievements but the second one is called semelfactives In this new scheme achievements are instantaneous that is the beginning of the event is the same as its end culminating events but semelfactives are events with no duration that result in no change of state An example of a semelfactive is Jon knocked on the door Table 1 presents the nomenclatures introduced by various linguists in one place There are many variations of the schemes given here although we do not discuss them in this paper In the early work on event classification Aristotle Vendler and others assume that what needs to be classified is the verb However many have concluded that it is impossible to classify a verb into a specific class It is more appropriate to say that a clause containing an event has a class and the classification of such a clause depends not only upon the verb but also on other material present in the clause BIBREF6 BIBREF4 BIBREF7 BIBREF8 In other words the classification must be compositional or must depend on various features of the clause not exclusively verbbased There is also substantial evidence that sentence material other than the verb can change the overall event type For example addition of a direct object can change an activity to an accomplishment BIBREF1 as in the following examples Bill ran for five minutesin five minutes activity Bill ran the mile for 5 minutesin 5 minutes accomplishment Many authors in linguistics have delved deeper into the nature of event classes and have tried to come up with features or characteristics that can be used to identify whether something verb or a clause belongs to a particular event class or not These features or characteristics are necessary to describe the structure of events in a theoretical sense Description of event structure usually refers to the actual words used lexical features or characteristics and also the structure of clause or sentence syntactic features or characteristics Identification of such features may be described as finding parameters of event types or parameterization of event types A lot of the work on parameterization of event typesclasses use the classes espoused by Vendler These include BIBREF9 BIBREF10 BIBREF11 BIBREF12 BIBREF13 BIBREF14 BIBREF15 and others We will only briefly touch upon such work in this paper Our objective is to impress upon the reader that identification of features of event classes is considered an important task by linguists For example BIBREF9 describes Vendlers classes with two binary features or parameters continuousness whether an event has duration and boundedness whether an event has a natural terminal point or endpoint Using these two features the four Vendler classes can be parameterized as follows bounded continuous bounded continuous bounded continuous bounded continuous BIBREF12 BIBREF13 introduce the notion of countability while discussing event classes This is similar to the masscount opposition in nouns 
Terminating events can be counted but nonterminating processes cannot Hoeksema introduces two binary features count and duration to obtain Vendlers classes as seen below The feature duration refers to whether the event takes place over time count duration count duration count duration count duration BIBREF11 refines Vendlers classes by adding a class much like Smiths semelfactives BIBREF5 He suggests that in addition to states there are four event types culmination culminated process point and process He uses two binary features or parameters consequence identifying termination or culmination and atomic or nonatomic which Moens called extended Atomic is also called momentous or pointed Moens classification is given below along with the features and examples consequence atomic examples recognize win the race consequence atomic examples build a house consequence atomic example hiccup tap wink consequence atomic example run swim play the piano examples understand love resemble Moens also claims that culminated process is an event class whose members are made up of smaller atomic units In particular a culminated process is a process with a consequent state This insight that events can be decomposed into subevents was used later by others working on the lexical analysis of events eg BIBREF16 BIBREF17 Others such as BIBREF18 BIBREF19 BIBREF20 have claimed that arguments of verbs are related to subevents We summarize the various features that linguists have used to classify events in Table 2 Of course we do not discuss many other proposals for features in this brief discussion Classification of events and their parameterization of verbs or predicates or clauses are only the first steps in developing a deeper linguistic understanding of events In particular in order to understand the linguistic representation of events linguists need to go beyond classification schemes Mathematical logic is used to represent the semantics of language In particular we use logic to represent the meaning of single sentences Early work on events eg Panini as discussed by BIBREF21 and BIBREF22 stated that language encodes two kinds of informationactions and nonactions Verbs represent actions and nouns represent nonactions or things BIBREF23 proposes that one needs an event variable e to represent events in mathematical logic This variable e is used to represent relations represented by the event denoted by the verb and other constituents in the sentence such as modifiers Davidson claims that logically speaking events are like things in that they can be represented by a variable and this variable can be modified and quantified A question that arises is how many arguments should an event predicate in logic take BIBREF2 Just like nominal modifiers modify nouns event modifiers can modify event predicates An event predicate can take any number of modifiers just like noun nominal modifiers Examples of event modifiers are time place manner and instrument Davidson proposed that an event predicate may take one or more required arguments is this true and any number of adjuncts or optional modifiers Consider the following examples from BIBREF23 The English sentence and the corresponding logical representation or logical form is given for each example John buttered the toast exists e buttered Jones thetoast e John buttered the toast slowly exists e buttered Jones thetoast e wedge slowly e John buttered the toast slowly in the bathroom exists e buttered Jones thetoast e wedge slowly e wedge inthebathroom e John buttered the toast 
slowly in the bathroom with a knife exists e buttered Jones thetoast e wedge slowly e wedge inthebathroom e wedge withaknife e John buttered the toast slowly in the bathroom with a knife at midnight exists e buttered Jones thetoast e wedge slowly e wedge inthebathroom e wedge withaknife e wedge atmidnight e Thus we can see that Davidsons approach places the event variable e in the main predicate of a clause and distributes it among the modifiers of the clause in logical representation In writing the meaning in Davidsonian logic the author creates predicates such as thetoast and inthebathroom just for illustration without going into details Davidsonian representation allows events to be represented in logic logical semantics without requiring verbs to have multiple arities ie without taking different arguments in different situations Because the event is represented as a variable the event variable e can be included in the representation of logical meaning of each modifier or adjunct Another benefit is that using Davidsons representation one can analyze events represented syntactically as nouns nominals or verbs BIBREF21 For example one can refer to an event using the verb to burn or the noun a burn Parsons also observes that using a variable to represent an event allows quantification over events the same way quantification applies to things The following examples are from BIBREF21 In every burning oxygen is consumed forall e burning e rightarrow exists eprime consuming eprime wedge object e oxygen wedge in eeprime Agatha burned the wood exists e burning e wedge subject e Agatha wedge object e wood Oxygen was consumed exists eprime consuming eprime wedge object eprime oxygen We do not go into details of containment of events as expressed by in in the first example above and also the representation of passives as in the third example above In these three examples the author uses predicates such as object and subject which represent more finegrained relationship with the main predicate corresponding to the verb usually than the examples earlier Extending this work BIBREF21 BIBREF24 BIBREF25 have demonstrated that using Davidsons e variable allows one to express tense dependency between perception verbs and their infinitival compliments in a natural way BIBREF21 extends Davidsons approach to logical representation by adding an extra term corresponding to the event type of the predicate He distinguishes between two types of eventualities eventualities that culminate called Cul containing achievements and accomplishments and those that do not called Hold containing states and activities John buttered the toast exists e buttering e wedge agent e Jones wedge theme e toast wedge exists t t now wedge Cul et Mary knows Fred exists e knowing e wedge experiencer e Mary wedge theme e Fred wedge Hold e now In the logical representation in these examples the author uses predicates such as theme agent and experiencer which are usually are called cases in linguistics BIBREF26 In addition the author uses a variable t to express time now is a special indexical variable We do not give detailed discussions of these fine points here BIBREF27 also proposes a logical form based on Davidsons approach The main motivation behind Hobbs approach is to be able to produce satisfactory semantic representation when an event is expressed as a noun or when we want to express the meaning of tenses modalities and adverbial modifiers He also explains how socalled opaque adverbials like almost in the sentence John is almost a 
BIBREF27 also proposes a logical form based on Davidson's approach. The main motivation behind Hobbs's approach is to be able to produce a satisfactory semantic representation when an event is expressed as a noun, or when we want to express the meaning of tenses, modalities, and adverbial modifiers. He also explains how so-called opaque adverbials, like almost in the sentence John is almost a man, can be represented by the Davidsonian approach, which Hobbs extends. He also shows how the ambiguity between de re and de dicto meanings of sentences BIBREF28 that discuss beliefs can be explained by his approach to the logical-form representation of sentences. The representation by Hobbs is quite similar to other such representations based on Davidson, although there are some fine points of difference that we do not discuss here. From a practical point of view, several research efforts in computational linguistics have adopted Hobbs's logical form; one such recent approach is by BIBREF29, who attempt to map Stanford dependency parses BIBREF30 into Hobbsian logical form and discover that sometimes it is possible to do so, but in other cases the mapping requires semantic information that is not present in the dependencies identified by the Stanford parser. Early efforts at identification of event structure in linguistics were usually limited to explaining essential grammatical phenomena. However, others later proposed complex structures that go beyond simple structures such as Davidson's approach of representing an event by a single logical variable and its components by additional predicates. Understanding the structure of an event entails (i) understanding the argument structure of the word or phrase used to express the event in surface form, (ii) understanding the components in the conceptual or semantic description of an event, and (iii) understanding the relation or mapping between the syntactic realization of an event and its conceptual components. In fact, analysis of argument structure includes all three steps and requires finding the relation between the meaning of a verb or a clause and the syntactic realization of its arguments. BIBREF19, BIBREF31 introduce argument structure as a distinct level of representation in linguistics. Other prominent representations proposed include f-structures BIBREF32, linear event structures BIBREF18, lexical conceptual structures (LCS) BIBREF33, BIBREF34, and two related structures, event structures and qualia structures for arguments BIBREF17. There are two sides to event structure: syntactic and semantic. When specifying event structure at the semantic level, the description must be richer than semantic role descriptions BIBREF35, BIBREF36. BIBREF37 argues that named roles, or thematic roles, are too coarse-grained to provide a useful semantic interpretation of a sentence. It is also necessary to capture semantic distinctions in a much more fine-grained manner compared to the prior theories of BIBREF38, BIBREF39, BIBREF40. By this time it was clear that sophisticated approaches to specifying event structure must build upon the rich taxonomy of verb classes BIBREF37 and descriptive vocabulary work BIBREF41 and BIBREF33. BIBREF41 discusses the systematic relations in language between meaning and surface expression. In particular, Talmy focuses on verbs, and especially on verbs that describe motion or location. He sketches a motion event in order to explore issues in lexicalization. The basic motion event consists of one object, called the figure, moving or located with respect to another object, called the referent or the ground. The motion event has additional components such as path and motion, manner and cause. Talmy gives examples of cases where the verb at once can express, in addition to the action or motion, one or more of figure, path, manner, or cause. If a semantic component such as manner or cause is expressed directly by the verb, it is called conflation of manner or cause into the verb.
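The sketch below illustrates this decomposition with a small data structure for Talmy-style motion events, using the often-cited sentence The bottle floated into the cave, in which manner is conflated into the verb while path is carried by the satellite into. The field names and the analysis shown are illustrative assumptions rather than Talmy's own notation.

```python
# Illustrative sketch of Talmy's motion-event components.  In
# "The bottle floated into the cave", the verb "float" conflates MANNER,
# while PATH is expressed by the satellite/preposition "into".

from dataclasses import dataclass
from typing import Optional

@dataclass
class MotionEvent:
    figure: str                 # the moving object
    ground: str                 # the reference object
    path: Optional[str] = None  # trajectory with respect to the ground
    manner: Optional[str] = None
    cause: Optional[str] = None

# "The bottle floated into the cave."
example = MotionEvent(
    figure="the bottle",
    ground="the cave",
    path="into",          # expressed by the satellite "into"
    manner="floating",    # conflated into the verb "float"
)

print(example)
```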
Some verbs incorporate aspect, which represents the pattern of distribution of action through time. In some languages verbs can incorporate personation as well; personation is a specification of the person involved, e.g., self or non-self. Some verbs incorporate what is called valence, where, in conceptualizing an event that involves several different entities in distinct roles, a verb is able to direct greater attention to one of these entities than to the others, or perhaps adopt a specific perspective. Sometimes semantic components are not incorporated into the verb but are expressed through what Talmy calls satellites. A satellite is an immediate constituent of the verb root other than inflections, auxiliaries, or nominal arguments. Talmy enumerates 35 different semantic components. In addition to the six listed above, these include main purpose, result, polarity, aspect, personation, temporal and spatial setting, gender, valence, etc. Talmy also isolates surface elements within a verb complex, such as the root verb, inflections, appositions, subordinate clauses, and satellites. He then examines which semantic elements are expressed by which surface elements. He finds that the relationship is mostly not one-to-one. A combination of semantic elements may be expressed by a single surface element, or a single semantic element by a combination of surface elements. In a similar manner, semantic elements of different types can be expressed by the same type of surface element or by several different ones. Talmy's work does not enumerate lexical entries for specific verbs, but it provides a detailed discussion of the semantic facets of a verb's meaning. The main thrust of Talmy's work is to demonstrate that semantic elements and surface elements relate to each other in specific patterns, both typological and universal. Prior to Talmy, most work treated languages' lexical elements as atomic givens, without involving the semantic components that comprise them; these studies treated the properties that such whole forms can manifest, in particular word order, grammatical relations, and case roles. Talmy's cross-linguistic study determines, for each semantic component, its surface presence, its site (its host constituent or grammatical relation), and its combination within a site. In addition, Talmy's tracing of surface occurrence patterns extends beyond treating a single semantic component at a time to treating a concurrent set of components. Lexical semantics must strive to represent at least some of the various semantic components that Talmy enumerates. In addition, it must incorporate ways of mapping from syntax to semantics, or vice versa. In a very simple system, a set of detailed rules may be able to enumerate the mappings from syntax to semantics and vice versa. In a complex modern system, it is necessary that a machine learning technique automatically acquire the mappings. This usually requires a large number of labeled examples for a machine learning program to learn such mappings. We discuss some such efforts later in the paper. Consider the following illustrative examples: (a) Mary walked. (b) Mary walked to the store. (c) Mary walked for 30 minutes. Sentence (a) describes a process, which is an activity of indefinite length, i.e., the sentence does not say how long the activity of walking took. Although (b) does not give an explicit time duration for the walking event, it depicts an accomplishment and provides a logical culmination to the duration of the event of walking, because the event is over when Mary reaches the store. Sentence (c) talks about a bounded process with the event of walking
terminating although it does not provide an explicit termination point but provides a bound to the time extent of the activity in terms of a duration adverbial This example motivates the observation that the use of prepositional phrases or duration adverbials can change the aspectual classification of an event To explain such phenomena better it is beneficial to have more complex event structures or lexical analysis of event words BIBREF18 hypothesizes that the direct object plays a role in delimitation of an event ie whether it has a culmination or not BIBREF16 BIBREF17 builds upon such observations and hypothesizes that it is necessary to know the how an event can be broken down into subevents He provides the following reasons for subeventual analysis Subeventual analysis of predicates allows verbal decomposition leading to more complex lexical semantics Scope of adverbial modification for some adverbials can be explained better using event substructures Semantic arguments of items within a complex event structure can be mapped onto argument structures better Pustejovsky describes a generative lexicon in the sense that meanings are described in terms of a limited number of socalled generative devices or primitives by drawing upon Aristotles species of opposition BIBREF42 For example to express the meaning of the word closed as in The door is closed The door closed or John closed the door one needs the concept of opposition between closed and notclosed This essential opposition in the meaning of a lexical item is described by Pustejovsky in terms of what is called the qualia structure of the lexical item Thus there are three primary components to the event structure proposed by Pustejovsky Event type The event type of the lexical item is given in terms of the classification schemes discussed earlier Rules for event composition Since an event may be expressed by more than a single verb the meanings of several lexical items may have to be composed to obtain a description For example how does PP attachment change the meaning of the central event in context Mapping rules from event structure to argument structure Pustejovsky describes a number of rules or principles for such mapping These rules describe how how semantic participants are realized syntactically Pustejovsky provides lexical meaning in terms of four separate structures Argument structure The behavior of a word as a function with its arity This provides the predicate argument structure for a word which specifies how it maps to syntax Event structure It identifies a specific event type for a word or a phrase following BIBREF3 Qualia structure It provides the essential attributes of an object that need to be expressed lexically Inheritance structure It specifies how the word is globally related to other concepts in the lexicon In summary Pustejovsky endows complexity to lexical entries for verbs as well as nonverbs so that semantic weight does not fall on verbs alone in the lexicon and when composing the meaning of a sentence from its constituents Pustejovskys approach also reduces the number of lexical entries necessary for individual verbs because the lexical entries become more general Pustejovosky focuses on creating more muscular compositional semantics rather than decomposing a verbs meaning into a specified number of primitives Frequently specific semantic arguments of a verb also called thematic arguments appear in characteristic syntactic positions This has led to theories or proposals regarding mapping between the two 
These theories state that specific semantic arguments belong in specific syntactic positions and that there is 11 relationship between semantic argument and initial syntactic position Such proposals or theories include the Universal Alignment Hypothesis BIBREF43 and Uniformity of Theta Assignment Hypothesis BIBREF44 These are supposed to be universal in that they applied across languages and across verbs For example agents appear in subject positions across languages and verbs This mapping is thus universal However other mappings are not so universal For example the theme can appear in object subject or indirect object position and the experiencer can appear in subject or object position A theory that explains lexicontosyntax mapping also needs to explain the existence of argument alterations In other words it should explain the possibility that the same semantic role can appear in different syntactic positions for the same verb Usually linguists classify verbs into a number of semantic classes different from the ones we talked about earlier and for each class a set of mapping relations and a set of argument alterations are specified BIBREF45 BIBREF46 BIBREF47 However other researchers claim that such semantic classification is difficult to obtain because semantically similar verbs may behave differently across languages BIBREF48 a given verb in a language may have multiple syntactic realizations BIBREF48 BIBREF6 and semantically similar verbs may allow several syntactic realizations BIBREF6 The discussions on lexical representation of verbs so far have been based on efforts where a small number of examples were studied intently by linguists before making the various proposals Starting the 1980s but more so in the 1990s when computer scientists started to focus more on analysis of large text corpora it became evident to some that the lexical analysis of pure linguists can be extended by knowledge gathered from such corpora This led to development of the Comlex lexicon BIBREF49 WordNet BIBREF50 BIBREF51 VerbNet BIBREF52 FrameNet BIBREF53 BIBREF54 BIBREF55 BIBREF56 and other resources Some of these may have started without an automatic analysis of corpora but soon corpora were used to refine and enhance the initial lexical resources Comlex was a substantial resource whose creators spent a lot of effort in enumerating subcategorization features WordNet is a large lexical resource or ontology which encompasses words from all categories WordNet includes verbs but is not verbspecific VerbNet of course is focussed on verbs alone FrameNet is also focussed on verbs Both VerbNet and FrameNet attempt to represent all verbs not only those which are used to represent events However the term event itself is not clearly defined and most anything that is described by a verb can be considered an event in some context or another Comlex was created at New York University as a computational lexicon providing detailed syntactic information on approximately 38000 words BIBREF49 Of course not all of these were verbs or words that describe actions The feature set Comlex provided were more detailed than commerically available dictionaries at the time such as the Oxford Advanced Learners Dictionary OALD BIBREF57 and Longmans Dictionary of Contemporary Englisch LDOCE BIBREF58 The initial word list was derived from OALD The lexicon used a Lisplike notation for dictionary entries We see some sample entries for verbs in Comlex in Figure 1 Comlex paid particular attention to providing detailed subcategorization or 
complement information for verbs and nouns and adjectives that take complements Comlex was influenced by prior work on lexicon such as the Brandeis Verb Lexicon BIBREF59 the ACUILEX project BIBREF60 the NYU Lingustic String Project BIBREF61 the OALD and the LDOCE and it incorporated distinctions made in these dictionaries Comlex had 92 different subcategorization features for verbs The features recorded differences in grammatical functions as well as constituent structure In particular Comlex captured four different types of control subject control object control variable control and arbitrary control It was also able to express the fact that a verb may have different control features for different complement structures or different prepositions within the complement Figure 2 shows a few complements used in Comlex Here cs is the constituent structure gs is the grammatical structure and ex are examples The authors created a initial lexicon manually and then refined it using a variety of sources both commercial and corpusbased The Nomlex dictionary of nominalizations was also developed at NYU BIBREF62 BIBREF63 It enumerated allowed complements for nominalizations and also related nominal complements of the corresponding verbs A nominalization is the noun form of a verb For example the verb promote is nominalized as nominalization Similarly the nominalizations of the verb appoint are appointment and appointee Nomlex entries are similar in syntax to Comlex entries Each Nomlex entry has a nomtype feature which specifies four types of nominalizations action appointment destruction or state knowledge subject teacher object appointee and verbpart for those nominalizations that incorporate a verbal particle takeover Meyers et al BIBREF63 presented a procedure what mapped syntactic and semantic information for an active clause containing a verb eg IBM appointed Alice Smith as vice president into a set of patterns for nominalization IBMs appointment of Alice Smith as vice president or Alice Smiths appointment as vice president The lexical entry for the verb appoint used in Comlex is given in Figure 1 The lexical entry in Nomlex for the action nominalization appointment is given in Figure 3 Levins verb classes BIBREF45 explicitly provide the syntax for each class but do not provide semantic components The classes are based on the ability or inability of a verb to occur in pairs of syntactic frames with the assumption that syntactic frames reflect the underlying semantics For example break verbs and cut verbs are similar because they can all take part in transitive and middle constructions However only break verbs can occur in simple intransitive constructs Similarly cut verbs can occur in conative constructs and break verbs cannot The explanation given is that cut describes a sequence of actions that result in the goal of separating an object into pieces It is possible that one can perform the actions without achieving the result eg John cut at the loaf For break the result is a changed state where the object becomes separated into pieces If the result is not achieved we cannot say that the action of breaking took place The examples below are taken from BIBREF64 Transitive Construction a John broke the window b John cut the bread Middle Construction a Glass breaks easily b This loaf cuts easily Intransitive Construction a The window broke b The bread cut Conative Construction a John broke at the window b John valiantly cut at the frozen loaf but his knife was too dull to make a dent in it Levins 
original classes had some inconsistencies. For example, many verbs were listed in multiple classes, some of which had conflicting syntactic frames. BIBREF65 refined the original classification to remove some of these problems and to build a more fine-grained, syntactically and semantically coherent refinement called intersective Levin classes. Levin's classes are also focused mostly on verbs taking noun phrase (NP) and prepositional phrase (PP) complements, and are weak on coverage of ADJP, ADVP, sentential complements, etc. VerbNet is built using these classes. Organization of verbs into such classes captures generalizations about their properties. Such classes also help create better NLP systems. Many NLP systems benefit from using the mapping from surface realization of arguments to predicate-argument structure that is available in such classes. These classes also capture abstractions (e.g., syntactic and semantic properties), and as a result they are helpful in many operational contexts where the available corpora are small in size and it is thus not possible to extract detailed lexical information about verbs from such small corpora; the predictive power of the classes can compensate for the lack of sufficient data. Lexical classes have been helpful in tasks such as subcategorization acquisition BIBREF66, BIBREF67, BIBREF68, automatic verb acquisition BIBREF69, semantic role labeling BIBREF70, and word sense disambiguation BIBREF71. The WordNet project BIBREF50, BIBREF51 started in the mid-1980s at Princeton University, and over time WordNet has become the most widely used lexical resource in English, especially when one needs a lexical resource that can be used by a program. WordNet was primarily designed as a semantic network and was later modified to be a lexical database. WordNet groups words into synsets (synonym sets) and contains relations among these synsets. A synset contains all the word forms that can refer to a given concept or sense. For each sense of each word, WordNet also provides a short general definition, called its gloss, and example usages. As the name hints, WordNet can be thought of as a large graph where the words and synsets are nodes. These nodes are linked by edges that represent lexical and semantic-conceptual links, which we discuss briefly below. Individual words may also be linked with antonym links. Superclass-subclass relations link entire synsets. WordNet has entries for verbs, nouns, adjectives, and adverbs. To get a better feel for what WordNet is like, let us look at the online version of WordNet at Princeton University. When we search for the word assault in the online WordNet, the results come in two parts, noun and verb, because assault can be either a verb or a noun. The results that show up for the verb are given in Figure 4. The verb senses of assault belong to three synsets; in other words, it has three senses, or can refer to three different concepts. Each synset is composed of several verbs. The second of these synsets contains one sense of each of the verbs assail, set on, and attack. A verb may have four types of entries in WordNet: hypernyms, troponyms, entailments, and coordinate terms. These relations are defined as follows. A verb Y is a hypernym of the verb X if the activity X is a kind of Y; for example, to perceive is a hypernym of to listen. A verb Y is a troponym of the verb X if the activity Y is doing X in some manner; for example, to lisp is a troponym of to talk. A verb Y is entailed by X if by doing X one must necessarily be doing Y; for example, to sleep is entailed by to snore.
Coordinate terms are those verbs that share a common hypernym, e.g., to lisp and to yell. If we want to see the direct troponyms of the second synset for the verb meaning of assault, we get what we see in Figure 5. WordNet has been used in many applications. However, it is most commonly used as a computational lexicon or ontology of English (or another language) for word sense disambiguation, a task that assigns the most appropriate senses (i.e., synsets) to words in specific contexts. Although WordNet is large and detailed, it does not have information required by NLP applications such as predicate-argument structure. Although WordNet contains a sufficiently wide range of common words, it does not cover special domain vocabulary. It is general in nature and therefore difficult to use if specialized vocabulary is needed. Also, WordNet senses are sometimes overly fine-grained, even for human beings, and as a result some researchers argue that it cannot achieve very high performance in the tasks where it is applied. Although WordNet is the most widely used online lexical database in NLP applications, it is also limited in its coverage of verbs. The English WordNet currently contains approximately 117,659 synsets, each synset corresponding to a sense of a word. It has 11,529 verbs that belong to 13,767 synsets. It also contains 117,798 nouns that belong to 82,115 synsets. WordNets have been developed, or are being developed, in a large number of languages, such as Catalan, French, Spanish, Japanese, Chinese, Danish, Korean, and Russian. Notable collaborative efforts include the EuroWordNet BIBREF72, BIBREF73, BIBREF74, Asian WordNet BIBREF75, BIBREF76, BIBREF77, and IndoWordNet BIBREF78 projects. The IndoWordNet focuses on 18 major languages of India; for example, as of June 2012, there are 15,000 synsets in the Assamese WordNet, 24,000 in Bengali, 16,000 in Bodo, 27,000 in Gujarati, and 31,500 in Oriya. WordNets in most other languages are not as sophisticated as the one in English.
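Readers who wish to browse these verb relations programmatically can do so with NLTK's WordNet interface. The minimal sketch below assumes the WordNet data has been downloaded via nltk.download, and relies on the fact that, for verbs, NLTK exposes troponyms through the generic hyponyms() method; the choice of the query word assault simply mirrors the example above.

```python
# Minimal sketch: browsing WordNet verb relations with NLTK.
# Assumes: pip install nltk, plus the downloads below.
import nltk
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

from nltk.corpus import wordnet as wn

for s in wn.synsets("assault", pos=wn.VERB):
    print(s.name(), "-", s.definition())
    print("   hypernyms  :", [h.name() for h in s.hypernyms()])
    # For verbs, WordNet's hyponym links encode the troponym relation.
    print("   troponyms  :", [t.name() for t in s.hyponyms()][:5])
    print("   entailments:", [e.name() for e in s.entailments()])
```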
FrameNet BIBREF79, BIBREF55, BIBREF80 is another substantial, publicly available lexical resource that has come into existence independently. It is based on the theory of frame semantics BIBREF81, BIBREF82, BIBREF53, BIBREF54, BIBREF56, where a frame corresponds to a stereotypical scenario involving an interaction and participants, and the participants play some kind of roles. The idea is that the meanings of most words are best understood in context. FrameNet proposes a small context called a semantic frame: a description of a type of event, relation, or entity and the participants in it. A frame has a name, and this name is used to identify a semantic relation that groups together the semantic roles. Although frames mostly correspond to verbs, there are frames that can be identified by nouns and adjectives. FrameNet also has a large number of annotated sentences; each annotated sentence exemplifies a possible syntactic realization of the semantic roles associated with a frame for a given target word. From all the annotated sentences in the FrameNet corpus, syntactic features and corresponding semantic roles can be extracted to build a large set of rules that encode possible syntactic realizations of semantic frames. FrameNet aims to document the range of semantic and syntactic combinatory possibilities (valences) of each word in each of its senses, through computer-assisted annotation of example sentences and automatic tabulation of the annotation results. The FrameNet lexical database currently contains more than 10,000 lexical units (defined below), more than 6,000 of which are fully annotated, in nearly 800 hierarchically related semantic frames, exemplified in more than 170,000 annotated sentences; see the FrameNet website for the latest statistics. Like the other publicly available lexical resources, FrameNet has been used for semantic role labeling in applications such as information extraction, machine translation, event recognition, and sentiment analysis. An example of a frame is Attack. This frame has several frame elements. The core frame elements are Assailant and Victim. There are a large number of non-core frame elements; these include Circumstances, Containing_event, Direction, Duration, Explanation, Frequency, Manner, Means, Place, Purpose, Result, Time, Weapon, etc. Each of these frame elements can be seen in one or more annotated sentences. Here is an example annotated sentence: [Assailant The gang] ASSAULTED [Victim him] [Time during the drive to Rickmansworth] [Place in Hertfordshire].
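The Attack frame just illustrated can also be inspected programmatically through NLTK's FrameNet reader. The short sketch below assumes the FrameNet 1.7 data has been downloaded via nltk.download, and the attribute names it uses (FE, coreType, lexUnit) come from NLTK's reader rather than anything defined in this survey.

```python
# Minimal sketch: inspecting the FrameNet "Attack" frame with NLTK.
import nltk
nltk.download("framenet_v17", quiet=True)

from nltk.corpus import framenet as fn

frame = fn.frame("Attack")                      # look the frame up by name
print(frame.name, "-", frame.definition[:80], "...")

# Frame elements, split into core and non-core.
core = [name for name, fe in frame.FE.items() if fe.coreType == "Core"]
print("Core frame elements:", core)             # expected to include Assailant, Victim

# Lexical units that evoke the frame; verbal ones end in ".v".
verbal_lus = sorted(lu for lu in frame.lexUnit if lu.endswith(".v"))
print("Some verbal lexical units:", verbal_lus[:10])
```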
The frame Attack is associated with a large number of lexical units, including verbs and nouns. Example verbs are ambush, assail, assault, attack, bomb, bombard, charge, hit, infiltrate, invade, raid, storm, and strike. Examples of nouns are airstrike, ambush, assailant, assault, attack, etc. The frame Attack inherits from a frame called Intentionally_affect. It is inherited by the frames Besieging, Counterattack, Invading, and Suicide_attack. FrameNet annotates each frame element (or, more precisely, its textual realization) in at least three layers: a frame element name (e.g., Food), a grammatical function (e.g., Object), and a phrase type (e.g., NP). Only the frame elements are shown in the Web-based interface to reduce visual clutter, although all three layers are available in the XML downloads. FrameNet has defined more than 1,000 semantic frames. These frames are linked together using frame relations, which relate more general frames to specific ones. This allows for reasoning about events and intentional actions. Because frames are semantic, they are often similar across languages. For example, frames about buying and selling involve the frame elements Buyer, Seller, Goods, and Money in every language. FrameNets have been developed for languages such as Portuguese, German, Spanish, Chinese, Swedish, and Japanese. At the current time there are 1,159 frames in FrameNet, with approximately 9.6 frame elements per frame. There are 12,595 lexical units, of which 5,135 are nouns, 4,816 are verbs, and 2,268 are adjectives; there are about 12.1 lexical units per frame. There have been some attempts at extending the coverage of FrameNet. One such effort is by BIBREF83, who use a broad-coverage lexical-semantic resource called PPDB to add lemmas as potential triggers for a frame and to automatically rewrite existing example sentences with these new triggers. PPDB, the Paraphrase Database, is a lexical, phrasal, and syntactic paraphrase database BIBREF84. They use PPDB's lexical rules along with a 5-gram Kneser-Ney smoothed language model, trained using KenLM BIBREF85 on the raw English sequence of the Annotated Gigaword corpus BIBREF86. PropBank BIBREF87, BIBREF88, BIBREF89 is an annotated corpus of verb propositions and their arguments. PropBank does not annotate events or states of affairs described using nouns. PropBank-style annotations are usually closer to the syntactic level, whereas FrameNet-style annotations are more semantically motivated, although, as discussed earlier, FrameNet provides layers of annotation including syntactic parses. PropBank annotates one million words of the Wall Street Journal portion of the Penn Treebank BIBREF90 with predicate-argument structure for verbs, using semantic role labels for each verb argument. Although the same tags are used across all verbs, viz. Arg0, Arg1, ..., Arg5, these tags have verb-specific meanings. PropBank requires that the use of a given argument label be consistent across different uses of a specific verb, including its syntactic alternations. Thus the Arg1 (italicized) in John broke the window is the same window that is annotated as the Arg1 in The window broke, even though it is the syntactic object in one case and the syntactic subject in the other. PropBank does not guarantee that an argument label is used consistently across different verbs; for example, Arg2 is used as a label to designate the destination of the verb bring but the extent of the verb rise. Generally the arguments are simply listed in the order of their prominence for each verb. However, PropBank tries to use Arg0 as the consistent label for the
prototypical agent and Arg1 for the prototypical patient as discussed in BIBREF7 PropBank divides words into lexemes using a very coarsegrained sense disambiguation scheme Two senses are considered distinct only if their argument labels are different In PropBank each word sense is called a frameset PropBanks model of predicateargument structure differs from dependency parsing In dependency parsing each phrase can be dependent only on one other phrase But in PropBank a single phrase can be arguments to several predicates PropBank provides a lexicon which divides each word into coarsegrained senses or framesets and provides examples usages in a variety of contexts For example the to make an attack criticize strongly sense of the predicate lemma or verb attack is given in Table 3 along with an example PropBank tries to keep rolesets consistent across related verbs Thus for example the buy roleset is similar to the purchase and sell rolesets See Table 4 taken from BIBREF87 One can clearly see that it may be possible to merge such similar framesets together to obtain something similar to the verb roles in FrameNets Commerce frameset Although similar PropBank differs from FrameNet we have discussed earlier in several ways PropBank is a resource focussed on verbs whereas FrameNet is focussed on frame semantics that generalizes descriptions across similar verbs as well as nouns and other words eg adjectives as discussed earlier PropBank was created with the idea of serving as training data to be used with machine learning algorithms for the task of semantic role labeling It requires all arguments to a verb to be syntactic constituents in nature In addition PropBank differentiates among senses of a verb if the senses take different sets of arguments There is a claim that due to such differences semantic role labeling is easier using a corpus annotated with PropBank type annotation compared to FrameNet type annotation VerbNet BIBREF91 BIBREF64 BIBREF92 attempts to provide a definitive resource for lexical entries for English verbs It is compatible with WordNet but has lexical entries with explicit syntactic and semantic information about verbs using Levins verb classes BIBREF45 It uses verb classes to capture generalizations and for efficient encoding of the lexicon Its syntactic frames for verb classes are represented using a finegrained variation of Lexicalized Tree Adjoining Grammers BIBREF93 BIBREF94 BIBREF65 augmented with semantic predicates allowing for creating compositional meanings for more complex constituents such as phrases and clauses VerbNet provides traditional semantic information such as thematic roles and semantic predicates with syntactic frames and selectional restrictions it also allows for extension of verb meaning through adjunction of particular syntactic phrases A verb entry corresponds to a set of classes corresponding to the different senses of the verb For each verb sense there is a verb class as well as specific selectional restrictions and semantic characteristics that may not be captured by class membership VerbNet also contains references to WordNet synsets Verb classes capture generalizations about verb behavior Each verb class lists the thematic roles that the predicateargument structure of its members allows and provides descriptions of the syntactic frames corresponding to allowed constructs with selectional restrictions given for each argument in each frame Verb classes are hierarchically organized It required some manual restructuring of Levins classes Each 
event E is decomposed into a threepart structure according to BIBREF11 BIBREF95 VernNet uses a time function for each predicate specifying whether the predicate is true during the preparatory culmination or consequentresult stage of an event This structure allows VerbNet to express the semantics of classes of verbs like Change of State verbs For example in the case of the verb break it is important to distinguish between the state of the object before the end of the action and the new state that results afterwards Table 5 is an example of a simplified VerbNet entry from its website The original VerbNet was extended using extensions proposed by BIBREF68 This resulted in the addition of a large number of new classes and also a much more comprehensive coverage of English verbs Table 6 provides statistics of VerbNets coverage in its initial version VerbNet as described in BIBREF91 BIBREF64 BIBREF96 and its current version as in its official Website The absence of any lexicon or resource that provides for accurate and comprehensive predicateargument structure or semantic role labels for English verbs has been long considered a critical element that was needed to produce robust natural language processors This was shown clearly by BIBREF97 who evaluated an EnglishKorean machine translation system The authors showed that among several factors impacting on the low quality of translations one that was most influential was the inability to predicateargument structure Even with a grammatical parse of the source sentence ad complete vocabulary coverage the translation was frequently bad This is because the authors found that although the parser recognized the constituents that are verb arguments it was unable to precisely assign the arguments to appropriate positions This led to garbled translations Simply preserving the proper argument position labels and not changing other things resulted in substantial improvement in acceptable translations When using one parser the improvement was 50 with a second parser the improvement was dramatic 300 Thus the purpose in developing lexical resources such as FrameNet and PropBank PropBank especially so is to provide for training data annotated with predicateargument positions with labels Such data can be used with machine learning techniques There have been attempts to integrate lexical resources to obtain more robust resources with wider coverage We discuss one such effort here BIBREF98 integrate FrameNet VerbNet and WordNet discussed earlier into a single and richer resource with the goal of enabling robust semantic parsing The reason for building connections among the three lexical resources is that similar syntactic patterns often introduce different semantic interpretations and similar meanings can be realized in many different ways The improved resource provides three enhancements 1 It extends the coverage of FrameNet 2 It augments VerbNets lexicon with frame semantics and 3 It implements selectional restrictions using WordNet semantic classes They use knowledge about words and concepts from WordNet information about different situations from FrameNet and verb lexicon with selectional restrictions from VerbNet They extract syntactic features and corresponding semantic roles from all annotated sentences in FrameNet to build a large set of rules that encode the possible syntactic realization of semantic frames They identify the VerbNet verb class that corresponds to a FrameNet frame and this allows them to parse sentences that include verbs not covered by 
FrameNet This they do by exploiting a transitivity relation via VerbNet classes verbs that belong to the same Levin classes are likely to share the same FrameNet frame and their frame semantics can be analyzed even if not explicitly defined in FrameNet They use information from WordNet in several stages in the parsing process The argument constraints encoded in VerbNet eg animate concrete are mapped to WordNet semantic classes to provide selectional restrictions for better frame selection and role labeling in a semantic parser In addition the mapping between WordNet verb entries and FrameNet lexical units allows them to extend the parser coverage by assigning common frames to verbs that are related in meaning according to the WordNet semantic hierarchies The authors found that their mapping algorithms produced 8125 correct assignment of VerbNet entries with a correct FrameNet frame They also were able to map 7822 VerbNet predicateargument structures with some syntactic features and selectional restrictions to the corresponding FrameNet semantic roles The OntoNotes project BIBREF99 BIBREF100 BIBREF101 has created an infrastructure for much richer domain independent representation of shallow meaning for use in natural language processing tasks including event detection and extraction in English Chinese and Arabic OntoNotes annotates documents at several layers syntax propositions word senses including nominalizations and eventive noun senses named entities ontology linking and coreference It has been designed to be a wellannotated largescale corpus from which machine learning programs can learn many different aspects of meaning felicitously OntoNotes uses Penn TreeBank parses BIBREF102 PropBank propositional structures BIBREF87 BIBREF88 BIBREF89 on top of Penn Treebank and uses the Omega ontology BIBREF103 for word sense disambiguation As we know the Penn Treebank is annotated with information from which one can extract predicateargument structures The developers of OntoNotes use a parser that recovers these annotations BIBREF104 The Penn Treebank also has markers for empty categories that represent displaced constituents Thus to create OntoNotes its developers use another parser BIBREF105 BIBREF106 to extract function words They also use a maximum entropy learner and voted preceptons to recover empty categories PropBank as we know annotates the onemillion word Wall Street Journal part of the Penn Treebank with semantic argument structures for verbs As we have noted earlier the creators of OntoNote and others have discovered that WordNets very fine grained sense distinctions make interannotator agreement or good tagging performance difficult To achieve better performance OntoNotes uses a method BIBREF107 BIBREF108 for sense inventory creation and annotation that includes links between grouped word senses and the Omega ontology BIBREF103 OntoNotes represents sense distinctions in a hierarchical structure like a decision tree where coarsegrained distinctions are made at the root and increasingly finegrained restrictions until reaching WordNet senses at the leaves Sets of senses under specific nodes of the tree are grouped together into single entries along with syntactic and semantic criteria for their groupings these are presented to annotators for improved annotation agreement obtaining up to 90 interannotator agreement OntoNote follows a similar method for annotation of nouns To allow access to additional information such as subsumption property inheritance predicate frames from other 
sources links to instances and so on OntoNotes also links to an ontology This requires decomposing the hierarchical structure of OntoNotes into subtrees which then can be inserted at the appropriate conceptual node in the ontology OntoNotes represents its terms in the Omega ontology BIBREF103 Omega has been assembled by merging a variety of sources such as WordNet Mikrokosmos BIBREF109 and a few upper ontologies such as DOLCE BIBREF110 SUMO BIBREF111 and Penman Upper Model BIBREF112 OntoNote also includes and crossreferences verb frames from PropBank FrameNet WordNet and Lexical Conceptual Structures BIBREF113 OntoNotes also has coreferences It connects coreferring instances of specific referring expressions primarily NPs that introduce or access a discourse entity For the purpose of our paper it is important to know that OntoNotes tries to annotate nouns that carry predicate structure eg those whose structure is derived from their verbal form In particular OntoNotes annotates nominalization and eventive senses of nouns OntoNotes applies two strict criteria for identifying a sense of a noun as a nominalization BIBREF101 The noun must relate transparently to a verb and typically display a nominalizing morpheme such as ment governgovernment ion contributecontribution though it allows some zeroderived nouns such as kill the noun derived from kill the verb The noun must be able to be used in a clausal noun phrase with its core verbal arguments related by semantically empty or very light licensers such as genitive markers as in The Romans destruction of the city or with the verbs usual particle or prepositional satellites as in Johns longing for fame and fortune Just like nominalization senses OntoNotes has strict definition of eventive senses They have two definitional criteria 1 and 2 and a diagnostic test 3 for determining if a noun sense is eventive Activity causing a change of state A noun sense is eventive when it refers to a single unbroken activity or process occurring during a specific time period that effects a change in the world of discourse Reference to activity proper The noun must refer to the actual activity or process not merely to the result of the activity or the process The noun patterns with eventive predicates in the have test BIBREF114 describes the following heuristic lexicosyntactic diagnostic test to apply to many nouns The test has four parts to it as discussed briefly below Create a natural sounding sentence using the construction X had NP where NP is a noun phrase headed by the noun in question eg John had a party Check if the sentence can be used in present progressive as in John is having a party If the sentence is felicitous it adds to the noun being inventive If it sounds odd it adds to the evidence that the noun is stative Check if the sentence can be used in a pseudocleft construction such as What John did was have a party If it is felicitous the noun is more likely to be eventive If not it is more likely to be stative Check if the sentence suggests iterative or habitual action using the simple present such as John has a party every Friday If so it adds evidence that the noun is eventive If the sentence suggests that the situation is taking place at that very moment that it is uttered it adds evidence that the noun is stative as in John has a cold In addition to OntoNotes there have been other efforts at obtaining largescale annotated corpora such at the GLARF project BIBREF115 that tries to capture information from various Treebanks and superimpose a 
predicate argument structure The Unified Linguistic Annotation ULA project BIBREF116 is a collaborative effort that aims to merge PropBank NomBank the Penn Discourse Treebank BIBREF117 and TimeBank BIBREF118 with coreference information Different models of events have been used in computational linguistics work geared toward information extraction We discuss TimeML events next followed by events We discuss biomedical event extraction in Section and extraction of events from Twitter in Section TimeML is a rich specification language for event and temporal expressions in natural language text In the TimeML BIBREF119 BIBREF120 annotation scheme an event is a general term for situations that happen or occur Events can be punctual or momentary or last for a period of time Events in TimeML format may also include predicates describing states or circumstances in which something holds true Only those states that participate in an opposition structure as discussed in Subsection UID42 are annotated In general an event can be expressed in terms of verbs nominalizations adjectives predicative clauses or prepositional phrases TimeML allows an event annotated with the EVENT tag to be one of seven types occurrence state report iaction istate aspectual and perception The first five are special cases The last two Occurrence and State are used for general cases that do not fall in the special ones Reporting A reporting event describes an action declaring something narrating an event informing about a situation and so on Some verbs which express this kind of event are say report tell explain and state An example sentence with the verb say is Punongbayan said that the 4795foothigh volcano was spewing gases up to 1800 degrees IAction I stands for intensional According to the TimeML annotation guidelines an iaction is a dynamic event that takes an eventdenoting argument which must be explicitly present in the text Examples of verbs that are used to express iactions include attempt try promise and offer An example sentence with the verb try is Companies such as Microsoft or a combined worldcom MCI are trying to monopolize Internet access IState IState stands for intensional state Like an IAction an IState event takes an argument that expresses an event Unlike an IAction the IState class is used for events which are states An example sentence that uses the verb believe is We believe that his words cannot distract the world from the facts of Iraqi aggression Other verbs used to express istates include intend want and think Aspectual An aspectual predicate takes an event as an argument and points to a part of the temporal structure of the event Such a part may be the beginning the middle or the end of an event Verbs such as begin finish and continue are such aspectual predicates An example sentence with the verb begin is All nonessential personnel should begin evacuating the sprawling base Perception This class includes events involving the physical perception of another event Such events are typically expressed by verbs such as see watch glimpse hear listen and overhear An example sentence with the verb see is Witnesses tell Birmingham police they saw a man running Occurrence An occurrence is a general event that occurs or happens in the world An example of an occurrence is given in the following sentence The Defense Ministry said 16 planes have landed so far with protective equipment against biological and chemical warfare The occurrence has been highlighted in bold State A state describes circumstances in which 
something obtains or holds true. An example sentence that shows two states is: It is the US economic and political embargo which has kept Cuba in a box. TimeML allows one to mark up temporal expressions using the TIMEX3 tag. Temporal expressions are of three types: (a) fully specified temporal expressions, such as June 11, 2013; (b) underspecified temporal expressions, such as Monday; and (c) durations, such as three days. TimeML uses the SIGNAL tag to annotate sections of text, usually function words, that indicate how temporal objects are related to each other. The material marked by SIGNAL may contain different types of linguistic elements: indicators of temporal relations such as prepositions (e.g., on and during), other temporal connectives (e.g., when), etc. The TIMEX3 and SIGNAL tags were introduced by BIBREF122, BIBREF123. A major innovation of TimeML is the LINK tags, which encode relations between temporal elements of a document and also help establish ordering between the events in a document. There are three types of links: TLINK, showing temporal relationships between events or between an event and a time; SLINK, or subordination link, to show a context that introduces relations between two events, or an event and a signal; and ALINK, or aspectual link, to show the relationship between an aspectual event and its argument event. TLINK allows for the 13 temporal relations introduced by BIBREF124, BIBREF125. SLINK is used to express contexts such as the use of modal verbs, negatives, positive and negative evidential relations, factives (which require the event argument to be true), and counterfactives (which require the event argument to be false). ALINK expresses initiation, culmination, termination, or continuation relationships between an event and its argument event. Finally, TimeML is able to express three types of causal relations: an event causing an event, an entity causing an event, and the special situation where the discourse marker and is used as a signal to introduce a TLINK indicating that one event happened before another, as in He kicked the ball and it rose into the air. The creators of TimeML have spent significant effort to develop a fairly large corpus annotated with TimeML tags. This corpus is called the TIMEBANK corpus BIBREF118 and has 300 annotated articles. It has been used to learn to extract events and temporal relations among events. In the ACE model, only interesting events are annotated in corpora and thus extracted by a trained program; ACE annotators specify the event types they want to be extracted. For example, in one information extraction contest, an ACE 2005 event was of 8 types, each with one or more subtypes. The types and subtypes are given below. Life: Be-Born, Marry, Divorce, Injure, and Die. Movement: Transport. Transaction: Transfer-Ownership and Transfer-Money. Business: Start-Organization, Merge-Organization, and Declare-Bankruptcy. Contact: Meet and Phone-Write. Conflict: Attack and Demonstrate. Personnel: Start-Position, End-Position, Nominate, and Elect. Justice: Arrest-Jail, Release-Parole, Trial-Hearing, Charge-Indict, Sue, Convict, Sentence, Fine, Execute, Extradite, Acquit, Appeal, and Pardon. Each event also has four categorial attributes; the attributes and their values are given below. Modality: Asserted and Other, where Other includes, but is not limited to, believed events; hypothetical events; commanded and requested events; threatened, proposed, and discussed events; and promised events. Polarity: Positive and Negative. Genericity: Specific and Generic. Tense: Past, Present, Future, and Unspecified. ACE events have arguments. Each event type has a set of possible
argument roles which may be filled by entities time expressions or other values Each event type has a set of possible argument roles There are a total of 35 role types although no single event can have all 35 roles A complete description of which roles go with which event type can be found in the annotation guidelines for ACE 2005 events In an ACE event time is noted if when explicitly given Others have defined events or event profiles themselves to suit their purpose For example Cybulska and Vossen BIBREF126 BIBREF127 describe an historical information extraction system where they extract event and participant information from Dutch historical archives They extract information using what they call profiles For example they have developed 402 profiles for event extraction although they use only 22 of them in the reported system For extraction of participants they use 314 profiles They also 43 temporal profiles and 23 location profiles to extract temporal and locational information Profiles are created using semantic and syntactic information as well as information gleaned from Wordnet BIBREF50 The ACE annotation scheme discussed earlier was developed by NIST in 1999 and the ERE Entities Relations and Events scheme was defined as a simpler version of ACE BIBREF128 One of EREs goals is also to make annotating easier and annotations more consistent across annotators ERE attempts to achieve these goals by removing the most problematic annotations in ACE and consolidating others We will discuss the three types annotations now Entities Relations and Events For example consider Entities ACE and ERE both have Person Organization GeoPolitical Entity and Location as types of entities ACE has two additional types Weapon and Vehicle which ERE does not have ERE doesnt distinguish between Facility and Location types and merge them into Location ERE has a type called Title for titles honorifics roles and professions ACE has subtypes for entity mentions which ERE does not In addition to subtypes ACE classifies entity mentions into classes eg Specific Generic and Underspecified ERE has only Specific ACE and ERE also have differences in how extents and heads are marked and levels of entity mentions The purpose of Relation annotation in both ACE and ERE is to extract a representation of the meaning of the text not necessarily tied to the underlying syntactic or lexical representation Both schemes include Physical PartWhole Affiliation and Social relations although the details are a bit different Both tag relations inside a single sentence and tags only explicit mentions Nesting of tags is not allowed Each relation can have up to two ordered Argument slots Neither model tags negative relations However ERE annotates only asserted real events whereas ACE allows others as well eg Believed Events Hypothetical Events Desired Events and Requested Events There is no explicit trigger word in ACE which annotates the full clause that serves as the trigger for a relation whereas ERE attempts to minimize the annotated span by allowing for the tagging of an optional trigger word or phrase ACE justifies tagging of each Relation by assigning Syntactic Clauses to them such as Possessive PreMod and Coordination The three types of Relations inn ERE and ACE have subtypes Physical PartWhole and Social and Affiliation but ERE collapses ACE types and subtypes to make them more concise possibly less specific BIBREF128 discuss the similarities and differences between ACE and ERE in detail Events in both ACE and ERE are defined as 
specific occurrences involving specific participants Like entities and relations ERE is less specific and simplified compared to ACE Both annotation schemes annotate the same event types Life Movement Transaction Business Conflict Contact Personnel and Justice BIBREF129 use another annotation scheme called Richer Event Description RED synthesizing coreference BIBREF130 BIBREF131 and THYMETimeML temporal relations BIBREF132 BIBREF129 discusses challenges in annotating documents with the RED schema in particular causeeffect relations The usual way to annotate causeeffect relations is using the counterfactual definition of causation in philosophy BIBREF133 BIBREF134 X causes Y means if X had not occurred Y would not have happened However BIBREF129 found that this definition leads to many difficult and sometimes erroneous annotations and thats why while performing RED annotations they used another definition BIBREF135 BIBREF136 which treats causation as a local relation depending on intrinsic properties of the events and what goes on between then and nothing else In particular the definition is X causes Y means Y was inevitable given X In fact in the annotations performed by BIBREF129 they use the new definition to make judgements but use the old definition as a precondition to the new one The Knowledge Base Population Track TACBKP was started by NIST in 2009 to evaluate knowledge bases KBs created from the output of information extraction systems The primary tasks are a Entity linkinglinking extracted entities to entities in knowledge bases and b Slot fillingadding information to entity profiles information that is missing from the knowledge base BIBREF137 Wikipedia articles have been used as reference knowledge bases in evaluating TACKBP tasks For example given an entity the goal is to identify individual nuggets of information using a fixed list of inventory relations and attributes For example given a celebrity name the task is to identify attributes such as schools attended occupations important jobs held names of immediate family members etc and then insert them into the knowledge base Many people compare slot filling to answering a fixed set of questions obtaining the answers and filling in the appropriate slots in the knowledge base Slot filling in TACKBP differs from extraction in ACE and ERE notations in several ways such as TACKBP seeks out information for named entities only chiefly PERs and ORGs TACKBP seeks to obtain values for slots and not mentions and events are handled as uncorrelated slots and assessment is like in questionanswering Our focus on this paper has been on extracting events and we know that to extract events properly we need to explicitly extract event mentions and also extract associated attributes such as agents locations time of occurrence duration etc Rather than explicitly modeling events TACKBP does so implicitly as it captures various relations associated with for example the agent of the event For example given a sentence Jobs is the founder and CEO of Apple TACKBP may pick Apple as the focal entity and identify Jobs as the filler of its founder slot and Jobs as the filler of its CEO slot However an ACE or ERE annotation program will ideally pick the event as Founding with Jobs as an argument say the first argument or arg1 or the Actor of the event and Apple as another argument say arg2 Many even extraction systems have been built over the years A big motivator for development of event extraction systems seem to be various contests that are held every few 
years although there has been considerable amount of noncontest related research as well Although we discuss extraction of events represented by various formats the methods are not really different from each other That is why we discuss TimeML events in more detail and present the others briefly in this section We describe a few of the approaches that have been used for extracting TimeML type events Quite a few papers that attempt to do so have been published BIBREF121 BIBREF138 BIBREF139 BIBREF140 BIBREF141 and we pick just a few representative papers BIBREF121 implemented an event and event feature extraction system called EVITA and showed that a linguistically motivated rulebased system with some help using statistical disambiguation performed well on this task Evita is claimed to be a unique tool within the TimeML framework in that it is very general being not based on any preestablished list of event patterns and being domainindependent Evita can also identify based on linguistic cues grammatical information associated with event referring expressions such as tense aspect polarity and modality as stated in the TimeML specification Evita does not directly identify event participants but can work with named entity taggers to link arguments to events Evita breaks down the event recognition problem into a number of subtasks Evita preprocesses the input text using the Alembic Workbench POS tagger lemmatizer to find lexical stems and chunker to obtain phrase chunks verbal nominal and adjectival the three that are commonly used as event referring expressions BIBREF142 For each subtask after preprocessing it combines linguistic and statisticallybased knowledge Linguistic knowledge is used in local and limited contexts such as verb phrases and to extract morphological information Statistical knowledge is used to disambiguate nominal events The subtasks in event recognition in Evita are determination of event candidates and then the events identification of grammatical features of events additional clustering of event chunks for event detection and grammatical feature identification in some situations For event identification Evita looks at the lexical items tagged by the preprocessing step It uses different strategies for identifying events in the three categories verbs nouns and adjectives For identifying events in a verbal chunk Evita performs lexical lookup and limited contextual parsing in order to exclude weak stative predicates such as be and generics such as verbs with bare plural subjects Identifying events expressed by nouns involves a phase of lexical lookup and disambiguation using WordNet and by mapping events in SemCor and TimeBank 12 to WordNet synsets Evita consults 25 subtrees from WordNet where all the synsets denote events One of these the largest is the tree underneath the synset that contains the word event If the result of this lexical lookup is not conclusive ie if a nominal occurs as both event and nonevent in WordNet a disambiguation step is applied based on rules learned by a Bayesian classifier trained on SemCor To identify events from adjectives Evita uses a conservative approach where it tags only those adjectives that were annotated as such in TimeBank 12 when such adjectives occur as the head of a predicative complement To identify grammatical features eg tense aspect modality polarity and nonfinite morphology of events Evita uses different procedures based on the part of speech of the event denoting expression But in general it involves using morphology pattern matching and applying a large number of simple linguistic rules eg 140 such rules for verbal chunks However to identify the event class it performs lexical lookup and word sense disambiguation Clustering is used to identify chunks from the preprocessing stage that contribute information about the same event eg when modal auxiliaries or copular verbs are used Clustering is activated by specific triggers such as the presence of a chunk headed by an auxiliary verb or a copular verb Evaluation of Evita was performed by comparing its performance against TimeBank 12 The reported performance was that Evita had 7403 precision 8731 recall and an Fmeasure of 8012 in event detection Accuracy precision for polarity aspect and modality was over 97 in each case
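To make the WordNet lookup step concrete, the following is a minimal sketch of how a nominal can be checked against an event denoting subtree, assuming NLTK's WordNet interface is available. The single event.n.01 subtree and the three way event/non-event/ambiguous outcome only approximate Evita's 25 subtrees and its Bayesian disambiguator, which are not reproduced here.

```python
# Sketch of Evita-style nominal event identification via WordNet lookup.
# Assumes the NLTK WordNet corpus is installed (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

EVENT_ROOT = wn.synset("event.n.01")  # the largest event-denoting subtree

def synset_is_event(synset):
    """True if the synset sits underneath the 'event' subtree in WordNet."""
    return any(EVENT_ROOT in path for path in synset.hypernym_paths())

def nominal_event_status(noun):
    """Classify a noun as 'event', 'non-event', or 'ambiguous' by lexical lookup."""
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return "non-event"
    flags = [synset_is_event(s) for s in synsets]
    if all(flags):
        return "event"
    if not any(flags):
        return "non-event"
    return "ambiguous"  # Evita would fall back to a statistical disambiguator here

if __name__ == "__main__":
    for noun in ["explosion", "table", "party"]:
        print(noun, "->", nominal_event_status(noun))
```

The ambiguous case is exactly where a trained classifier would take over in the full system.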
BIBREF138 use TimeBankannotated events and identify which words and phrases are events They consider event identification as a classification task that works on wordchunks They use the BIO formulation that augments each class label with whether the word is the Beginning Inside or Outside of a chunk BIBREF143 They use a number of features categorized into various classes for machine learning These include affix features eg three or four characters from the beginning and end of each word morphological features eg base form of the word and base form of any verb associated with the word if the word is a noun or gerund for example wordclass features eg POS tags which noun or verb cluster a word belongs to where the clusters are obtained using cooccurrence statistics in the manner of BIBREF144 governing features eg governing light verb determiner type cardinal or genitive for example and temporal features eg a BIO label indicating whether the word is contained inside a TIMEX2 temporal annotation a governing temporal preposition like since till before etc They also use negation features and Wordnet hypernyms as features For classification they use the TinySVM implementation of SVM by BIBREF145 They perform experiments with TimeBank documents using a 90 stratified sampling for training and 10 for testing They obtained 82 precision and 71 recall with an Fmeasure of 0759 They did compare their algorithm with a version of Evita they programmed themselves this system obtained 0727 Fmeasure and thus Bethard and Martins approach performed about 4 better When Bethard and Martins system was extended to identifying the semantic class of an event it did not perform as well obtaining precision of 67 recall of 51 and Fmeasure of 0317 However the system was much better at identifying the classes of verbs with Fmeasure of 0707 compared to finding classes of nouns with an Fmeasure of 0337 only
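The BIO formulation mentioned above is easy to make concrete. The sketch below, with an invented sentence and spans, shows how annotated event spans are projected onto per token B/I/O labels for classification and how predicted label sequences are decoded back into spans; a real system would then feed these labels, together with the features described above, to the SVM.

```python
# Minimal sketch of the B-I-O encoding used for chunk-level event identification.
def spans_to_bio(tokens, event_spans, label="EVENT"):
    tags = ["O"] * len(tokens)
    for start, end in event_spans:          # token index ranges, end exclusive
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

def bio_to_spans(tags):
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last open span
        if tag.startswith("B-") or (tag == "O" and start is not None):
            if start is not None:
                spans.append((start, i))
                start = None
        if tag.startswith("B-"):
            start = i
    return spans

tokens = ["The", "stock", "market", "crash", "shook", "investors"]
tags = spans_to_bio(tokens, [(3, 4), (4, 5)])
print(tags)                # ['O', 'O', 'O', 'B-EVENT', 'B-EVENT', 'O']
print(bio_to_spans(tags))  # [(3, 4), (4, 5)]
```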
TIPSem Temporal Information Processing based on Semantic information is a system that participated in the TempEval2 Competition BIBREF146 in 2010 which presented several tasks to participants although we are primarily interested in the event extraction task TIPSem achieved the best F1 score in all the tasks in TempEval2 for Spanish and for English it obtained the best F1 metric in the task of extracting events which required the recognition and classification of events as defined by the TimeML EVENT tag TIPSem learns Conditional Random Field CRF models using features for different language analysis levels although the approach focuses on semantic information primarily semantic roles and semantic networks Conditional Random Fields present a popular and efficient machine learning technique for supervised sequence labeling BIBREF147 The features used for training the CRF models are similar to ones used by others such as Bethard and Martin although details vary However they add semantic role labels to the mix of features In particular they identify roles for each governing verb Semantic role labeling BIBREF148 BIBREF149 BIBREF150 identifies for each predicate in a sentence semantic roles and determines their arguments agent patient etc and their adjuncts locative temporal etc These two features the semantic role and the governing verb were combined in TIPSem to capture the relation between them The authors think this combination introduces additional information by distinguishing roles that are dependent on different verbs The importance of this falls especially on the numbered roles A0 A1 etc meaning different things when depending on different verbs The test corpus consists of 17K words for English and 10K words for Spanish provided by the organizers of TempEval2 For English they obtained precision of 081 recall of 086 and Fmeasure of 083 for recognition with event classification accuracy of 079 for Spanish the numbers were 090 086 088 for recognition and 066 for classification accuracy We provide these numbers although we know that it is difficult to compare one system with another for example Bethard and Martins system with TIPSem since the corpora used are different
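A rough sketch of a TIPSem style CRF event recognizer is given below. It assumes the third party package sklearn-crfsuite as a stand-in for whatever CRF toolkit was actually used, and the tiny hand built training sentence, the feature names and the combined role plus governing verb feature are illustrative approximations of the feature set described above, not the system's real configuration.

```python
# Sketch of a CRF event recognizer with token and semantic-role features.
# Assumes the third-party package sklearn-crfsuite is installed.
import sklearn_crfsuite

def token_features(sent, i):
    word, pos, role, verb = sent[i]
    feats = {
        "word.lower": word.lower(),
        "pos": pos,
        "role": role,                  # semantic role label, e.g. A0, A1, AM-TMP
        "gov.verb": verb,              # lemma of the governing verb
        "role+verb": f"{role}|{verb}"  # combined feature: A0-of-X differs from A0-of-Y
    }
    if i > 0:
        feats["prev.pos"] = sent[i - 1][1]
    return feats

def sent2features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# (word, POS, semantic role, governing verb) -- toy annotation
train_sent = [("The", "DT", "A1", "resign"), ("minister", "NN", "A1", "resign"),
              ("resigned", "VBD", "V", "resign"), ("yesterday", "NN", "AM-TMP", "resign")]
train_labels = ["O", "O", "B-EVENT", "O"]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([sent2features(train_sent)], [train_labels])
print(crf.predict([sent2features(train_sent)]))
```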
As in TempEval2 TempEval3 BIBREF151 participants took part in a task where they had to determine the extent of the events in a text as defined by the TimeML EVENT tag In addition systems may determine the value of the features CLASS TENSE ASPECT POLARITY MODALITY and also identify if the event is a main event or not The main attribute to annotate is CLASS The TempEval3 dataset was mostly automatically generated using a temporal merging system The halfmillion token text corpus from English Gigaword2 was automatically annotated using TIPSem TIPSemB BIBREF140 and TRIOS BIBREF152 These systems were retrained on the TimeBank and AQUAINT corpus using the TimeML temporal relation set The outputs of these three stateoftheart systems were merged using a merging algorithm BIBREF151 The dataset used comprised about 500K tokens of silver standard data and about 100K tokens of gold standard data for training compared to the corpus of roughly 50K tokens used in TempEval 1 and 2 There were seven participants and all the participants except one used machine learning approaches The top performing system was ATT1 BIBREF153 with precision 8144 recall 8067 and F1 of 8105 for event recognition and 7188 for event classification Close behind was the ATT2 system BIBREF153 with precision recall and F1 of 8102 8081 and 8092 for event recognition respectively and 7110 for event classification Both systems used MaxEnt classifiers Different sets of features obviously impact the performance of event recognition and classification BIBREF154 BIBREF155 BIBREF156 In particular BIBREF157 also examined performance based on different sizes of ngrams on a small scale n13 Inspired by such work in building the ATT systems the creators intended to systematically investigate the performance of various models and for each task they trained twelve models exploring these two dimensions three of which were submitted for TempEval3 and these three performed among the top ten in the TempEval3 Competition The ATT1 models include lexical syntactic and semantic features ATT2 models include only lexical and syntactic features and ATT3 models include only lexical features ie words They experimented with context windows of 0 1 3 and 7 words preceding and following the token to be labeled For each window size they trained ATT1 ATT2 and ATT3 models The ATT1 models had 18 basic features per token in the context window for up to 15 tokens so up to 270 basic features for each token to be labeled The ATT2 models had 16 basic features per token in the context window so up to 240 basic features for each token to be labeled The ATT3 models had just 1 basic feature per token in the context window so up to 15 basic features for each token to be labeled For event extraction and classification and event feature classification they used the efficient binary MaxEnt classifier for multiclass classification available in the machine learning toolkit LLAMA BIBREF158 They also used LLAMAs preprocessor to build unigram bigram and trigram extended features from basic features For event and time expression extraction they trained BIO classifiers It was found that the absence of semantic features causes only small changes in F1 The absence of syntactic features causes F1 to drop slightly less than 25 for all but the smallest window size with recall decreasing while precision improves somewhat It was surprising that the ATT3 models that use words only performed well with precision recall and F1 of 8195 7557 and 7863 for event recognition and 6955 F1 for event classification It is also surprising that the words only models with window sizes of 3 and 7 performed as well as the models with a window size of 15 These results are promising for big data text analytics where there may not be time to do heavy preprocessing of input text or to train large models
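The window size experiments described above can be sketched as follows, with scikit-learn's LogisticRegression standing in for the LLAMA MaxEnt classifier and a two sentence toy corpus; only the single word per context position feature of the ATT3 style models is imitated here, so the numbers printed are not meaningful beyond showing the mechanics.

```python
# Sketch of "words only" BIO models with varying context windows, using
# LogisticRegression as a stand-in for a MaxEnt classifier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def window_features(tokens, i, half_window):
    feats = {}
    for off in range(-half_window, half_window + 1):
        j = i + off
        feats[f"w[{off}]"] = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
    return feats

# Toy BIO-labelled sentences (event anchors marked B-EVENT).
sents = [(["An", "explosion", "rocked", "the", "city"], ["O", "B-EVENT", "B-EVENT", "O", "O"]),
         (["The", "talks", "ended", "on", "Friday"],   ["O", "B-EVENT", "B-EVENT", "O", "O"])]

for half_window in (0, 1, 3, 7):   # the window sizes explored by the ATT systems
    X = [window_features(toks, i, half_window) for toks, _ in sents for i in range(len(toks))]
    y = [lab for _, labs in sents for lab in labs]
    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    print(f"window +/-{half_window}: training accuracy {model.score(X, y):.2f}")
```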
We have already discussed several approaches to extraction of events represented by the TimeML representation Extracting events that use other representations is not very different but other representations have existed and still exist and therefore we briefly present some such attempts Some of these predate the time TimeML became popular For example the various Message Understanding Conferences MUCs seven of which were organized by DARPA from 1987 to 1997 asked participants to extract a small number of relations and events For instance MUC7 the last one called for the extraction of 3 relations personemployer makerproduct and organizationlocation and 1 event spacecraft launches The MUC7 and ACE events did not attempt to cover all events but a limited number of prespecified event types or classes that participants needed to detect during a contest period based on which the contestants submit papers for publication The number and the type of arguments covered are also limited and are prespecified before the competitions start BIBREF159 discuss a relation and event extraction system covering areas such as political financial business military and liferelated topics The system consists of tagging modules a coreference resolution module and a template generation module They store the events generated in MUC7 BIBREF160 format which is not unlike the ACE format Events are extracted along with their event participants eg who did what to whom when and where For example for a BUYING event REES extracts the buyer the artifact the seller and the time and location of the BUYING event REES covers 61 types of events There are 39 types of relations The tagging component consists of three modules NameTagger NPTagger and EventTagger Each module relies on the same patternbased extraction engine but uses different sets of patterns The NameTagger recognizes names of people organizations places and artifacts only vehicles in the implemented system The NPTagger then takes the output of the NameTagger and first recognizes nonrecursive Base Noun Phrases BNP BIBREF143 and then complex NPs for only the four main semantic types of NPs ie Person Organization Location and Artifact vehicle drug and weapon The EventTagger recognizes events applying its lexicondriven syntacticallybased generic patterns REES uses a declarative lexicondriven approach This approach requires a lexicon entry for each eventdenoting word which is generally a verb The lexicon entry specifies the syntactic and semantic restrictions on the verbs arguments After the tagging phase REES sends the output through a rulebased coreference resolution module that resolves definite noun phrases of Organization Person and Location types and singular personal pronouns REES outputs the extracted information in the form of either MUCstyle templates or XML One of the challenges of event extraction is to be able to recognize and merge those event descriptions which refer to the same event The Template Generation module uses a set of declarative customizable rules to merge coreferring events into a single event The systems recall precision and Fmeasure scores were reported for the training set 200 texts and the blind set 208 texts drawn from about a dozen news sources On the training set the system achieved an Fmeasure of 6475 for event extraction and 7535 for relation extraction while the blind set Fmeasure for 31 types of relations was 7395 As seen earlier in Subsection ACE Events the way ACE events are specified they have a lot of details that need to be extracted BIBREF161 follows several steps to extract events and uses machine learning algorithms at every step The steps are preprocessing of text data identifying anchors assigning event types extracting arguments identifying attributes of events such as modality polarity genericity and tense and finally identifying coreferent mentions of the same individuated event In other words Ahn attempts to cover all the steps sequentially making the simplifying assumption that they are unrelated to each other A single place in a textual document which may be considered the primary place of reference or discussion about an event is called the event anchor Ahn treats finding the anchor for an event within a document as a word classification task using a twostage classification process He uses a binary classifier to classify a word as being an event anchor or not He then classifies those identified as event anchors into one of the event classes Ahn used one classifier for binary classification and then another classifier to classify only the positive instances Ahn treats identifying event arguments as a pair classification task Each event mention is paired with each of the entity time and value mentions occurring in the same sentence to form a single classification instance There were 35 role types in the ACE 2006 task but no event type allows arguments of all types Each event type had its own set of allowable roles The classification experiment run was a multiclass classification where a separate multiclass classifier was used for each event type Ahn trains a separate classifier for each attribute Genericity modality and polarity are each binary classification tasks while tense is a multiclass task For event coreference Ahn follows the approach given in BIBREF162 Each event mention in a document is
paired with every other event mention and a classifier assigns to each pair of mentions the probability that the paired mentions corefer These probabilities are used in a lefttoright entity linking algorithm in which each mention is compared with all alreadyestablished events ie event mention clusters to determine whether it should be added to an existing event or start a new one Ahn experimented with various combinations of a maximum entropy classifier MegaM BIBREF163 and a memorybased nearest neighbor classifier called TIMBL BIBREF164 for the various tasks The ACE specification provided a way to measure the performance of an event extraction system The evaluation called ACE value is obtained by scoring each of the component tasks individually and then obtaining a normalized summary value Overall using the best learned classifiers for the various subtasks they achieve an ACE value score of 223 where the maximum score is 100 The value is low but other systems at the time had comparable performance BIBREF165 describe an approach to classify sentences in a document as specifying one or more events from a certain ACE 2006 class They classify each sentence in a document as containing an instance of a certain type or not Unlike BIBREF161 they are not interested in identifying arguments or any additional processing Also unlike Ahn who classifies each word as possibly being an event anchor for a specific type of ACE event Naughton et al perform a classification of each sentence in a document as being an onevent sentence or an offevent sentence An onevent sentence is a sentence that contains one or more instances of the target event type An offevent sentence is a sentence that does not contain any instances of the target event type They use several approaches to classify a sentence as onevent or offevent These include the following SVMbased machine learning BIBREF166 language modeling approaches using count smoothing and a manual approach which looks for Wordnet synonyms or hypernyms of certain trigger words in a sentence Naughton et al found that 1 use of a large number of features to start but then reduction of these features using information gain and 2 use of SVM produces the best results although all versions of SVM ie with all features with no reduction just the terms without complex features or a selection of terms and other features all work very well A surprising finding was that the manual triggerbased classification approach worked almost as well as the SVM based approaches When an event is mentioned in several places within a document finding which references are to the same event is called determining event coreference These are coreferents to the event Determining when two event mentions in text talk about the same event or corefer is a difficult problem As BIBREF167 point out that the events may be actual occurrences or hypothetical events BIBREF162 present a statistical languageindependent framework for identifying and tracking named nominal and pronominal references to entities within unrestricted text documents and chaining them into groups corresponding to each logical entity present in the text The model can use arbitrary feature types integrating a variety of lexical syntactic and semantic features The mention detection model also uses feature streams derived from different named entity classifiers For mention detection the approach used is based on a loglinear Maximum Entropy classifier BIBREF168 and a linear Robust Risk Minimization classifier BIBREF169 Then they use a 
MaxEnt model for predicting whether a mention should or should not be linked to an existing entity and to build entity chains Both classifiers can integrate arbitrary types of information and are converted into suitable for sequence classification for both tasks For entity tracking the process works from left to right It starts with an initial entity consisting of the first mention of a document and the next mention is processed by either linking it with one of the existing entities or starting a new entity Atomic features used by the entity linking algorithm include string match context mention count distance between the two mentions in words and sentences editing distance properties of pronouns such gender number and reflexiveness The best combination of features was able to obtain slightly more than 73 F1 value using both RRM and MaxEnt algorithms for mention detection Entity tracking was evaluated in terms of what is called the ACE value A gauge of the performance of an EDT system is the ACE value a measure developed especially for this purpose It estimates the normalized weighted cost of detection of specificonly entities in terms of misses false alarms and substitution errors Florian et al achieved an ACE value of 734 out of 100 for the MaxEnt classifier and 697 for the RRM classifier BIBREF161 follows the approach by BIBREF162 for entity coreference determination He uses a binary classifier to determine if any two event mentions in the document refer to the same event Thus he pairs each event with every other event and the classifier assigns each pair a probability that they are the same The probability is used with entity linkingmatching algorithm to determine event coreference Event coreferencing requires event mentions to be clustered to event clusters Event mentions in a cluster are the same event The system described here obtained an ACE value of between 8891 where the maximum ACE value is 100 BIBREF161 uses the following features for event coreference determination Let the candidate be the earlier event mention and the anaphor be the later mention The anchors for the candidate and the anaphor the full or original form and also in lowercase and POS tag Type of the candidate event and the anaphor event Depth of candidate anchor word in parse tree Distance between the candidate and anchor measured in sentences Number heads and roles of shared arguments etc Supervised approaches to solving event coreference use linguistic properties to decide if a pair of event mentions is coreferential BIBREF170 BIBREF171 BIBREF161 BIBREF172 These models depend on labeled training data and annotating a large corpus with event coreference information requires substantial manual effort In addition since these models make local pairwise decisions they are unable to capture a global event distribution at topic or document collection level BIBREF173 present how nonparametric Bayesian models can be applied to an opendomain event coreference task in an unsupervised manner The first model extends the hierarchical Dirichlet process BIBREF174 to take into account additional properties associated with event mentions The second model overcomes some of the limitations of the first model and uses the infinite factorial hidden Markov model BIBREF175 coupled to the infinite hidden Markov model BIBREF176 in order to consider a potentially infinite number of features associated with observable objects which are event mentions here perform an automatic selection of the most salient features and capture the 
structural dependencies of observable objects or event mentions at the discourse level Furthermore both models can work with a potentially infinite number of categorical outcomes or events in this case Two event mentions corefer if they have the same event properties and share the same event participants To find coreferring event mentions Bejan and Harabagiu describe words that may be possible event mentions with lexical features class features such as POS and event classes such as occurrence state and action BIBREF119 Wordnet features semantic features obtained by a semantic parse BIBREF177 and the predicate argument structures encoded in PropBank annotations BIBREF89 as well as semantic annotations encoded in the FrameNet corpus BIBREF79 The first model represents each event mention by a finite number of feature types and is also inspired by the Bayesian model proposed by BIBREF178 In this model a Dirichlet process DP BIBREF179 is associated with each document and each mixture component ie event is shared across documents In the process of generating an event mention an event index z is first sampled by using a mechanism that facilitates sampling from a prior for infinite mixture models called the Chinese restaurant franchise CRF representation as reported in BIBREF174 The second model they use is called the iHMMiFHMM model infinite hidden Markov modelinfinite factorial hidden Markov model The iFHMM framework uses the Markov Indian buffet process mIBP BIBREF175 in order to represent each object as a sparse subset of a potentially unbounded set of latent features BIBREF180 BIBREF181 Specifically the mIBP defines a distribution over an unbounded set of binary Markov chains where each chain can be associated with a binary latent feature that evolves over time according to Markov dynamics The iFHMM allows a flexible representation of the latent structure by letting the number of parallel Markov chains be learned from data but it cannot be used where the number of clustering components is infinite An iHMM represents a nonparametric extension of the hidden Markov model HMM BIBREF182 that allows performing inference on an infinite number of states To further increase the representational power for modeling discrete time series data they develop a nonparametric extension that combines the best of the two models and lets the two parameters M and K be learned from data Each step in the new iHMMiFHMM generative process is performed in two phases i the latent feature variables from the iFHMM framework are sampled using the mIBP mechanism and ii the features sampled so far which become observable during this second phase are used in an adapted version of the beam sampling algorithm BIBREF183 to infer the clustering components ie latent events They report results in terms of recall R precision P and Fscore F by employing the mentionbased B3 metric BIBREF184 the entitybased CEAF metric BIBREF185 and the pairwise F1 PW metric Their experiments show that both of these models work well when the feature and cluster numbers are treated as free parameters and the selection of feature values is performed automatically
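As a very loose illustration of the generative clustering idea, the sketch below assigns event mentions left to right either to an existing cluster, with weight proportional to cluster size times a feature compatibility score, echoing the Chinese restaurant style prior, or to a new cluster with weight alpha. This is a deterministic, non Bayesian caricature: the real models sample with Gibbs or beam sampling and integrate over parameters, and the lemma and frame features and the alpha value used here are invented.

```python
# Simplified Chinese-restaurant-flavoured clustering of event mentions.
def compatibility(mention, cluster):
    """Fraction of clustered mentions sharing the lemma or the semantic frame."""
    hits = sum(mention["lemma"] == m["lemma"] or mention["frame"] == m["frame"]
               for m in cluster)
    return hits / len(cluster)

def cluster_mentions(mentions, alpha=0.5):
    clusters = []
    for mention in mentions:
        scores = [len(c) * compatibility(mention, c) for c in clusters]
        scores.append(alpha)               # weight of opening a brand-new cluster
        best = scores.index(max(scores))
        if best == len(clusters):
            clusters.append([mention])
        else:
            clusters[best].append(mention)
    return clusters

mentions = [{"lemma": "attack",   "frame": "Attack"},
            {"lemma": "bombing",  "frame": "Attack"},
            {"lemma": "attack",   "frame": "Attack"},
            {"lemma": "election", "frame": "Change_of_leadership"}]
print([[m["lemma"] for m in c] for c in cluster_mentions(mentions)])
```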
BIBREF167 argue that events represent complex phenomena and can therefore corefer fully being identical like other researchers have discussed or corefer partially being quasiidentical or only partially identical Two event mentions fully corefer if their activity event or state representation is identical in terms of all features used eg agent location or time Two event mentions are quasiidentical if they partially corefer ie most features are the same but there may be additional details to one or the other When two events fully corefer Hovy et al state they may be lexically identical ie the same senses of the same word eg destroy and destruction synonymous words one mention is a wider reading of the other eg The attack took place yesterday and The bombing killed four people one mention is a paraphrase of the other with possibly some syntactic differences eg He went to Boston and He came to Boston and one mention deictically refers to the other eg the party and that event Quasiidentity or partial coreference may arise in two ways membership identity or subevent identity Membership identity occurs when one mention say A is a set of multiple instances of the same type of event and the other mention say B is one of the individual events in A eg I attended three parties last week The first one was the best Subevent identity is found when one mention say A is a stereotypical sequence or script of events whereas the other mention say B is one of the actions or events within the script eg The family ate at the restaurant The dad paid the waitress at the end Hovy et al attempt to build a corpus containing event coreference links with high quality annotations ie annotations with high interannotator agreement to be useful for machine learning They have created two corpora to assist with a project on automated deep reading of texts One corpus is in the domain of violent events eg bombings killings and wars and the other one containing texts about the lives of famous people In both of these corpora they have annotated a limited number of articles with full and partial coreferences BIBREF186 claims that performing event coreference with high accuracy requires deep understanding of the text and that statisticallybased methods both supervised and unsupervised do not perform well He claims that this is the case because it is absolutely necessary to identify arguments of an event reliably before event coreferences can be found Arguments are difficult to identify because many are implicit and linguistically unexpressed Successful event coreference identification needs determination of spatiotemporal anchoring and locations in time and space are also very often implicit The system he builds uses a linguistically based semantic module which has a number of different submodules which take care of SpatioTemporal Reasoning Discourse Level Anaphora Resolution and determining Topic Hierarchy The coreference algorithm works as follows for each possible referent it checks all possible coreference links at first using only the semantic features which are wordform and lemma identity then semantic similarity measured on the basis of a number of similarity criteria which are lexically based The system searches WordNet synsets and assigns a score according to whether the possible referents are directly contained in the same synset or not A different score is assigned if their relation can be inferred from the hierarchy Other computational lexical resources they use include FrameNet and the Frames hierarchy SumoMilo and its semantic classification After collecting all possible coreferential relations the system filters out those links that are inconsistent or incompatible Argument structure and spatiotemporal relations are computed along with dependence relations and temporal logical relations as computed using an adaptation of Allens algorithm The system also computes semantic similarity where high values are
preferred The paper does not give any results to support the initial hypothesis although the ideas are interesting BIBREF187 use granularity in computing event coreference The intuition is that an event with a longer duration that happens on a bigger area and with multiple particpants for instance a war between Russia and Ukraine might be related to but will probably not fully corefer with a lower level event of shorter duration and with single participants involved eg A Russian soldier has shot dead a Ukrainian naval officer Coreference between mentions of two events is determined by computing compatibility of contents of event attributes The attributes used are event trigger time location human and nonhuman participant slots BIBREF188 Granularity size is mentioned in terms of durations of event actions BIBREF189 and granularity levels of event participants time and locations Granularity is given in terms of partonomic relations or through the partof relation between entities and events using the taxonomy of meronymic relations by BIBREF190 Granularity levels of the human participant slot are contained within Winstons et al MemberCollection relations The temporal granularity levels make part of Winstons PortionMass relationships and locational levels are in line with PlaceArea relations in Winstons taxonomy Cybulska and Vossen experimented with a decisiontree supervised pairwise binary classifier to determine coreference of pairs of event mentions They also ran experiments with a linear SVM and a multinomial Naive Bayes classifier but the decisiontree classifier outperformed both of them For the experiments Cybulska and Vossen use the ECB dataset BIBREF191 The ECB corpus contains a new corpus component consisting of 502 texts describing different instances of event types They provide results in terms of several metrics recall precision and Fscore MUC BIBREF192 B3 BIBREF184 mentionbased CEAF BIBREF185 BLANC BIBREF193 and CoNLL F1 BIBREF194 and find that the introduction of the granularity concept into similarity computation improves results for every metric Researchers are interested in extracting information from the huge amount of biomedical literature published on a regular basis Of course one aspect of information extraction is event extraction the focus of this paper In the biomedical context an event extraction system tries to extract details of bimolecular interactions among biomedical entities such as proteins and genes and the processes they take part in as described in terms of textual documents Manually annotated corpora are used to train machine learning techniques and evaluate event extraction techniques There have been several workshops on biomedical natural language processing We focus on the BioNLP Shared Tasks in recent years that had competitions on event extraction There have been three BioNLP Shared Task competitions so far 2009 2011 and 2013 The BioNLP 2009 Shared Task BIBREF195 was based on the GENIA corpus BIBREF196 which contains PubMed abstracts of articles on transcription factors in human blood cells There was a second BioNLP Shared Task competition organized in 2011 to measure the advances in approaches and associated results BIBREF197 The third BioNLP ST was held in 2013 We discuss some notable systems from BioNLP ST 2011 and 2013 Before the BioNLP Shared Tasks event extraction in the biomedical domain usually classified each pair of named entities usually protein names cooccurring in the text as interacting or not BioNLP Shared Tasks extended such an approach 
by adding relations such as direction type and nesting An event defines the type of interaction such as phosphorylation and is usually marked in the text with a trigger word eg phosphorylates describing the interaction This word forms the core of the event description A directed event has roles that have inherent directionality such as cause or theme the agent or target of the biological process In addition events can act as arguments of other events creating complex nested structures For example in the sentence Stat3 phosphorylation is regulated by Vav a phosphorylationevent is the argument of the regulationevent The BioNLP Shared Tasks provide task definitions benchmark data and evaluations and participants compete by developing systems to perform the specified tasks The theme of BioNLPST 2011 was a generalization of the 2009 contest generalized in three ways text types event types and subject domains The 2011 eventrelated tasks were arranged in four tracks GENIA task GE BIBREF197 Epigenetics and Posttranslational Modifications EPI BIBREF198 Infectious Diseases ID BIBREF199 and the Bacteria Track BIBREF200 BIBREF201 Of the four eventrelated shared tasks in BioNLP 2011 the first three were related to event extraction The Genia task was focused on the domain of transcription factors in human blood cell Trascription is a complex but just the first step in the process in which the instructions contained in the DNA in the nucleus of a cell are used to produce proteins that control most life processes Transcription factors are proteins that control the transcription process The EPI task was focused on events related to epigenetics dealing with protein and DNA modifications with 14 new event types including major protein modification types and their reverse reactions Epigenesis refers to the development of a plant or animal from a seed spore or egg through a sequence of steps in which cells differentiate and organs form The EPI task was designed toward pathway extraction and curation of domain databases BIBREF202 BIBREF203 A biological pathway refers to a sequence of actions among molecules in a cell that leads to a certain product or a change in the cell The ID task was focused on extraction of events relevant to biomolecular mechanisms of infectious diseases from full length publications Tasks other than ID focused on abstracts only In this paper we discuss the systems and approaches for only the 2011 GE Task This is because several of the winning systems for the GE Task did well in the other two relevant tasks as well The Genia Task is described in Table 7 The table shows for each event type the primary and secondary arguments to be extracted For example a phosphorylation event is primarily extracted with the protein to be phosphorylated which is the addition of a phosphate group to a protein or other organic molecule As secondary information the specific site to be phosphorylated may be extracted From a computational viewpoint the event types represent different levels of complexity When only primary arguments are considered the first five event types in Table 7 are classified as simple events requiring only unary arguments The binding and regulation types are more complex Binding requires the detection of an arbitrary number of arguments and Regulation requires detection of recursive event structure Consider the sentence In this study we hypothesized that the phosphorylation of TRAF2 inhibits binding to the CD40 cytoplasmic domain Here there are two protein entity names TRAF2 and CD40 The 
word phosphorylation refers to an event and this string is a trigger word Thus the goal of the GE task was to identify a structure like the ones in Tables 8 and 9 In the tables Ti represents a trigger word and Ei represents an event associated with the corresponding trigger word There are three events E1 is the phosphorylation event E2 is the binding event and E3 is the negative regulation event For each trigger word we see the starting and ending character positions in the entire string For each event we see the participants in it The second task identifies an additional site argument
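The trigger and event structures of Tables 8 and 9 follow the shared task standoff convention of Ti trigger lines and Ei event lines. The sketch below parses such annotation for the phosphorylation, binding and negative regulation example into nested event structures; the character offsets are invented and only the general shape of the format is intended to be faithful.

```python
# Sketch of reading BioNLP-ST-style standoff annotation into event structures.
RAW = """T1\tProtein 44 49\tTRAF2
T2\tProtein 66 70\tCD40
T3\tPhosphorylation 26 41\tphosphorylation
T4\tBinding 51 58\tbinding
T5\tNegative_regulation 42 50\tinhibits
E1\tPhosphorylation:T3 Theme:T1
E2\tBinding:T4 Theme:T1 Theme2:T2
E3\tNegative_regulation:T5 Theme:E2 Cause:E1"""

def parse_standoff(raw):
    triggers, events = {}, {}
    for line in raw.splitlines():
        fields = line.split("\t")
        if fields[0].startswith("T"):
            etype, start, end = fields[1].split()
            triggers[fields[0]] = {"type": etype, "span": (int(start), int(end)),
                                   "text": fields[2]}
        elif fields[0].startswith("E"):
            parts = fields[1].split()
            etype, trig = parts[0].split(":")
            args = dict(p.split(":") for p in parts[1:])
            events[fields[0]] = {"type": etype, "trigger": trig, "args": args}
    return triggers, events

triggers, events = parse_standoff(RAW)
print(events["E3"])   # E3's Theme points at event E2, showing the nested structure
```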
Table 10 shows the best results for various tasks in the BioNLP 2011 contests BIBREF197 note an improvement of 10 over the basic GE task in 2011 Task GEa compared to 2009 The results of the GE tasks show that automatic extraction of simple events those with unary arguments eg gene expression localization and phosphorylation can be achieved at about 70 in Fscore but the extraction of complex events eg binding and regulation is very challenging with only 40 performance level The GE and ID results show that generalization to full papers is possible with just a small loss in performance The results of phosphorylation events in GE and EPI are similar GEp vs EPIp which leads BIBREF197 to conclude that removal of the GE domain specificity does not reduce event extraction performance by much EPIc results indicate that there are challenges to extracting similar event types that need to be overcome EPIf results indicate that there are difficult challenges in extracting additional arguments The complexity of the ID task is similar to that of the GE task this shows up in the final results also indicating that it is possible to generalize to new subject domains and new argument entity types Below we provide a brief description of some of the approaches to biomedical event extraction from the BioNLP 2011 contests The team that won the GE Task was the FAUST system BIBREF204 followed by the UMass system BIBREF205 then the UTurku system BIBREF206 The performance of these three systems on the various tasks is given in Table 11 In addition we have the Stanford system in the table because it performed fairly well on the tasks The UMass system BIBREF205 looks at a sentence as having an event structure and then projects it onto a labeled graph See Figure 6 for a target event structure and the projected graph for the sentence fragment Phosphorylation of TRAF2 inhibits binding to CD40 The system searches for a structure that connects the event and its participating entities and imposes certain constraints on the structure Thus the UMass system treats the search for such a structure as an optimization problem To formulate this optimization problem the system represents the structure in terms of a set of binary variables inspired by the work of BIBREF207 BIBREF208 These binary variables are based on the projection of the events to the labeled graph An example of a binary variable is ailr which indicates that between positions i and l in the sentence there is an edge labeled r from a set of possible edge labels R Another such binary variable is tipq which indicates that at position i there is a binding event with arguments p and q Given a number of such variables it is possible to write an objective function to optimize in order to obtain events and entity bindings The system decomposes the biomedical event extraction task into three subtasks a event triggers and outgoing edges on arguments b event triggers and incoming edges on arguments and c proteinprotein bindings The system obtains an objective function for each of the subtasks It solves the three optimization problems one by one in a loop till no changes take place or up to a certain number of iterations The approach uses optimization by dual decomposition BIBREF209 BIBREF210 since the dual of the original optimization problem is solved The Stanford system BIBREF211 exploits the observation that event structures bear a close relation to dependency graphs BIBREF212 They cast biomolecular events in terms of these structures which are pseudosyntactic in nature They claim that standard parsing tools such as maximumspanning tree parsers and parse rerankers can be applied to perform event extraction with minimum domain specific training They use an offtheshelf dependency parser MSTParser BIBREF213 BIBREF214 but extend it with eventspecific features Their approach requires conversion to and from dependency trees at the beginning and at the end The features in the MSTParser are quite local ie able to examine a portion of each event at a time but the decoding necessary can be performed globally allowing the dependency parser some tradeoffs Event parsing is performed using three modules 1 anchor detection to identify and label event anchors 2 event parsing to form candidate event structures by linking entities and event anchors and 3 event reranking to select the best candidate event structure First they parse the sentences with a reranking parser BIBREF215 with the biomedical parsing model from BIBREF216 using the set of Stanford dependencies BIBREF217 After the parsing they perform anchor detection using a technique inspired by techniques for named entity recognition to label each token with an event type or none using a logistic regression classifier The classifier uses features inspired by BIBREF208 They change a parameter to obtain high recall to overgenerate event anchors Multiword event anchors are reduced to their syntactic head The event anchors and the included entities become a reduced sentence which is input to the event parser Thus the event parser gets words that are believed to directly take part in the events This stage uses the MSTParser with additional event parsing features The dependency trees are decoded and converted back to event structures Finally for event reranking the system gets an n best list of event structures from each decoder in the previous step of event parsing The reranker uses global features of an event structure to rescore and output the highest scoring structure The reranking approach is based on parse reranking BIBREF218 but uses features of event structures instead of syntactic constituency structure They use the cvlm estimator BIBREF215 when learning weights for the reranking model Since the reranker can work with outputs of multiple decoders they use it as an ensemble technique as in BIBREF219 The FAUST system BIBREF204 shows that using a straightforward model combination strategy with two competitive systems the UMass system BIBREF205 and the Stanford system BIBREF211 just described can produce a new system with substantially higher accuracy The new system uses the framework of stacking BIBREF220 It does this by including the predictions of the Stanford system into the UMass system simply as a feature Using this simple model of stacking the FAUST system was able to obtain first place in three tasks out of the four in which it participated
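The stacking step used by FAUST can be sketched in a few lines: the prediction of one system is appended as an extra feature column before training the other. In the sketch below both systems are replaced by scikit-learn logistic regressions over random toy features, so it only shows the mechanics; the real level one model is the UMass dual decomposition system, and in practice the level zero predictions would be produced by cross validation so the combiner is not trained on predictions made from its own training data.

```python
# Minimal sketch of stacking: one model's prediction becomes another's feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))             # toy base features of candidate events
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy gold labels: is this a real event edge?

# Level-0 system: a stand-in for the Stanford parser-based model.
stanford = LogisticRegression().fit(X[:, :2], y)
stanford_pred = stanford.predict(X[:, :2]).reshape(-1, 1)

# Level-1 system: the "UMass"-style model sees its own features plus the
# level-0 prediction as one additional column (this is the stacking step).
stacked_X = np.hstack([X, stanford_pred])
umass_stacked = LogisticRegression().fit(stacked_X, y)

print("stacked training accuracy:", umass_stacked.score(stacked_X, y))
```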
The Turku Event Extraction System BIBREF206 BIBREF221 can be easily adapted to different event schemes following the theme of event generalization in BioNLP 2011 The system took part in eight tasks in BioNLP 2011 and demonstrated the best performance in four of them The Turku system divides event extraction into three main steps i Perform named entity recognition in the sentence ii Predict argument relations between entities and iii Finally separate entityargument sets into individual events The Turku system uses a graph notation with trigger and proteingene entities as nodes and relations eg theme as edges In particular an event in the graph representation is a trigger node along with its outgoing edges The steps are shown in Figure 7 The Turku system uses Support Vector Machines BIBREF222 BIBREF223 at various stages to perform each of the subtasks To use an SVM classifier one needs to convert text into features understood by the classifier The Turku system performs a number of analyses on the sentences to obtain features which are mostly binary The features are categorized into token features eg Porter stem BIBREF224 Penn Treebank partofspeech tags BIBREF102 character bi and trigrams presence of punctuation or numeric characters sentence features eg the number of named entities in the sentence dependency chains up to a depth of three to define the context of the words dependency ngrams joining a token with two flanking dependencies as well as each dependency with two flanking tokens trigger features eg the trigger word being a gene or a protein and external features eg Wordnet hypernyms the presence of a word in a list of key terms Applicable combinations of these features are then used by the three steps in event detection trigger detection edge detection and unmerging Trigger words are detected by classifying each token as negative or as one of the positive trigger classes using SVMs Sometimes several triggers overlap in which case a merged class eg phosphorylationregulation is used After trigger prediction triggers of merged classes are split into their component classes Edge detection is used to predict event arguments or triggerless events and relations all of which are defined as edges in the graph representation The edge detector defines one example per direction for each pair of entities in the sentence and uses the SVM classifier to classify the examples as negatives or as belonging to one of the positive classes When edges are predicted between these nodes the result is a merged graph where overlapping events are merged into a single node and its set of outgoing edges To produce the final events these merged nodes need to be pulled apart into valid trigger and argument combinations This unmerging is also performed using the SVM classifier Speculation and negation are detected independently with binary classification of trigger nodes using SVMs The features used are mostly the same as for trigger detection with the addition of a list of speculationrelated words With the explosive expansion of the Internet during the past twenty years the volume of socially generated text has skyrocketed Socially generated text includes blogs and microblogs For example Twitter started in 2006 has become a social phenomenon It allows individuals with accounts to post short messages that are up to 140 characters long Currently more than 340 million tweets are sent out every day While a majority of posts are conversational or not particularly meaningful about 36 of the posts concern topics of mainstream news Twitter has been credited with providing the most current news about many important events before
traditional media such as the attacks in Mumbai in November 2008 Twitter also played a prominent role in the unfolding of the troubles in Iran in 2009 subsequent to a disputed election and the socalled Twitter Revolutions in Tunisia and Egypt in 201011 Most early work on event extraction of information from documents found on the Internet has focussed on news articles BIBREF225 BIBREF226 BIBREF227 However as noted earlier social networking sites such as Twitter and Facebook have become important complimentary sources of such information Individual tweets like SMS messages are usually short and selfcontained and therefore are not composed of complex discourse structures as is the case with texts containing narratives However extracting structured representation of events from short or informal texts is also challenging because most tweets are about mundane things without any news value and of interest only to the immediate social network Individual tweets are also very terse without much context or content In addition since Twitter users can talk about any topic it is not clear a priori what event types may be appropriate for extraction The architecture of the system called TwiCal for event extraction BIBREF228 from Twitter messages is given in Figure 8 Given a stream of raw tweets TwiCal extract events with associated named entities and times of occurrence First the tweets are POS tagged using a tagger BIBREF228 especially trained with Twitter data Then named entities are recognized BIBREF229 using a recognizer trained with Twitter data as well After this phrases that mention events or event triggers or event phrases or just events are extracted using supervised learning BIBREF228 annotated 1000 tweets with event phrases following guidelines for annotation of EVENT tags in Timebank BIBREF119 The system recognizes event triggers as a sequence labeling task using Conditional Random Fields BIBREF147 It uses a contextual dictionary orthographic features features based on the Twittertuned POS tagger and dictionaries of event terms gathered from WordNet BIBREF121 Once a large number of events have been extracted by this CRF learner TwiCal categorizes these events into types using an unsupervised approach based on latent variable models inspired by work on modeling selectional preferences BIBREF230 BIBREF231 BIBREF232 BIBREF233 and unsupervised information extraction BIBREF234 BIBREF225 BIBREF235 This automatic discovery of event types is similar to topic modeling where one automatically identifies the extant topics in a corpus of text documents The automatically discovered types topics are quickly inspected by a human effort to filter out incoherent ones and the rest are annotated with informative labels Examples of event types discovered along with top event phrases and top entities are given in Table 12 The resulting set of types are applied to categorize millions of extracted events without the use of any manually annotated examples For inference the system uses collapsed Gibbs sampling BIBREF236 and prediction is performed using a streaming approach to inference BIBREF237 To resolve temporal expressions TwiCal uses TempEx BIBREF238 which takes as input a reference date some text and POS tags and marks temporal expressions with unambiguous calendar references Finally the system measures the strength of association between each named entity and date based on the number of tweets they cooccur in in order to determine if the event is significant Examples of events extracted by TwiCal are given 
in Table 13 Each event is a 4tuple including a named entity event phrase calendar date and event type The TwiCal system describe above used topic modeling using latent variables as one of the several computational components it is used to capture events captured using supervised learning into types or topics BIBREF239 point out some drawbacks of using such an approach The main problem is that frequently the result generated by Latent Dirichlet Analysis LDA is difficult to interpret because it simply gives a list of words associate with the topic For example when BIBREF239 attempt to find the four most important topics using LDA based on a Twitter collection emanating from Singapore on June 16 2010 they find the topics listed in Table 14 Therefore Weng et al present another approach to detect events from a corpus of Twitter messages Their focus is on detection and therefore not on extraction of components that describe an event Event detection is based on the assumption that when an event is taking place some related words show an increase in usage In this scheme an event is represented by a number of keywords showing a burst in appearance count BIBREF240 BIBREF241 Although it is clear that tweets report events but such reports are usually overwhelmed by high flood of meaningless babbles In addition the algorithms for event detection must be scalable to handle the torrent of Twitter posts The EDCoW Event Detection with Clustering of Waveletbased Signals system builds signals for individual words by applying wavelet analysis on frequencybased raw signals of words occurring in the Twitter posts These signals capture only the bursts in the words appearance The signals are computed efficiently by wavelet analysis BIBREF242 BIBREF243 Wavelets are quickly vanishing oscillating functions and unlike sine and cosine functions used in Discrete Fourier Transformation DFT which are localized in frequency but extend infinitely in time wavelets are localized both in time and frequency Therefore wavelet transformation is able to provide precise measurements about when and to what extent bursts take place in a signal BIBREF239 claim that this makes it a better choice for event detection when building signals for individual words Wavelet transformation converts signals from time domain to timescale domain where scale can be considered the inverse of frequency Such signals also take less space for storage Thus the first thing EDCoW does is convert frequencies over time to wavelets using a sliding window interval It removes trivial words by examining signal autocorrelations The remaining words are then clustered to form events with a modularitybased graph partitioning technique which uses a scalable eigenvalue algorithm It detects events by grouping sets of words with similar patterns of burst To cluster similarities between words need to be computed It does so by using cross correlation which is a common measure of similarity between two signals BIBREF244 Cross correlation is a pairwise operation Cross correlation values among a number of signals can be represented in terms of a correlation matrix mathcal M which happens to be a symmetric sparse matrix of adjacent similarities With this graph setup event detection can be formulated as a graph partitioning problem ie to cut the graph into subgraphs Each subgraph corresponds to an event which contains a set of words with high cross correlation and also that the cross correlation between words in different subgraphs are low The quality of such partitioning is 
measured using a metric called modularity BIBREF245 BIBREF246 The modularity of a graph is defined as the sum of weights of all the edges that fall within subgraphs after partitioning minus the expected edge weight sum if the edges were placed at random The main computation task in this component is finding the largest eigenvalue and the corresponding eigenvector of the sparse symmetric modularity matrix This is solved using power iteration which is able to scale up with the increase in the number of words in the tweets BIBREF247 EDCoW requires each individual event to contain at least two words To differentiate big events from trivial ones EDCoW quantifies the events significance which depends on two factors the number of words and the crosscorrelation among the words related to the event To make EDCoW work with TwiCal to see if it improves performance the topic detection module would have to be replaced EDCoW associates fewer words with topics because it filters words away before associating them with a topic Table 15 gives a few event words obtained by EDCoW and the corresponding event description Please note that the event description was created by the authors and not the system BIBREF248 use eventbased features to represent sentences and show that their approach improves the quality of the final summaries compared to a baseline bagofwords approach Event recognition is a core task in questionanswering since the majority of web questions have been found to be related to events and situations in the world BIBREF121 For example to answer the question How many people were killed in Baghdad in March or Who was the Prime Minister of India when China and India fought their only war the questionanswering system may have to identify events across a number of documents before creating an answer Similarly when doctors take notes on a patients history or medical record the information is not always written in the order of events or in temporal order It would be helpful to gather such notes from different places and arrange them in an event ordered or temporally ordered manner extracting an event based structure of the medical record would help in understanding the medical history better Most event extraction systems at the current time process sentences in isolation and therefore events crossing sentence boundaries cannot be detected </s>
<s>[INST] A Sketch-Based System for Semantic Parsing [/INST] Open domain semantic parsing aims to map natural language utterances to structured meaning representations Recently seq2seq based approaches have achieved promising performance by structureaware networks such as sequencetoactionBIBREF0 and STAMPBIBREF1 However this kind of approach mixes up lowlevel entities predicates and highlevel structures together which loses precision at each level to some extent So the sketchbased method may be an another choice for disentangling highlevel structures from lowlevel details In this work we conduct our sketchbased approach on MSParS a large handannotated semantic dataset mapping questions to logical forms We argue there are at least two advantages to sketchbased method Firstly basic attention based seq2seq networkBIBREF2 BIBREF3 does not perform well in semantic parsing because logical forms are structured sequences and it fails to incorporate structure information of logical forms Then sequencetotreeseq2treeBIBREF4 proposes a structureaware decoder to utilize the information But its architecture also becomes much more complex Instead of using intricate decoders we can extract highlevel sketches for logical forms and classify samples into several sketch classes Logical forms of a certain sketch class have a fixed pattern which is shown in Table TABREF2 So the structure problem is finally simplified to a classification task Secondly logical forms often need to copy a span of questions Although CopynetBIBREF5 and PointerBIBREF6 implement the copy mechanism it is still difficult to achieve the expected effect But for the sketchbased method this problem becomes an individual entity labeling task which is easier than generating entities Generally speaking the seq2seq way decodes the entire meaning representation in one go while we deal with different parts at different levels of granularity just like coarse2fineBIBREF7 Although we increase the number of stages the network architecture of each stage is much simpler without sacrificing the accuracy In this way we are able to locate the errors and optimize according parts We propose to decompose the process into three stages In the first stage we deal with a sketch classification task Then we find the entities in the questions through an entity labeling task Actually we combine the two stages through the multitask model for both accuracy and efficiencyBIBREF8 The last stage is the most difficult part since the knowledge base of MSParS is not available We define question patternlogical form pattern pair and use the matching network to rank all these pairs Seq2seq based approach is one of the two methods we adopted here to help rescore on the whole We also incorporate stateofart pretrained work BertBIBREF9 in above tasks to incorporate more priori knowledge The error rate of our multitask model is lower than 2 which ensures the right sketch and entities So the last stage actually determines the accuracy to a large extent Our accuracy achieves 7742 after above three stages Seq2seq based approach and cooccurrence relationship improve the accuracy to 8686 in validation set Our final accuracy in full test set reaches 8447 And the accuracy on hard test subset has been promoted to 6308 finally which is higher than the best model on the submission list by 565 In the rest of our paper we first analyze the special features of MSParS for this task in section 2 Afterwords we discuss our system in detail in section 3 Then in section 4 we demonstrate our 
experimental setup results and analyzation Related works are mentioned in section 5 At last we make a conclusion of the whole paper and propose our future work The dataset MSParS is published by NLPCC 2019 evaluation task The whole dataset consists of 81826 samples annotated by native English speakers 80 of them are used as training set 10 of them are used as validation set while the rest is used as test set 3000 hard samples are selected from the test set Metric for this dataset is the exactly matching accuracy on both full test set and hard test subset Each sample is composed of the question the logical form the parametersentityvaluetype and question type as the Table TABREF3 demonstrates Samples are classified to 12 classes originally at a coarse level while we reclassify them at a finer level which is the basis of our sketchbased method We replace the predicate in the triple as Pi the entity in the triple as Ei and distinguish different ones with subscripts The number in superlative class and comparative class is replaced as V while the type in the triple begin with special predicate isa is replaced as T as well In this way we get the sketch of the logical form Finally we produce 15 classes of sketches We believe the features of questions highly correlate with the sketch of logical forms For instance the sketch must begin with argmore or argless if there are comparative words such as higher more and before in questions Therefore we take questions as input to classify samples to different sketch classes As the Table TABREF3 suggests entities are concatenated tokens from the question So we implement entity labeling to label every token in the questions Nonetheless cases are tough when there are more than one entities in the logical form Suppose that we have labeled E1 and E2 from the question We do not know which one we should choose to fill in the first entity slot in the sketch We solve this problem and pick out the suitable predicate simultaneously The entities in the questions are replaced by label entity with subscipts suggesting the order they appear in questions to get question patterns When it comes to logical form patterns the entities in logical forms are substituted as well while predicates are split to small tokens Table TABREF4 gives an example of these two patterns In this way we combine the entity collocations with predicates successfully Another reason for label entity used here is generalization For instance what is birth date for barack obama shares the same question pattern what is birth date for entity1 with what is birth date for donald trump The predicate used in these logical forms is msopeoplepersondateofbirth So we can draw the conclusion that the predicate for this question pattern is likely to be msopeoplepersondateofbirth If what is birth date for george bush appears in the test set we are able to find the right predicate even if we do not see george bush before Without the impact of specific entities our model learns the mapping from question patterns to logical form patterns more accurately Since we do not have a knowledge base we can only extract logical form patterns in training set And we find 9034 of logical form patterns in validation set are covered by that in training set which ensures the feasibility of our method We take question patterns paired with logical form patterns as input Then we get logical form candidates through combining sketches and entities with logical form patterns The ones with higher scores are more likely to be right The single 
sentence classification finetuned task in Bert is applied in this stage A special classification embedding CLS is added to the beginning We use the final hidden state corresponding to this token as the aggregate sequence representation for classification task denoted as Cs in mathbb Rh so the probability of class ci can be computed as where Ws in mathbb Rks times h and bs in mathbb Rks ks is the number of sketch classes here Ws bs and all the parameters of Bert are finetuned jointly to maximize the log likelihood probability of the correct label We use the single sentence tagging finetuned task in Bert here to label every token in the question whether it is an entity token that appears in the logical form as well To simplify the problem we use 3 labels for the tokens in the questions Label b represents the first token in an entity while label i for the rest ones And label o represents those tokens which are not in any entities Because of the lexical rules in Bert we also label the special token CLS at the beginning of the sentence and the special token SEP at the ending of the sentence as o The last label p is for all the padding tokens added to reach maxlength Besides some tokens in the questions are split into several smaller tokens by Bert For the split ones they are labeled as i if they are in the entities and o otherwise In this stage we use all the final hidden states denoted as D in mathbb Rh times m where m is the maxlength of the input tokens we set The hidden state is mapped into dimension ke via E WeD be where We in mathbb Rke times h and be in mathbb Rke times m ke is the number of labels here We employ the CRF on the top of the network taking E as input representations The objective is to minimize the loss of CRF layer We combine sketch classification and entity labeling to share information together which means sketches of samples can help label entities while the labeled entities can help sketch classification conversely The architecture of our model is shown in Fig FIGREF9 where the parameters of Bert model is finetuned together for two tasks Since the scale of dataset is large we can save lots of time through multitask model instead of training two different models Finally it contributes to both accuracy and efficiency In this way our loss to minimize is the weighted sum of the crossentropy loss in sketch classification task and the CRF loss in entity labeling task Besides the single sentence tasks Bert provides sentence pair classification tasks as well We implement the matching network taking question patterns and logical form patterns as input The right pattern pairs are regarded as positive samples We select negative samples only from the logical form patterns in the same sketch class for fixed question patterns The sketch mentioned is from the multitask model Just like sketch classification we denote the final hidden state corresponding to token CLS as Cp in mathbb Rh so the probability can be computed as where Wp in mathbb R2 times h bp in mathbb R2 and cj in lbrace 0 1rbrace Wp bp and all the parameters of bert are finetuned jointly to maximize the log likelihood probability of the correct class In the prediction stage the candidates for a question pattern are from logical form patterns in the same sketch class as well The probabilities of class 1 are scores we get for these pattern pairs From logical form patterns we get not only right predicates but right orders as well in which entities should appear So with the sketch and entities we aquire in the multitask 
model we can already generate complete logical form candidates with scores between 0 and 1 To alleviate the absence of knowledge base we incorporate the cooccurrence relationship between predicates and entities to evaluate the candidates We create the second matching network based on Bert as well This time the pairs we take as input are predicateentity ones We label the predicateentity pair as 1 if they have ever appeared in one triple in training set For a certain entity we select predicates that never appear with this entity as negetive samples In the prediction stage we score the predicateentity pairs in logical form candidates However this network does not take questions into account The predicate for a certain entity can differ a lot according to various questions For instance the predicate for what is birth date for barack obama is apparently different from that for what is birth place for barack obama But the entity barack obama has only one predicate with highest score Although this matching network only considers the cooccurrence relationship regardless of the information from questions scores produced by it do work as an auxiliary Although it is not easy for a complex network to generate the whole logical form such networks do reflect the mapping from an overall perspective So we adopt PointerBIBREF6 here to rerank We take the questions as input For logical forms entities and predicates are composed of words concatenated by or In order to utilize the information of words we split all entities and predicates and take split logical form candidates as output For a fixed question we calculate crossentropy losses of different pairs with split logical forms Then every loss is divided by the max one and subtracted by 1 to be normalized between 0 and 1 The higher the score is the more the logical form candidate is likely to be true A linear combination of the three intermediate scores from pattern pair matching network predicateentity pair matching network and Pointer is used to rerank logical form candidates Weights are roughly adjusted in validation set The number of sketch classes is 15 and the number of labels is 4 in the multitask model The Bert model we applied is BERTBase Uncased with 12layer 768hidden 12heads and 110M parameters All the parameters are finetuned in validation set In the multitask model we train the model for 10 epoches We set batch size to 32 and learning rate to 2e5 The weight of the loss in sketch classification is 1 while that in entity labeling is 2 We train 3 models in pattern pair matching network with different epoches As for predicateentity pair matching network the number of epoch we use is 3 In Pointer word embeddings were initialized by GloveBIBREF10 The hidden dim of LSTM is set to 256 More details will be released in our source codes later Because of the instability of the performance of neural network over training epoches ensemble learning is incorporated both in pattern pair matching network and Pointer Scores of Pointer is the simple average of scores from 3 models with different epoches When it comes to pattern pair matching net it is a little complex We make a prediction for training set with our best model We apply ranking sampling here From those labeled as 0 but with probabilities larger than 00001 we select 20 of them while 5 of those whose probabilities are smaller than 00001 as new negative samples We train new models with new training data resampled before every epoch based on one best model and base model of Bert After several epoches we 
average the probabilities of new models and original models for ensemble We demonstrate the detailed performance in Table TABREF16 All samples are classified into 15 classes We show the results for every class and the overall weighted average preformance in validation set Because the complete test set is not open yet we only provide the overall results in test set returned after submission It can be seen the overall error rate of our multitask model is only 193 which means this task is successful In sketch classification Errs scores of all classes are lower than 1 except multiturnanswer Its recall is 10000 while its precision is 9138 092 of samples in multiturnentity are misclassified to multiturnanswer in validation set We find there are separator in logical forms from three classes of multiturn questions Multiturnpredicate questions have two different entities while both multiturnentity and multiturnanswer questions have only one This kind of entity information is passed to sketch classification through shared parameters So our system makes some mistakes while distinguishing multiturnentity samples from multiturnanswer samples As for entity labeling the overall error rate is 172 We check the wrong samples and find our model is not so good at recognizing entity boundaries especially while encountering some special tokens such as articles genitive s and quotation mark Actually it is not easy for human to define an entity in these cases as well At last Errf of our best model is 1314 in validation set 1553 in full test set and 3692 in hard test subset We inspect the output of our model in order to identify the causes of errors The entity error takes up 2043 not only because of wrong entities but also right entities in wrong order 7957 of incorrect samples have wrong predicates although their entities are right Our accuracy is extremely low for multichoice We look into this class and find 5072 of errors are because of right entities with wrong order Actually there are three different entities in sketch of multichoice class and two of them are semantically exchangeable in the form or equal x E1 equal x E2 So it is not easy for our pattern pair matching network to deal with this problem In the meantime our model achieves error rate of 0 for 3 classes in validation set Our system is compared with that of other teams in NLPCC 2019 Shared Task 2 The top 4 results are shown in Table TABREF17 Our system on the submission list is WLIS which achieves the 3rd place After some optimizations for parameters seq2seq network structure and sampling the performance of our new system has been improved a lot The accuracy of our new baseline reaches 7742 By incorporating two auxiliary scores the accuracy is improved to 8686 in validation set Accuracy achieves 8447 in full test set and 6308 in hard test subset Our accuracy in full test set supasses the 2nd place but is still lower than the 1st place by 121 while the accuracy on hard subset is higher than that of the 1st place by 565 Semantic parsing is a longstanding problem in NLP mapping natural language utterances to logical formsBIBREF11 BIBREF12 BIBREF13 BIBREF14 BIBREF15 BIBREF16 Since it is not easy for semantic parsing to label data manually reinforcement learningBIBREF17 and transferBIBREF18 BIBREF19 are applied when data is not enough But in most cases we are studying how to improve the results when enough data is available for supervised learning Basic seq2seq networkBIBREF20 enables the model to be trained in an endtoend mode Later structureaware 
models are designed to generate logical forms more elaborately Seq2treeBIBREF4 is equipped with a treestructured decoder to parse hierarchical logical forms while STAMPBIBREF1 adopts a switching gate in the decoder to control the generation of SQL The models mentioned above all generate the whole logical form in one go There are also some works that applied sketchbased approach to solve the problem It has already been explored in the field of program synthesisBIBREF21 Coarse2fineBIBREF7 decomposes the decoding process to 2 stages Sketches are generated in the first stage while model in the second stage fills in missing details SQL generating is especially suitable for this method because of its easy sketches SqlnetBIBREF22 divides the task into 6 subtasks to generate different part of SQL SQLovaBIBREF23 also inherits this idea and incorporate BertBIBREF9 in his model The idea of our system is similar to that of SQLova We do not use complex decoders to make our network structureaware The architectures of models are easy in every stage We first determine sketches as the highlevel structure Lowlevel details are added in later stages The losses of seq2seq network is applied here to rerank from an overall perspective So we actually combine both seq2seq method and sketchbased method to some extent In this paper we presented a sketchbased system for semantic parsing which disentangles highlevel structures from lowlevel details Due to the absence of knowledge base we propose to collect question patterns and logical form patterns to capture the implicit relationship between questions and predicates which can then be used to perform reranking in a Pointer network within a seq2seq framework Our previous submitted system achieves the 3rd place while our new system outperforms the 1st place for accuracy in hard test subset Since the knowledge base will be released later in future work we would like to incorporate new knowledge to improve our system We will extend our system to other semantic parsing tasks as well This work is supported in part by the NSFC Grant No61672057 61672058 61872294 the National HiTech RD Program of China No 2018YFB1005100 For any correspondence please contact Yansong Feng </s>
<s>[INST] Progressive Joint Modeling in Unsupervised Single-channel Overlapped Speech Recognition [/INST] The cocktail party problem BIBREF0 BIBREF1 referring to multitalker overlapped speech recognition is critical to enable automatic speech recognition ASR scenarios such as automatic meeting transcription automatic captioning for audiovideo recordings and multiparty humanmachine interactions where overlapping speech is commonly observed and all streams need to be transcribed The problem is still one of the hardest problems in ASR despite encouraging progresses BIBREF2 BIBREF3 BIBREF4 BIBREF5 In this paper we address the speech recognition problem when multiple people speak at the same time and only a single channel of overlapped speech is available This is useful when only a single microphone is present or when microphone array based algorithms fail to perfectly separate the speech Specifically the paper focuses on an unsupervised inference method which does not need any prior knowledge of speakers To obtain transcriptions of all speakers from the overlapped speech joint inference is conducted based on multiple knowledge sources frequency domain voice discrimination temporal speaker tracing linguistic information and speech recognition Prior work in unsupervised singlechannel overlapped speech recognition generally separates the problem into speech separation and recognition stages Before the deep learning era the most popular speech separation technique is computational auditory scene analysis CASA BIBREF2 There are two main stages in CASA approaches segmentation and grouping The segmentation stage decomposes mixed speech into timefrequency segments assumed to be derived from the corresponding speakers based on perceptual grouping cues BIBREF6 The grouping stage simultaneously and sequentially concatenates the segments to generate independent streams for each speaker Nonnegative matrix factorization NMF BIBREF7 is another popular technique which aims to learn a set of nonnegative bases that can be used to estimate mixing factors during evaluation Recently several deep learning based techniques have been proposed but seldom concentrate on the unsupervised case which is more applicable In BIBREF8 BIBREF9 BIBREF10 the authors propose deep clustering DPCL in which a deep network is trained to produce spectrogram embeddings that are discriminative for partition labels given in training data The model is optimized so that in the neural network embedding space the timefrequency bins belonging to the same speaker are closer and those of different speakers are farther away Speech segmentations are therefore implicitly encoded in the embeddings and can be obtained by clustering algorithm In BIBREF11 a DNNbased gender mixture detection system and three genderdependent speech separation systems are constructed The latter ones directly infer the feature streams of two speakers respectively For all these methods speech separation and recognition are two separate components and the latter is applied to the separated feature streams The mismatched feature in the speech recognition stage is one of the limitation in these methods In light of permutation invariant training PIT proposed in speech separation BIBREF12 originally the PITASR model BIBREF13 is the first attempt in joint modeling of unsupervised singlechannel mixed speech recognition Whereas the original PIT technique jointly models the voice discrimination and speaker tracing PITASR further integrates speech recognition into the neural network 
with a unified cross entropy CE criterion Although PITASR shows promising results it suffers from several disadvantages which are analyzed in Section Unsupervised Singlechannel Overlapped Speech Recognition In this paper progressive joint modeling is proposed to divide the single channel overlapped speech recognition problem into three subproblems for initialization framewise interpreting speaker tracing and speech recognition Figure 1 Each module is initialized by placing it into a series of networks that solve progressively more difficult problems After the initialization modules are jointly trained with two novel strategies namely selftransfer learning and multioutput sequence discriminative training Transfer learning is introduced in this problem which leverages parallel clean speech to improve the training targets for the network Our discriminative training formulation is a modification of standard formulations that also penalizes competing outputs of the system The proposed framework achieves 30 relative improvement over both a strong jointly trained system PITASR and a separately optimized system PIT for speech separation with clean speech ASR The improvement comes from better model generalization training efficiency and the sequence level linguistic knowledge integration The rest of the paper is organized as follows In Section Unsupervised Singlechannel Overlapped Speech Recognition the unsupervised singlechannel overlapped speech recognition problem is briefly reviewed In Section Modularization the modular initialization and progressive joint training is proposed In Section Unsupervised Singlechannel Overlapped Speech Recognition0 the selftransfer learning is proposed and in Section Multioutput Sequence Discriminative Training multioutput sequence discriminative training is proposed In Section Experiment the experimental results are reported in artificial overlapped Switchboard corpus and Eval2000 hub5eswb test set followed by the conclusion in Section Conclusion Unsupervised singlechannel overlapped speech recognition refers to the speech recognition problem when multiple unseen talkers speak at the same time and only a single channel of overlapped speech is available Different from supervised mode theres not any prior knowledge of speakers in the evaluation stage In the problem only the linearly overlapped singlechannel signal is known which is defined as beginsplit
\mathbf{O}_u^m = \sum_{n=1}^{N} \mathbf{O}_u^{nr}
endsplit Eq 4 where mathbf Ounr is the clean signal stream of speaker n at utterance u and mathbf Oum is the overlapped speech stream of utterance u N is the number of streams Single channel ASR is always formulated as a supervised sequence labeling problem given by Pmathbf Lumathbf Ou which is the posterior probability of transcription sequence mathbf Lu given the feature sequence mathbf Ou in the utterance u Nevertheless the multispeaker problem is to model the joint distribution of n0 streams of transcriptions given the overlapped speech signal n1 Due to the symmetric labels given the mixture signals it is no longer a supervised optimization problem One branch of methods assumes the inference streams are conditionally independent and tries to assign the correct transcription stream n2 to the corresponding output stream n3 beginsplit
P(\mathbf{L}_u^1, \dots, \mathbf{L}_u^N \mid \mathbf{O}_u^m) \approx \prod_{n=1}^{N} P(\mathbf{L}_u^{nr} \mid \mathbf{O}_u^m)
endsplit Eq 5 Another branch of methods assume the overlapped signal can be separated to mathbf Ouncmathop mathbf Ouncapprox mathbf Ounr Because the original streams mathbf Ounr are conditionally independent with each other the separated signal streams mathbf Ounc are also assumed to be conditionally independent Thus Equation 5 can be derived to Equation 6 beginsplit
P(\mathbf{L}_u^1, \dots, \mathbf{L}_u^N \mid \mathbf{O}_u^m) \approx \prod_{n=1}^{N} P(\mathbf{L}_u^{nr} \mid \mathbf{O}_u^{nc})
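As an illustration of this second factorization, the following is a minimal sketch of a separate-then-recognize pipeline under the independence assumption above; `separation_model` and `clean_asr` are placeholder callables, not components defined in the paper.

```python
def separate_then_recognize(mixed_features, separation_model, clean_asr):
    """Eq. 6 style pipeline: estimate per-speaker feature streams, then decode
    each stream independently with a recognizer trained on clean speech."""
    separated_streams = separation_model(mixed_features)   # list of N feature streams
    return [clean_asr(stream) for stream in separated_streams]
```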
endsplit Eq 6 However neither assumption is precise For the first assumption there is no predetermined method to obtain the ideal label arrangements which is called the speaker tracing problem The second assumption is that the speech separation and recognition are independent processes which introduces an artificial information bottleneck In BIBREF12 the reference streams are treated as an unordered set The PIT framework is proposed to address the speech separation problem by firstly determining the assignment of the reference stream and inference stream that minimizes the error at the utterance level based on the forwardpass result This is followed by minimizing the error given the utterance level best assignment BIBREF13 extends this by integrating speech recognition into the neural network with a unified crossentropy CE training criterion beginsplit
\mathcal{J}_{\text{CE-PIT}} = \sum_u \min_{s^{\prime} \in \mathbf{S}} \sum_t \frac{1}{N} \sum_{n \in [1,N]} CE\big(l_{ut}^{n s^{\prime}}, l_{ut}^{nr}\big)
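A minimal PyTorch-style sketch of this utterance-level PIT-CE criterion (notation as defined in the surrounding text); the tensor shapes and function names are illustrative assumptions, not the paper's CNTK implementation.

```python
import torch
import torch.nn.functional as F
from itertools import permutations

def pit_ce_loss(logits, labels):
    """logits: list of N tensors [T, C] (senone scores for each output stream).
    labels: list of N tensors [T] (senone ids from clean-speech forced alignment).
    Returns the average cross entropy under the best whole-utterance permutation."""
    N = len(logits)
    best = None
    for perm in permutations(range(N)):
        ce = sum(F.cross_entropy(logits[n], labels[perm[n]]) for n in range(N)) / N
        best = ce if best is None else torch.minimum(best, ce)
    return best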
endsplit Eq 7 Here mathbf S is the permutation set of the reference representation and the inference representation lutnsprime is the n th inference label of permutation sprime at frame t in utterance u and lutnr is the corresponding transcription label obtained by clean speech forcedalignment BIBREF14 The PITASR criterion BIBREF13 elegantly integrates speech separation speaker tracing and speech recognition together as Figure 2 a Its joint modeling approach eliminates the artificial bottleneck between the speech separation and speech recognition tasks But the method suffers from several disadvantages which deteriorates the performance In this work we propose three separate enhancements to improve the performance of PITASR First the structure and accuracy of the model is improved through modularization and pretraining Framewise interpreting speaker tracing and speech recognition modules replace the monolithic structures used in previous work These modules are progressively pretrained and jointly finetuned Second we demonstrate a natural way to incorporate a form of transfer learning Clean speech features are used to generate soft label targets which are interpolated with the reference label alignments Third multioutput discriminative training is applied to the system As with singlestream speech recognition multistream discriminative training can help with model generalization Additionally the objective function is augmented to reduce crossspeaker word assignment errors In the original formulation a PITASR model consists of a single monolithic structure that predicts independent targets for each speaker We improve this by replacing the main network structure with a modular structure shown in Figure 1 This modular structure consists of three tasks namely interpreting mixed acoustic data tracing speakers across time and predicting acoustic label sequences First the framewise module is designed to extract the local timefrequency information necessary to separate the overlapped speech into individual acoustic representations It is entirely local and does not depend on sequencelevel information Second the speaker tracing module accepts framewise acoustic representations from the framewise module and traces the speaker information This process concatenates adjacent acoustic representations of the same speaker together to infer the recovered speech features of each speaker Third the speech recognition modules accept the sequences of recovered acoustic features from each speaker and produce sequences of label scores suitable for use in an automatic speech recognition system Because each speech recognition module performs the same task it is natural to share the parameters of this module across each instance in the final model Although it is possible to train the modularized network of Figure 2 e from random initialization it is better to use a progressive training strategy This strategy is motivated by the Curriculum learning theory in BIBREF15 and integrates both modular initialization and joint training We train a simple model first and then use it as a pretrained building block for a more complicated model and task Figures 2 be illustrate how the model becomes progressively more complex while solving more difficult problems from framewise mean squared error to whole utterance cross entropy Our simplest model shown in Figure 2 b is trained to solve a framewise speech separation task For N speakers given the mixed data mathbf Oum the model infers an acoustic representation outn for each speaker n 
at frame t of utterance u The objective function of the framewise training is given as beginsplit
\mathcal{J}_{\text{F-PIT}} = \sum_u \sum_t \frac{1}{N} \min_{s^{\prime} \in \mathbf{S}} \sum_{n \in [1,N]} MSE\big(o_{ut}^{n s^{\prime}}, o_{ut}^{nr}\big)
endsplit Eq 12 where mathbf S is the permutation set of the reference representation and the inference representation outnsprime and outnr is the frame level acoustic representation of permutation sprime and the reference clean speech respectively In each frame t of the utterance u the overall minimum square error MSE is obtained by comparing all the reference and inference representations of each permutation sprime The architecture for pretraining the speaker tracing module is explained in Figure 2 c The tracing module is combined with a pretrained framewise module that has had its N output layers removed As in BIBREF12 the PIT objective function is applied in utterance level beginsplit
\mathcal{J}_{\text{U-PIT}} = \sum_u \min_{s^{\prime} \in \mathbf{S}} \sum_t \frac{1}{N} \sum_{n \in [1,N]} MSE\big(o_{ut}^{n s^{\prime}}, o_{ut}^{nr}\big)
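A minimal sketch contrasting the frame-level permutation of Eq. 12 with the utterance-level permutation of Eq. 13 for the feature representations; the array shapes are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

def frame_pit_mse(est, ref):
    """est, ref: [N, T, D]; the permutation is chosen independently per frame (Eq. 12)."""
    N, T, _ = est.shape
    total = 0.0
    for t in range(T):
        errs = [np.mean([np.mean((est[n, t] - ref[p[n], t]) ** 2) for n in range(N)])
                for p in permutations(range(N))]
        total += min(errs)
    return total

def utterance_pit_mse(est, ref):
    """One permutation is fixed for the whole utterance (Eq. 13), forcing speaker tracing."""
    N = est.shape[0]
    errs = [np.mean([np.mean((est[n] - ref[p[n]]) ** 2) for n in range(N)])
            for p in permutations(range(N))]
    return min(errs)
```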
endsplit Eq 13 The speech recognition module is separately pretrained in the same way as a conventional acoustic model with clean speech and a crossentropy objective function maximizing pmathbf Lumathbf Ou This is illustrated in Figure 2 d The final model shown in Figure 2 e is created by stacking the speech recognition modules onto the outputs of the pretrained speaker tracing and framewise modules It is jointly trained with an utterance level PITCE objective function given in Equation 7 Notably even though the proposed structure has several copies of the speech recognition module the numbers of parameters between Figure 2 a and Figure 2 e are similar Because the speech recognition modules are solving similar problems their parameters can be shared In preliminary experiments the performance gap between sharing and independent parameters is less than 3 Thus to make the comparison fair without specific explanation the shared structure is used The advantage of the progressive joint training includes Decreased model complexity leading to better system generalization and data efficiency By separating system into proper modules the model complexity is less than the allinone system in PITASR As unsupervised singlechannel overlapped speech recognition contains several of the hardest components in speech processing the model complexity reduction is critical to the empirical training performance Faster convergence of the training process and better quality of the local minimum Curriculum learning improves both the training speed and the performance of the model BIBREF15 As shown in Figure 4 the progressive joint training needs fewer epochs to converge and it converges to a better local minimum An additional benefit is that the module initializations all take much less time compared with the joint training Potential to integrate with other technologies Stateoftheart technologies in each field can be applied to the initialization of each module Transfer learning or teacherstudent training is a powerful technique to address domain adaptation problems in speech recognition In this section we show that multichannel ASR is a type of domain adaptation and that transfer learning can be used to improve model training Transfer learning has been proposed to solve the distribution mismatch problem in feature space BIBREF16 To use this method in singleoutput ASR domain adaptation parallel data must be available from a source domain and a target domain A fully trained model in the source domain the teacher processes data and generates posterior probabilities which are sometimes referred to as soft labels These soft labels then replace or augment the usual hard labels when training the student model with parallel data in the target domain BIBREF17 To train the student the KullbackLeibler divergence KLD between the output distributions of the teacher and student models is minimized as below beginsplit
KLD(y^T, y^S) = \sum_i y_i^T \log \frac{y_i^T}{y_i^S}
             = \sum_i \big( y_i^T \log y_i^T - y_i^T \log y_i^S \big) \qquad \text{(Eq. 22)}

-\sum_i y_i^T \log y_i^S
endsplit Eq 23 where yiT and yiS is the teacher and student distributions respectively Because the first term is not related to the student model optimization only the second term is used for optimization Comparing Equation 23 to CE criterion in ASR the hard labeling is replaced by the soft distribution inferred from the source data by the teacher model In light of above discussion selftransfer learning can be extended to the training of any multichannel speech recognition system The student is of course the multichannel speech recognition system It operates in the target domain of mixed speech acoustic data and must produce separate outputs for each speaker in the mixture The teacher also must produce separate outputs for each speaker but has access to the source domain unmixed clean speech The teacher model is a set of clean speech acoustic models operating independently on the separate channels of clean speech The selftransfer learning method then minimizes the KLD between the output distribution of the mixed speech model and the set of clean speech models The KL divergence defined for utterance level PIT training between the clean speech model distribution and the joint model distribution is as below beginsplit
\mathcal{J}_{\text{KLD-PIT}} = \sum_u \min_{s^{\prime} \in \mathbf{S}} \sum_t \frac{1}{N} \sum_{n \in [1,N]}
KLD\Big( P(l_{ut}^{nc} \mid \mathbf{O}_u^{nr}),\; P(l_{ut}^{n s^{\prime}} \mid \mathbf{O}_u^m) \Big)
endsplit Eq 25 where the calculation of each KLDcdot pair is the same to the adaptationpurpose singlechannel case in Equation 23 Namely the jointtrained model distribution ySPlutnsprime mathbf Oum is taken as the student model distribution and the clean speech model distribution yTPlutncmathbf Ounr is taken as the teacher model distribution It is notable that when this method is applied to the modular structure proposed in this work as in Figure 3 the speech recognition modules can be initialized with an exact copy of the teacher model The training framework for selftransfer learning is shown in Figure 3 The soft targets generated by the teacher models are interpolated with the hard labeling as in BIBREF18 The training procedure is as below Clone the speaker tracing layers in the bottom Clone 2N copies of clean ASR model initialized in Section Modularization half for stacking upon the speaker tracing layers half for model inference given each clean speech stream Use simultaneous clean speech streams mathbf Ounr and the overlapped speech stream mathbf Oum to do joint training For each minibatch do forward propagation of the clean ASR model using each clean speech stream to calculate N streams of Plutncmathbf Ounr respectively Do forward propagation of the joint model using overlapped speech stream to calculate N streams of inference distributions Plutnsprime mathbf Oum For that minibatch calculate the error signal of Equation 25 and then do back propagation for the joint model Update parameters of the joint model and repeat until convergence The proposed method elegantly solves the label mismatch problem and helps the model convergence Namely using hard labeling obtained from forcedalignment in the clean speech is not proper because the feature has been distorted in the mixed speech The proposed method replaces it with the soft distribution In addition the proposed method formulates the joint training of multichannel ASR by domain adaptation between clean speech and overlapped speech Thus the soft distribution also helps model convergence because its easier to recreate its performance compared with training a speech recognition model from scratch The evidence can be observed from the training curve in Figure 4 that the initial CE of selftransfer learning based progressive joint training is much better than that of both joint modeling and progressive joint modeling Notably the different starting points between the progressive joint modeling and selftransfer learning based progressive joint modeling is because the CE in the former system is calculated versus hard labeling while for the latter system it is versus the soft distribution inferred from simultaneous clean speech Thus with a better starting point and less parameter updating requirement finally the model also comes into better minimum in the figure The relationships of the proposed method and previous works are summarized as below leftmargin Model space adaptation The formulation of the proposed method is similar to KLDbased adaptation BIBREF20 and teacherstudent based domain adaptation BIBREF17 In BIBREF20 to conservatively update model parameters using adaptation data the target probability distribution is changed from the ground truth alignment to a linear interpolation with the distribution estimated from the unadapted model In BIBREF17 the feature mismatch problem in the target domain is solved by minimizing the inference distribution divergence between the target and source domains using paralleldata The reader is free to 
consider the proposed method as analogous to optimizing student network in the target domain ie overlapped speech to behave similarly to the welltrained teacher network in the source domain ie clean speech while bearing in mind the proposed method requires module stacking because the motivation is to do joint training Stereo piecewise linear compensation for environment SPLICE BIBREF21 The SPLICE algorithm uses stereo data to do noise reduction and channel distortion compensation In BIBREF22 the clean feature is used for the teacher model to provide supervision on the stereo noisy data trained student model In BIBREF18 the multichannel enhanced feature is used for the teacher model In this work the teacherstudent framework is also based on stereo data The student model is initialized better to cope with the more difficult modeling problem and the entire framework is expanded to handle multiple output streams Progressive stacking transfer learning BIBREF23 proposes to progressively conduct transfer learning to train speech enhancement layers The motivation of the progressive stacking is only to gradually model a hard task by dividing into several same but smaller tasks Thus the criteria of all tasks are the same However the proposed method is to do joint training of distinct tasks And each task is fully trained with specific data and criterion Selfsupervised training BIBREF24 proposes to use a teacher model based on more accurate sensor information as the supervision of the student model The motivation of the proposed method is different which is to transfer distribution between two models with feature mismatch Besides the empirical procedure of the proposed method is to finetune the original model in the target feature distribution from supervision of itself in parallel source distribution Following the transfer learning diagram the joint model can also benefit from an ensemble of teachers BIBREF25 Specifically because the problem includes several subproblems discussed in Section Modularization different neural network structures can show different superiorities eg with different numbers of streamdependent layers and streamindependent layers Learning from an ensemble of joint models with different structures is promising in both performance improvement and model compression Speech recognition is inherently a sequence prediction problem In singleoutput ASR sequence level criteria such as sequence discriminative training tend to improve performance The unsupervised singlechannel overlapped speech recognition problem further includes the speaker tracing problem which is also a sequence level problem Previous works concatenate frame level CE as the criteria of the sequence level problem which limits the modeling effect of the neural network In this paper sequence discriminative training of multiple output streams is proposed for the first time In singleoutput ASR to form a sequence discriminative training criterion it is necessary to calculate the sequence posterior probability using Bayes theorem as below beginsplit
P(\mathbf{L}_u \mid \mathbf{O}_u) = \frac{p(\mathbf{O}_u \mid \mathbf{L}_u)\, P(\mathbf{L}_u)}{p(\mathbf{O}_u)}
endsplit Eq 41 Here mathbf Lu is the word sequence of utterance u Pmathbf Lu is the language model probability pmathbf Oumathbf Lu is the corresponding acoustic part The marginal probability pmathbf Ou of the feature sequence mathbf Ou is modeled by summation of the probability over all possible hypothesis sequences beginsplit
p(\mathbf{O}_u) = \sum_{\mathbf{L}} p(\mathbf{O}_u, \mathbf{L}) = \sum_{\mathbf{L}} P(\mathbf{L})\, p(\mathbf{O}_u \mid \mathbf{L})
endsplit Eq 42 Here mathbf L denotes all competing hypotheses As an example of the sequence discriminative training criteria the maximum mutual information MMI BIBREF26 of inference distribution stream mathbf Lu in utterance u is defined as below beginsplit
\mathcal{J}_{\text{SEQ}}(\mathbf{L}_u, \mathbf{L}_u^r) = \log P(\mathbf{L}_u^r \mid \mathbf{O}_u)
endsplit Eq 43 where mathbf Lur is the corresponding reference For the overlapped speech recognition problem the conditional independence assumption in the output label streams is still made as in Equation 5 Then the crossentropy based PIT can be transformed to sequence discriminative criterion based PIT as below beginsplit
\mathcal{J}_{\text{SEQ-PIT}} = \sum_u \min_{s^{\prime} \in \mathbf{S}} \frac{1}{N} \sum_{n \in [1,N]} \mathcal{J}_{\text{SEQ}}\big(\mathbf{L}_u^{n s^{\prime}}, \mathbf{L}_u^{nr}\big)
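A schematic sketch of this criterion with the sequence score expressed as a loss (for example a negated MMI value), so that the best assignment is the minimum over permutations; `seq_loss` is a placeholder for the lattice-free computation and is not the paper's implementation.

```python
from itertools import permutations

def seq_pit_objective(hyp_streams, ref_streams, seq_loss):
    """hyp_streams, ref_streams: lists of N per-stream outputs and references.
    seq_loss(hyp, ref) -> sequence-level loss; lower is better."""
    N = len(hyp_streams)
    losses = [sum(seq_loss(hyp_streams[n], ref_streams[p[n]]) for n in range(N)) / N
              for p in permutations(range(N))]
    return min(losses)   # best-permutation assignment, mirroring Eq. 44
```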
endsplit Eq 44 Different from Equation 7 the best permutation is decided by mathcal JtextSEQmathbf Lunsprime mathbf Lunr which is the sequence discriminative criterion of taking the sprime th permutation in n th output inference stream at utterance u Similar to CEPIT mathcal JtextSEQ of all the permutations are calculated and the minimum permutation is taken to do the optimization The definition of mathcal JtextSEQmathbf Lunsprime mathbf Lunr is similar to Equation 43 in single output ASR beginsplit
\mathcal{J}_{\text{MMI}}
= \sum_u \mathcal{J}_{\text{SEQ}}\big(\mathbf{L}_u^{n s^{\prime}}, \mathbf{L}_u^{nr}\big)
= \sum_u \log \frac{\sum_{\mathbf{L}_u} p(\mathbf{O}_u \mid \mathbf{L}_u)^{\kappa} P(\mathbf{L}_u)}{\sum_{\mathbf{L}} p(\mathbf{O}_u \mid \mathbf{L})^{\kappa} P(\mathbf{L})}
endsplit Eq 45 For simplicity mathbf Lumathbf Lunr and mathbf L is all the hypothesis sequences generated by the output stream mathbf Lunsprime pmathbf Oumathbf Lu and pmathbf Oumathbf L is the conditional likelihood obtained from forward propagation of the joint model Pmathbf Lu and Pmathbf L is the prior probability obtained from language model The hypothesis sequence mathbf L refers to all the competing hypotheses in the speech recognition Bad modeling of mathbf L namely ignoring some modeling errors results in imprecise estimation of Equation 42 which hurts ASR performance Thus competing hypotheses modeling is key to the discriminative training Empirically it is constrained by linguistic search space In the single output ASR the linguistic search space is further pruned by the online method latticebased discriminative training BIBREF27 or by the offline method latticefree LF discriminative training BIBREF28 BIBREF29 BIBREF30 Compared with single output ASR the additional error hypothesis types include i Assignment errors someone said a word but it is assigned to the wrong channel ii cross talk errors one person said a word but it appears in multiple channels They both come from imperfect acoustic modeling and result in several challenges in discriminative training The first problem is linguistic search space modeling As discussed in the first paragraph there are mainly two branches of methods When training the joint model speaker tracing results can always change which results in different permutations of the same utterance between different epochs Thus if using latticebased method lattices should be updated after each epoch in case of bias in the search space modeling Another choice is to use a prepruned senone level language model as the common search space for all utterances BIBREF30 With this method the lattice generation problem in multiple outputs can be solved and the discriminative training can be conducted efficiently in the shared search space The second problem is the swapped word modeling in multiple outputs Swapped word results in both cross talk errors and assignment errors Thus bad swapped word modeling hurts the ASR performance Generally the linguistic search space is estimated from the transcription of the training dataset And then sequence criterion is calculated in this search space Because theres no swapped word phenomenon in the transcription thus the search space doesnt contain swapped word cases which results in overestimating the sequence criterion Especially for the multiple output streams the swapped word errors are critical to the ASR performance Three methods are proposed to cope with the problem leftmargin Artificial swapped words A very simple method is to generate several copies of the transcription with artificially swapped words in each copy And then the language model is estimated on the regenerated transcription With this method some of the swapped word cases can still exist in the search space Thus the problem is alleviated Empirically the senone level language model is obtained from the senone level transcription namely clustered triphone state alignment In case of significant increasing in the search space because of the swapped word we set a rule that in each frame the probability of senone swapping is alpha But if the senone is swapped the senone sequence of the following beta frames wont be swapped And gamma copies of the transcriptions are generated Decorrelated lattice free MMI LFDCMMI The motivation is that swapped words come from the other 
output streams Thus adding these output streams into the search space and minimizing them in the denominator of discriminative training can alleviate the problem beginsplit
\mathcal{J}_{\text{LF-DC-MMI}}
= \sum_u \log \frac{\sum_{\mathbf{L}_u} p(\mathbf{O}_u \mid \mathbf{L}_u)^{\kappa} P(\mathbf{L}_u)}{\Big[ \sum_{\mathbf{L}} p(\mathbf{O}_u \mid \mathbf{L})^{\kappa} P(\mathbf{L}) \Big]^{(1-\lambda)}}
\cdot \frac{1}{\Big[ \sum_{\hat{\mathbf{L}}_u} p(\mathbf{O}_u \mid \hat{\mathbf{L}}_u)^{\kappa} P(\hat{\mathbf{L}}_u) \Big]^{\lambda}}
endsplit Eq 49 In Equation 49 the other output streams are denoted as mathbf Lhatu An interpolation weight lambda is added with the augmented term in the denominator Decorrelated lattice free boosted MMI LFDCbMMI Analogous to boosted MMI BIBREF31 as Equation 51 beginsplit
\mathcal{J}_{\text{LF-bMMI}}
= \sum_u \log \frac{\sum_{\mathbf{L}_u} p(\mathbf{O}_u \mid \mathbf{L}_u)^{\kappa} P(\mathbf{L}_u)}{\sum_{\mathbf{L}} p(\mathbf{O}_u \mid \mathbf{L})^{\kappa} P(\mathbf{L})\, e^{-b \max_{\mathbf{L}_u} A(\mathbf{L}, \mathbf{L}_u)}}
endsplit Eq 51 we propose decorrelated lattice free boosted MMI LFDCbMMI as Equation 52 Here b is the boosting factor Amathbf Lmathbf Lu is the state level accuracy between sequence mathbf L and mathbf Lu By this method the ASR error hypotheses can be further minimized in the denominator In the proposed method both the ASR errors between the target inference sequence and the target reference and the falselyrecognition of the interfere streams are boosted beginsplit
\mathcal{J}_{\text{LF-DC-bMMI}}
= \sum_u \log \sum_{\mathbf{L}_u} p(\mathbf{O}_u \mid \mathbf{L}_u)^{\kappa} P(\mathbf{L}_u) \cdot
\frac{1}{\sum_{\mathbf{L}} p(\mathbf{O}_u \mid \mathbf{L})^{\kappa} P(\mathbf{L})\, e^{-b \max_{\mathbf{L}_u} A(\mathbf{L}, \mathbf{L}_u) \,-\, \hat{b} \max_{\hat{\mathbf{L}}_u} \big(1 - A(\mathbf{L}, \hat{\mathbf{L}}_u)\big)}}
endsplit Eq 52 where hatb is the decorrelated boosting factor and Amathbf Lmathbf Lhatu measures how many falsely recognitions of the interfere streams Experiments are conducted on all three methods in Section Sequence Discriminative Training and the first method can be further combined with the other two The experimental results are reported in artificial overlapped Switchboard corpus and Eval2000 hub5eswb test set Although the methods presented here are valid for any number of overlapped speakers we focus on the twotalker scenario For training the Switchboard corpus BIBREF32 is used which contains about 300 hours of speech Evaluation is carried out on the Switchboard SWB subset of the NIST 2000 CTS hub5eswb test set The waveforms were segmented according to the NIST partitioned evaluation map PEM file Twotalker overlapped speech is artificially generated by mixing these waveform segments To maximize the speech overlap we developed a procedure to mix similarly sized segments at around 0dB First we sort the speech segments by length Then we take segments in pairs zeropadding the shorter segment so both have the same length These pairs are then mixed together to create the overlapped speech data The overlapping procedure is similar to BIBREF13 except that we make no modification to the signal levels before mixing After overlapping theres 150 hours data in the training called 150 hours dataset and 915 utterances in the test set After decoding there are 1830 utterances for evaluation and the shortest utterance in the hub5eswb dataset is discarded Additionally we define a small training set the 50 hours dataset as a random 50 hour subset of the 150 hours dataset Results are reported using both datasets In the training stage 80dimensional logfilterbank features were extracted every 10 milliseconds using a 25millisecond analysis window The convolution neural network CNN models use 41 context frames 20 in both left and right and the long short term memory networks LSTM processed one frame of input at a time All neural networks were trained with the Microsoft Cognitive Toolkit CNTK BIBREF33 The detailed setup of CNN is listed in Section Separate Optimization vs Joint Modeling The acoustic model is based on three state lefttoright triphone models with 9000 tied states senones The individual senone alignments for the twotalkers in each mixed speech utterance are from the singlespeaker ASR alignment BIBREF30 For compatibility the alignment of the shorter utterance within the mixed speech is padded with the silence state at the front and the end The clean speech recognition performance in the corpus can be referred to BIBREF30 BIBREF29 Using clean speech model to do decoding in the overlapped speech isnt reported as its as bad as in BIBREF13 The baseline model of joint training is a PITASR model with a setup similar to BIBREF13 The PITASR model is composed of 10 bidirectional LSTM layers with 768 memory cells in each layer and 80dimensional feature The baseline model of separately optimized system is a PIT for speech separation PITSS model combined with a clean speech ASR model As PITSS model has shown competitive performance compared with other speech separation systems only PITSS model is taken as the baseline The PITSS model has a setup similar to BIBREF12 but with 6 bidirectional LSTM layers with 768 memory cells in each layer it directly outputs multiple channels of the 80 dimensional logfilterbank features the speech recognition module expects The speech recognition module pretrained as a clean 
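A minimal sketch of the overlap-generation procedure just described (sort segments by length, take them in pairs, zero-pad the shorter one, and mix without any level adjustment, i.e. around 0 dB); file handling and sampling-rate details are assumptions and not part of the paper.

```python
import numpy as np

def make_overlapped_pairs(segments):
    """segments: list of 1-D numpy arrays of speech samples.
    Returns (mixture, source_a, source_b) triples for two-talker training."""
    ordered = sorted(segments, key=len)               # sort by length
    mixtures = []
    for a, b in zip(ordered[0::2], ordered[1::2]):    # take segments in pairs
        if len(a) < len(b):
            a = np.pad(a, (0, len(b) - len(a)))       # zero-pad the shorter segment
        mixtures.append((a + b, a, b))                # mix with no gain change (~0 dB)
    return mixtures
```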
speech model is composed of 4 bidirectional LSTM layers with 768 memory cells in each layer It is trained from the corresponding source speech segments used to create the overlapped corpus After initialization the WER performance of the ASR model in the clean speech test set is 170 Although it would be easy to incorporate a stronger acoustic model BIBREF30 in conjunction with the proposed method we chose a structure that allows for a fair comparison in terms of the number of model parameters among the baselines and proposed methods However as discussed in Section Modularization the modular system needs fewer parameters and training iterations to achieve good performance Notably the use of enhanced signals after speech separation as training data of the speech recognition module tends to degrade the ASR performance and isnt included The reason can be from the sporadic distortions that signal processing inevitably adds similar to what has been observed in BIBREF18 In the evaluation stage a 30kvocabulary language model derived from the most common words in the Switchboard and Fisher corpora is used The decoder uses a statically compiled unigram graph and dynamically applies the language model score The unigram graph has about 300k states and 500k arcs BIBREF30 Two outputs of the PITASR model are both used in decoding to obtain the hypotheses for two talkers For scoring we evaluated the hypotheses on the pairwise score mode against the two references and used the assignment with better word error rate WER for each utterance BIBREF13 Only the average WER of two output streams is reported as the task is to correctly recognize all words from both speakers Table 1 shows the performance of the naive joint modeling PITASR compared with that of the separately optimized system The first row shows the performance of the joint training baseline model in this corpus PITASR BIBREF13 Compared with the 0dB WER result listed in BIBREF13 5580 the performance is reasonable As discussed in Section Modularization the separately optimized system has a similar number of parameters but different model architecture Thus to make a fair comparison the model with 6 streamindependent layers in the bottom and 4 parametershared streamdependent layers in the top denoted as 6 cdot 4 is listed in the second row The learnable structure is the same to the dashdot blocks shown in Figure 2 e but trained from scratch as 10 cdot 0 The performance of the 6 cdot 4 structure is significantly better than that of the 10 cdot 0 structure The reason is that unlike in the pure speech separation task the speech recognition stage in this task is also very hard and needs more nonlinear layers It also shows that this task is much harder than the speech separation so better joint training method to fulfill the performance of each module is critical to the success Without finetuning parameters the performance of the separately optimized system is shown in the third row The significantly worse performance comes from the feature mismatch in Equation 6 With finetuning parameters the performance is restored in the fourth and fifth rows The system in the fifth row can be viewed as a strong baseline with separate optimization and finetuning The better performance of the progressive joint training is from better model generalization and training efficiency Figure 4 show the effect Training curves of both joint modeling ie the second row in the table and progressive joint modeling ie the fifth row in the table are plotted From the figure both better 
starting point and better converged minimum can be observed in the joint progressive training With better joint training strategy shown in Section Selftransfer Learning Based Joint Modeling such modeling effect can be further fulfilled Table 2 shows the performance of the joint modeling from different modular initialization setups All modular systems are finetuned after joint training The first and second rows show the naive joint training models with BLSTM and BLSTM combined with CNN respectively 6 cdot 4 BLSTM refers to 6 layers BLSTM in the bottom and 4 parametershared layers for each output in the top ie 6 cdot 4 in Table 1 Layerwise context expansion with attention LACE model is used for CNN BIBREF36 which is a TDNN BIBREF37 variant where each higher layer is a weighted sum of nonlinear transformations of a window of lower layer frame Each LACE block starts with a convolution layer with stride 2 which subsamples the input and increases the number of channels This layer is followed by three RELUconvolution layers with jump links The channel size is 48 The slightly different setup compared with BIBREF30 is to make parameter number of one LACE block comparable with one layer of bidirectional LSTM with 768 memory cells ie 5M parameters Only one block of LACE is used to replace one layer of BLSTM as the framewise interpreting module because the speaker tracing and speech recognition modules are the hardest parts in the problem The other parts of the structure are the same and the proposed structure is denoted as 1 LACE 5 cdot 4 BLSTM From the table it can be observed that theres no improvement by merely stacking these kinds of neural networks together and jointly training them In the third and fourth rows the model is firstly initialized with framewise interpreting Figure 2 b speaker tracing Figure 2 c and speech recognition Figure 2 d tasks respectively and then jointly trained Comparing the fourth row to the third row 1 LACE 5 cdot 4 BLSTM shows larger improvement than 6 cdot 4 BLSTM Two conclusions can be derived from the results i CNN structure is more suitable for the framewise interpreting module because it focuses on the local context and has better modeling power of frequency variations BIBREF38 Meanwhile LSTM is good at temporal modeling which is more suitable for the speaker tracing and speech recognition module The architecture with 1 LACE 5 cdot 4 BLSTM layers combines their strength Notably BIBREF38 proposes a similar structure called CLDNN to form the acoustic model with modules focusing on different scales and take advantage of the complementarity of CNN LSTM and DNN The difference is that to fulfill the respective advantages in modeling the proposed method further pretrains each module with different criteria ii As the performance improvement from modularization and initialization is much larger in 1 LACE 5 cdot 4 BLSTM it shows that modulewise initialization is important to fulfill the modeling power of neural networks especially with different structures and scales To further analyze the framewise interpreting ability of CNN and BLSTM experiments without framewise interpreting initialization are conducted in the fifth and sixth rows This time the performances are similar both in 6 cdot 4 BLSTM and 1 LACE 5 cdot 4 BLSTM It shows that in initialization of the speaker tracing module both BLSTM and CNN can spontaneously learn the framewise interpreting ability We notice that in BIBREF12 framewise PIT training doesnt show good performance which is similar to our observation 
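As an illustrative aside on the permutation invariant training criterion that all of the PIT-ASR variants above share: the utterance-level PIT loss scores every assignment of output streams to reference talkers and keeps the cheapest one. The following is a minimal sketch of that criterion for a small number of talkers, written in PyTorch purely for illustration; the systems reported here were built with CNTK, and the function and variable names below are placeholders rather than the authors' implementation.

import itertools
import torch.nn.functional as F

def pit_cross_entropy(stream_logits, stream_targets):
    # stream_logits: list of S tensors [T, num_senones], one per model output stream
    # stream_targets: list of S senone alignment tensors [T], one per talker
    # Try every stream-to-talker assignment and keep the one with the lowest
    # total cross-entropy, so the loss is invariant to output-stream order.
    S = len(stream_logits)
    best = None
    for perm in itertools.permutations(range(S)):
        loss = sum(F.cross_entropy(stream_logits[s], stream_targets[perm[s]])
                   for s in range(S))
        if best is None or loss.item() < best.item():
            best = loss
    return best / S

For the two-talker case used throughout these experiments, this reduces to comparing the two possible pairings of the output streams with the two reference alignments.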
Table 3 shows the performance improvement of the transfer learning applied to joint modeling For transfer learning the interpolation weight between hard and soft labels is 05 The original PITASR system is in the first row and a better PITASR baseline with 6 cdot 4 structure in Table 1 is also included in the fourth row The ensemblebased transfer learning proposed in Section UID37 is tested in the second row The ensemble contains 3 types of structure 9 cdot 1 6 cdot 4 and 3 cdot 7 where the left number denotes the bottom streamindependent layers and the right number denotes the top streamdependent layers The student network learns from each teacher onebyone Although its not a large gain it shows improvement after learning from each teacher The result of replacing hard labeling with simultaneous clean speech based transfer learning is listed in the third and the fifth rows In both model architectures transfer learning brings about a relative 10 improvement over the respective baseline which is comparable with the result in BIBREF17 It shows that soft distribution inferred by the model with similar architecture is superior to the hard labeling The selftransfer learning based progressive joint modeling is finally listed in the sixth and the seventh row by using CEtrained and MMItrained clean speech teacher respectively Notably as the model framework discussed in Transfer Learning Based Joint Training the initializations of speech recognition modules are the respective clean speech teachers The result shows over 30 relative improvement Comparing the third and the fifth rows with Table 2 it can be further observed that combining progressive joint training and selftransfer learning brings about even larger improvement compared with the summation of the relative improvement from each of the two technologies The learning curve of the proposed method is also shown in Figure 4 From these results we conclude i The proposed method brings about faster convergence and better converged minimum The reason is discussed in Section Transfer Learning Based Joint Training The better convergence result also comes from the removal of the inappropriate hard alignment in the joint training ii Easier convergence helps the model fulfill the best performance in each module Thats the explanation of the even better synergy result compared with the summation of the relative improvements from transfer learning and progressive joint training iii Better teacher generates better student And the MMItrained distribution can also be transferred to the student model similar to what has been observed in BIBREF39 Figure 5 further shows the student performance versus quality of the teacher in transfer learning based joint modeling It can be observed that better student can usually be obtained with better teacher An important reason is that selftransfer learning is conducted by minimizing the divergence of its own distributions in mixed speech and clean speech Thus better original distribution including MMItrained distribution can intrinsically be part of the joint model and brings about better results The only inflection point is in epoch3 of the MMI teacher where the student performance is similar to epoch1 although the teacher model has better WER performance We believe the reason is that the distribution of the teacher model of epoch3 is hard to transfer to the student model because of the transition process from CEtrained distribution to MMItrained distribution Table 4 shows the performance improvement of sequence 
discriminative training based joint modeling In this experiment the baseline is PITASR model without progressive joint training All the structures are kept the same as 10 cdot 0 and only criteria are changed kappa 01 which is in accordance with the decoding setup The senone level language model for competing hypothesis modeling is obtained from the clustered triphone state alignment Trigram is used similar to BIBREF30 For the artificial swapped word method the hyperparameters in Section UID46 is decided by the preliminary experiment alpha 04 beta 10 and gamma 2 The swapped word senone level search graph is 4 times larger than the original trigram senone level language model graph lambda 01 b01 and hatb02 The baseline PITASR system is shown in the first row denoted as PITCE to show the criterion of the system Applying the naive sequence discriminative training method for multiple outputs in the second row only brings about 49 relative improvement For the recognition result word precision is improved but insertion error increases The reason is from imperfect search space modeling of swapped words discussed in Multioutput Sequence Discriminative Training By applying the proposed LFDCMMI method in the third row the recognition result is significantly improved compared with both baseline and the naive LFMMI The proposed method minimizes the swapped words from parallel output streams in the denominator modeling Thus the problem can be alleviated The fourth and fifth rows show the effect of using bMMI instead of MMI in the formulations The bMMI criterion boosts the ASR errors in the denominator modeling implicitly including possibly swapped words Although significant improvement can be observed between LFMMI and LFbMMI the LFDCbMMI explicitly includes the swapped words and achieves an even better result The proposed artificial swapped word method is shown in the sixth and seventh rows By comparing the sixth row with the second row and comparing the seventh row with the fifth row it shows slight but consistent improvement in solving the swapped word problem And the method can also be combined with LFDCbMMI to achieve 82 relative improvement versus the CEtrained PITASR baseline In the eighth to tenth rows experiments are conducted on 150 hours corpus The results are similar and LFDCbMMI criterion shows consistent improvement versus PITCE and naive sequence discriminative training criterion In Section Combination and Extension to Larger Corpus it is shown that sequence discriminative training can be combined with other technologies and achieves further consistent and significant improvement The discriminative training criterion helps the system training in two ways Firstly sequence level criterion helps the sequence level speaker tracing problem in PIT modeling Specifically linguistic information is encoded in the senone level language modeling in discriminative training Thus the procedure implicitly integrates linguistic information in the speaker tracing problem Secondly sequence discriminative training improves the speech recognition module Notably all the sequence discriminative training procedures are applied after CE initialization as in BIBREF30 With initialization it also helps Equation 44 to reach a better minimum Table 5 summarizes the performance improvement of integrating all the proposed methods The PITASR model BIBREF13 denoted as PITCE is taken as the baseline of naive joint modeling in the first row The separately optimized system namely PITSSASR is not included here As shown in 
Table 2 the performance deteriorates because of feature mismatch Instead the proposed progressive joint training model in the second row can be taken as a stronger separately optimized system with finetuning The proposed selftransfer learning based joint training model shows further significant improvement in the third and fourth rows Finally the multioutput sequence discriminative training is applied and achieves moderate improvement although the teacher model is already MMItrained similar to what has been observed in BIBREF40 Figure 6 shows decoding examples of the proposed methods versus the PIT baseline The baseline contains many errors due to bad model generalization with limited size of dataset With the proposed methods errors are significantly reduced Notably in this example the selftransfer learning based progressive joint training mainly reduces errors from similar pronunciations while sequence discriminative training mainly reduces explicit syntax or linguistic errors which is in line with the expectation With similar number of parameters but different neural networks namely 1 LACE 5 cdot 4 BLSTM the system can be consistently improved in the sixth to ninth rows We believe the further improvement comes from the proper modularization of the problem which is discussed in Section Separate Optimization vs Joint Modeling Table 6 expands the dataset to 150 hours to show the effect of more training data The naive joint training baseline in the first row significantly benefits from more data and shrinks the gap to the proposed progressive joint training model in the second row However it still even significantly worse than the selftransfer learning and sequence discriminative training based joint model trained in 50 hours data in Table 5 It again shows the disadvantages of large model complexity and insufficient model generalization discussed in Section Unsupervised Singlechannel Overlapped Speech Recognition ie compared with merely increasing data the better method to solve the problem is to improve the model generalization Besides the convergence speed of naive joint training model in the larger dataset is even slower namely 4 times more epochs versus the proposed method Comparing Table 6 with Table 5 the proposed selftransfer learning based joint training and multioutput sequence discriminative training show consistent relative improvement versus the progressive joint training Compared with Table 5 sequence discriminative training achieves larger relative improvement on the CEtrained teacher based system In both 50 hours and 150 hours corpus the proposed method achieves over 30 relative improvement respectively versus the PITASR system and the PITSSASR system The improvement comes from better model generalization training efficiency and the sequence level linguistic knowledge integration Although this paper addresses the case of simultaneous speech of two people talking at a relative level of 0dB we believe it will be straightforward to extend the system to handle more realistic conditions The case where one speaker is louder than the other has already been observed to be easier for PITstyle models than the 0dB data explored in this paper BIBREF13 For more than two speakers extension of the proposed system should follow the same construction described in BIBREF34 Finally we expect robustness to background noise and reverberation to come from standard techniques such as multicondition training BIBREF41 In this work we proposed to divide the single channel overlapped speech recognition 
problem into three subproblems: frame-wise interpreting, speaker tracing, and speech recognition. Each module is first optimized separately with specifically designed criteria, which significantly improves system generalization and training efficiency. After this initialization, the modules are jointly trained with two novel strategies: self-transfer learning and multi-output sequence discriminative training. Specifically, in the joint training stage, the clean speech model fine-tunes its parameters together with the other modules on overlapped speech so as to fit its own distribution on the simultaneous clean speech. Sequence discriminative training designed for multiple outputs is then applied to integrate linguistic and sequence information. The proposed framework achieves a 30% relative improvement over both a strong jointly trained system, PITASR, and a separately optimized system, PITSSASR. The framework also suggests promising directions for future improvement: i) integrating state-of-the-art technologies in each module in the initialization stage, e.g., DPCL BIBREF8; ii) applying other sequence-level criteria to improve the speaker tracing and speech recognition modules, i.e., connectionist temporal classification (CTC) BIBREF42; iii) explicit integration of a language model in the joint modeling, e.g., joint decoding BIBREF5 and end-to-end modeling BIBREF43. We thank Chris Basoglu and Frank Seide for their invaluable assistance with CNTK, and Mike Seltzer, Takuya Yoshioka, Hakan Erdogan, and Andreas Stolcke for many helpful conversations. The first author would like to further thank Jerry and Juncheng Gu for their support during the internship. Zhehuai Chen received his B.S. degree from the Department of Electronic and Information Engineering, Huazhong University of Science and Technology, China, in 2014. He is currently a Ph.D. candidate at Shanghai Jiao Tong University, working on speech recognition. His current research interests include speech recognition, speech synthesis, and deep learning. </s>
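As a brief illustration of the self-transfer learning criterion summarized in the conclusion of the preceding paper: during joint training, the outputs of the network on overlapped speech are pulled toward the posterior distribution that the clean-speech teacher produces on the corresponding simultaneous clean signals, interpolated with the usual hard-alignment cross-entropy (the experiments above use an interpolation weight of 0.5). The sketch below is a hypothetical PyTorch rendering for a single output stream, not the authors' CNTK implementation, and all names are placeholders.

import torch.nn.functional as F

def self_transfer_loss(student_logits, teacher_logits, hard_alignment, interp=0.5):
    # student_logits: [T, num_senones] from the joint model on overlapped speech
    # teacher_logits: [T, num_senones] from the clean-speech teacher on the clean source
    # hard_alignment: [T] senone labels from the single-speaker forced alignment
    soft = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")
    hard = F.cross_entropy(student_logits, hard_alignment)
    return interp * soft + (1.0 - interp) * hard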
<s>[INST] NIHRIO at SemEval-2018 Task 3: A Simple and Accurate Neural Network Model for Irony Detection in Twitter [/INST] Mining Twitter data has increasingly been attracting much research attention in many NLP applications such as in sentiment analysis BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 and stock market prediction BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 Recently Davidov2010 and Reyes2013 have shown that Twitter data includes a high volume of ironic tweets For example a user can use positive words in a Twitter message to her intended negative meaning eg It is awesome to go to bed at 3 am not This especially results in a research challenge to assign correct sentiment labels for ironic tweets BIBREF11 BIBREF12 BIBREF13 BIBREF14 BIBREF15 To handle that problem much attention has been focused on automatic irony detection in Twitter BIBREF16 BIBREF17 BIBREF18 BIBREF19 BIBREF13 BIBREF20 BIBREF21 BIBREF22 BIBREF23 BIBREF24 In this paper we propose a neural network model for irony detection in tweets Our model obtains the fifth best performances in both binary and multiclass irony detection subtasks in terms of INLINEFORM0 score BIBREF25 Details of the two subtasks can be found in the task description paper BIBREF25 We briefly describe the subtasks as follows The dataset consists of 4618 tweets 2222 ironic 2396 nonironic that are manually labelled by three students Some preprocessing steps were applied to the dataset such as the emoji icons in a tweet are replaced by a describing text using the Python emoji package Additionally all the ironic hashtags such as not sarcasm irony in the dataset have been removed This makes difficult to correctly predict the label of a tweet For example coreybking thanks for the spoiler not is an ironic tweet but without not it probably is a nonironic tweet The dataset is split into the training and test sets as detailed in Table TABREF5 Note that there is also an extended version of the training set which contains the ironic hashtags However we only use the training set which does not contain the ironic hashtags to train our model as it is in line with the test set We first describe our MLPbased model for ironic tweet detection in Section SECREF7 We then present the features used in our model in Section SECREF8 We propose to use the Multilayer Perceptron MLP model BIBREF28 to handle both the ironic tweet detection subtasks Figure FIGREF3 presents an overview of our model architecture including an input layer two hidden layers and a softmax output layer Given a tweet the input layer represents the tweet by a feature vector which concatenates lexical syntactic semantic and polarity feature representations The two hidden layers with ReLU activation function take the input feature vector to select the most important features which are then fed into the softmax layer for ironic detection and classification Table TABREF11 shows the number of lexical syntactic semantic and polarity features used in our model Our lexical features include 1 2 and 3grams in both word and character levels For each type of INLINEFORM0 grams we utilize only the top 1000 INLINEFORM1 grams based on the term frequencyinverse document frequency tfidf values That is each INLINEFORM2 gram appearing in a tweet becomes an entry in the feature vector with the corresponding feature value tfidf We also use the number of characters and the number of words as features We use the NLTK toolkit to tokenize and annotate partofspeech tags POS tags for all tweets in the dataset We then use all the 
POS tags with their corresponding tfidf values as our syntactic features and feature values respectively A major challenge when dealing with the tweet data is that the lexicon used in a tweet is informal and much different from tweet to tweet The lexical and syntactic features seem not to wellcapture that property To handle this problem we apply three approaches to compute tweet vector representations Firstly we employ 300dimensional pretrained word embeddings from GloVe BIBREF29 to compute a tweet embedding as the average of the embeddings of words in the tweet Secondly we apply the latent semantic indexing BIBREF30 to capture the underlying semantics of the dataset Here each tweet is represented as a vector of 100 dimensions Thirdly we also extract tweet representation by applying the Brown clustering algorithm BIBREF31 BIBREF32 a hierarchical clustering algorithm which groups the words with similar meaning and syntactical function together Applying the Brown clustering algorithm we obtain a set of clusters where each word belongs to only one cluster For example in Table TABREF13 words that indicate the members of a family eg mum dad or positive sentiment eg interesting awesome are grouped into the same cluster We run the algorithm with different number of clustering settings ie 80 100 120 to capture multiple semantic and syntactic aspects For each clustering setting we use the number of tweet words in each cluster as a feature After that for each tweet we concatenate the features from all the clustering settings to form a clusterbased tweet embedding Motivated by the verbal irony by means of polarity contrast such as I really love this years summer weeks and weeks of awful weather we use the number of polarity signals appearing in a tweet as the polarity features The signals include positive words eg love negative words eg awful positive emoji icon and negative emoji icon We use the sentiment dictionaries provided by BIBREF33 to identify positive and negative words in a tweet We further use boolean features that check whether or not a negation word is in a tweet eg not nt We use Tensorflow BIBREF34 to implement our model Model parameters are learned to minimize the the crossentropy loss with L INLINEFORM0 regularization Figure FIGREF16 shows our training mechanism In particular we follow a 10fold crossvalidation based voting strategy First we split the training set into 10 folds Each time we combine 9 folds to train a classification model and use the remaining fold to find the optimal hyperparameters Table TABREF18 shows optimal settings for each subtask In total we have 10 classification models to produce 10 predicted labels for each test tweet Then we use the voting technique to return the final predicted label The metrics used to evaluate our model include accuracy precision recall and F INLINEFORM0 The accuracy is calculated using all classes in both tasks The remainders are calculated using only the positive label in subtask 1 or per class label ie macroaveraged in subtask 2 Detail description of the metrics can be found in BIBREF25 Table TABREF21 shows our official results on the test set for subtask 1 with regards to the four metrics By using a simple MLP neural network architecture our system achieves a high performance which is ranked third and fifth out of fortyfour teams using accuracy and F INLINEFORM0 metrics respectively Table TABREF23 presents our results on the test set for subtask 2 Our system also achieves a high performance which is ranked third and fifth out of 
thirty-two teams using the accuracy and F1 metrics, respectively. We also show in Table TABREF24 the performance of our system on the different class labels. For the ironic classes, our system achieves the best performance on verbal irony by means of a polarity contrast, with an F1 of 60.73. Note that the performance on the situational class is not high. The reason is probably that the number of situational tweets in the training set is small (205/3834), i.e., not enough to learn a good classifier. Apart from the described MLP models, we have also tried other neural network models, such as Long Short-Term Memory (LSTM) BIBREF35 and the Convolutional Neural Network (CNN) for relation classification BIBREF36. We found that LSTM achieves much higher performance than MLP does on the extended training set containing the ironic hashtags (about 92 vs. 87 with 10-fold cross-validation using F1 on subtask 1). However, without the ironic hashtags, its performance is lower than MLP's. We also employed popular machine learning techniques such as SVM BIBREF37, Logistic Regression BIBREF38, and the Ridge Regression Classifier BIBREF39, but none of them produces results as good as MLP does. We have also implemented ensemble models such as voting, bagging, and stacking. We found that with the 10-fold cross-validation based voting strategy, our MLP models produce the best irony detection and classification results. We have presented our NIHRIO system for participating in SemEval-2018 Task 3 on Irony detection in English tweets. We proposed to use a Multilayer Perceptron to handle the task using various features, including lexical, syntactic, semantic, and polarity features. Our system was ranked the fifth best performing one with regard to the F1 score in both the binary and multi-class irony detection subtasks. This research is supported by the National Institute for Health Research (NIHR) Innovation Observatory at Newcastle University, United Kingdom. </s>
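To make the MLP described in the preceding paper concrete, here is a minimal sketch of a two-hidden-layer ReLU network with a softmax output, trained with cross-entropy and an L2 weight penalty (assuming the paper's "L" regularizer is L2). It uses the present-day tf.keras API rather than the TensorFlow version the authors used, and the hidden size, regularization weight, and optimizer are placeholders; the tuned hyperparameters are the per-subtask settings reported in the paper.

import tensorflow as tf

def build_irony_mlp(input_dim, num_classes, hidden=256, l2=1e-4):
    # input_dim: size of the concatenated lexical, syntactic, semantic and
    # polarity feature vector; num_classes: 2 for the binary subtask,
    # 4 for the multi-class subtask
    reg = tf.keras.regularizers.l2(l2)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(hidden, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(hidden, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

In the paper's setup, ten such models (one per cross-validation fold) would be trained and their predictions combined by voting.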
<s>[INST] What comes next? Extractive summarization by next-sentence prediction [/INST] Writing a summary is a different task compared to producing a longer article As a consequence it is likely that the topic and discourse moves made in summaries differ from those in regular articles In this work we present a powerful extractive summarization system which exploits rich summaryinternal structure to perform content selection redundancy reduction and even predict the target summary length all in one joint model Text summarization has been addressed by numerous techniques in the community BIBREF0 For extractive summarization which is the focus of this paper a popular task setup is to generate summaries that respect a fixed length limit In the summarization shared tasks of the past Document Understanding Conferences DUC these limits are defined in terms of words or bytes As a result much work has framed summarization as a constrained optimization problem in order to select a subset of sentences with desirable summary qualities such as informativeness coherence and nonredundancy within the length budget BIBREF1 BIBREF2 BIBREF3 One problem with this setup is that it does not match many realworld summarization settings For example writers can tailor the length of their summaries to the amount of noteworthy content in the source article Summaries created by news editors for archives such as the New York Times Annotated Corpus BIBREF4 exhibit a variety of lengths There is also evidence that in the context of web search people prefer summaries of different lengths for the documents in search results depending on the type of the search query BIBREF5 More generally current systems focus heavily on properties of the source document to learn to identify important sentences and score the coherence of sentence transitions They reason about the content of summaries primarily for purposes of avoiding redundancy and respecting the length budget But they ignore the idea that it might actually be useful to learn content structure and discourse planning for summaries from large collections of multisentence summaries This work proposes an extractive summarization system that focuses on capturing rich summaryinternal structure Our key idea is that since summaries in a domain often follow some predictable structure a partial summary or set of summary sentences should help predict other summary sentences We formalize this intuition in a model called NextSum which selects the next summary sentence based not only on properties of the source text but also on the previously selected sentences in the summary An example choice is shown in Table 1 This setup allows our model to capture summaryspecific discourse and topic transitions For example it can learn to expand on a topic that is already mentioned in the summary or to introduce a new topic It can learn to follow a script or discourse relations that are expected for that domains summaries It can even learn to predict the end of the summary avoiding the need to explicitly define a length cutoff The core of our system is a nextsentence prediction component which is a feedforward neural network driven by features capturing the prevalence of domain subtopics in the source and the summary sentence importance in the source and coverage of the source document by the summary so far A full summary can then be generated by repeatedly predicting the next sentence until the model predicts that the summary should end Since summaryspecific moves may depend on the domain we first 
explore domainspecific summarization on eventoriented news topics War Crimes Assassinations Bombs from the New York Times Annotated Corpus BIBREF4 We also train a domaingeneral model across multiple types of events NextSum predicts the next summary sentence with remarkably high accuracies reaching 67 compared to a chance accuracy of 9 The generated summaries outperform the lead baseline as well as domainspecific summarization baselines without requiring explicit redundancy check or a length constraint Moreover the system produces summaries of variable lengths which correlate with how long human summaries are for the same texts Many approaches to extractive summarization are unsupervised and focus on the role of word frequency and source document representation for selecting informative and nonredundant content BIBREF6 BIBREF7 BIBREF8 More recently supervised approaches are popular which view content selection as a sentencelevel binary classification problem typically using a neural network BIBREF9 BIBREF10 Using source structure Source structure is a common cue for summarization Relative word frequency and position of sentences are standardly used in many systems Discourse and graphbased summarization techniques explicitly focus on computing document structure BIBREF11 BIBREF12 BIBREF13 Other techniques include learning probabilistic topic models over source articles within a domain to capture subtopics and transitions between them BIBREF14 BIBREF15 BIBREF16 But the use of structure from summaries is less explored Using summary structure Actually almost all systems maintain some representation of the partial summary at a timestep At the very least it is needed for respecting a length limit and for preventing redundancy Even in recent neural network based extractive summarization a representation of the summary so far has been proposed to allow redundancy checks BIBREF10 However current methods do not focus on capturing rich summary discourse and content structure Recent abstractive neural summarization models based on encoderdecoder frameworks actually have greater scope for capturing summary structure and content The use of techniques such as attention and pointer mechanisms can be viewed as a form of summary structure modelling BIBREF17 BIBREF18 BIBREF19 BIBREF20 However because such systems currently operate at the word level these mechanisms are mostly used for handling issues such as grammaticality outofvocabulary items predicateargument structure and local coherence By contrast we aim to capture higherlevel transitions in the contents of a summary Nextsentence prediction The way we learn summary structure is by training a module for next summary sentence prediction A parallel idea can be found in the form of nextutterance prediction in retrievalbased dialogue systems BIBREF21 BIBREF22 BIBREF23 There have also been recent attempts at predicting the next sentence in text The skipthought model BIBREF24 is trained to predict a sentence from its neighbouring sentences to produce sentence representations CLSTM2016 and pichottaacl16 evaluate neural language models on nextsentence and event prediction In contrast we aim to predict the next output sentence within the tangible application of summarization We first present the key ideas and the next section explains how we implement the model NextSum comprises two components a nextsentence prediction system and a summary generation module The first is a supervised system trained to select the next summary sentence given a set of candidate 
sentences from the source and the summary generated so far NextSums generation component builds a summary by making repeated calls to the nextsentence predictor The nextsentence predictor is trained on a corpus of source articles and their goldstandard summaries written by humans In this work we focus on singledocument summarization Consider a source article X lbrace s1sMrbrace containing M sentences and a goldstandard extractive summary Gg1gN a sequence of N sentences Since G is extractive G subseteq X In NextSum summaries are created by adding one sentence at a time Let YTy1yT be the partial summary at timestep T YT has T sentences At time T1 the goal of NextSum is to score a set of candidate sentences from the source CT1 lbrace s1sKrbrace CT1 subseteq X and find the best next sentence to follow YT Let the goldstandard next sentence be gT1 The set CT1 may either be all of the source sentences which have not yet been included in the summary or be limited to a smaller size T0 For now assume that all the unselected source sentences are in the candidate set and thus T1 The model selects the next summary sentence from CT1 such that hatyT1 arg max si in CT1 PrsiXYTtheta When there is a tie the earlier sentence in the article is selected In this work PrsiXYTtheta is estimated by a neural network parameterized by theta Recall that the oracle next sentence gT1 is in CT1 Hence one approach to learn the parameters of PrsiXYTtheta is to frame it as a binary classification problem where the label for sentence gT1 in CT1 is 1 and 0 for all sw in CT1 where sw ne gT1 We implement this classifier using a feedforward neural network which takes the encoded representations of X YT and theta 0 and outputs the probability of label 1 theta 1 which we use as theta 2 The loss for the classification at timestep theta 3 is the binary crossentropy loss L log pgT1 sum si in CT1 si ne gT1 log 1psi One of the special features of NextSum is that we model the end of the summary within the same setup To do so we introduce a special sentence langle textrm EOSrangle End of Summary to mark the end of every goldstandard summary ie Gy1yNlangle textrm EOSrangle In the model langle textrm EOSrangle is included in candidate sets at every timestep This inclusion allows the model to learn to discriminate between selecting a sentence from the source versus ending the summary by picking the langle textrm EOSrangle marker Thus our candidate set is in fact Cprime T1 CT1 cup lbrace langle textrm EOSrangle rbrace After the next sentence prediction model is trained it can be used to generate a complete summary for a source article The model performs this task by iteratively predicting the next sentence until langle textrm EOSrangle is selected Note that unlike previous work the generation component is not given the target length of the summary To generate the full summary the model employs a greedy method that simply calls the nextsentence prediction module repeatedly until langle textrm EOSrangle is selected We also tried beam search decoding for a more globally optimal sequence of sentences but we found in preliminary experiments that this search did not improve our results We developed two versions of our system Previous methods of summary content selection assume a fixed length limit To compare against these systems in one version of our model NextSum L the length limit is provided as a constraint If after the model generates a summary sentence the word count exceeds the given length we stop generation and truncate the last sentence 
so the summary is within the length limit The second version NextSum is the full model which predicts the summary length Both systems have no access to the oracle partial summary and use their own previous decisions to construct the partial summary We evaluate all the summaries by comparing them with goldstandard abstracts using ROUGE BIBREF30 We use ROUGE2 Fscore as NextSum generates summaries of varied length In all these systems the target length of the summary is given as a constraint We set the length k to the average length in words of summaries in the training dataset for each domain Table 3 Lead takes the first k words from the source article For singledocument extractive summarization the lead is a very strong baseline which many systems fail to beat BIBREF31 CHMM is the approach used by barzilay04 for extractive summarization using content models CHMM computes an importance score for each topic v This score is a probability computed by 1 counting the articles in the training set where v appears in both the article and its summary 2 and normalizing by the number of articles containing v To generate a summary the model ranks the topics in order of decreasing importance and adds one sentence from the source for each topic breaks ties randomly if multiple sentences decoded into the same topic The generation stops upon reaching the length limit This method scores the summaryworthy nature of sentences based solely on their topic Transition is an iterative greedy approach based on the transition probability of topics from the content model It selects hatyT1 arg max sj in CT1Ptextrm topicsjtextrm topicyT at each timestep until the length limit is reached This baseline simulates a degenerate version of nextsentence prediction where the choice is based on a single feature at topic level ie the probability of transitioning from the topic of the last summary sentence to the topic of the candidate Like our model this baseline has no access to the oracle partial summary and uses its previous decisions for next sentence selection CHMMT is also an iterative greedy approach where the evaluation function is the product between topic transition probability Transition and topic importance CHMM Apart from the above domain baselines we also compare with two other types of summaries General is based on a recent competitive neural network based extractive system BIBREF9 This model is designed to be domaingeneral We trained it on the DailyMail dataset BIBREF28 containing around 200K articles and their highlights without using pretrained embeddings Our systems are not directly comparable because NextSum is trained on much less data but we show this result to give an idea of the performance of recent methods Oracle is the goldstandard extractive summary created from abstracts using the mapping method from Section Obtaining extractive summaries It represents an upper bound on the performance of any extractive summary Table 5 shows the ROUGE2 Fscore results for all the systems The baselines NextSum L oracle and general are fixed length summaries Among the baselines we see that the simple lead summary comprising the first k words of the source article is the strongest outperforming domaintrained content model systems in all the domains The oracle results however show that there is still considerable scope for the improvement of automatic systems performing sentence extraction The oracle extractive summary which was chosen to maximize similarity with the abstract gets close to double the ROUGE score of lead 
baseline in the crime domain Both NextSum L and NextSum outperform the lead with statistical significance in all cases except the bombs domain Importantly NextSum which does automatic length prediction outperforms NextSum L indicating that automatically tailoring summaries to different lengths is clearly of value In the next section we examine this length prediction ability in detail Comparing performance across domains the source articles in bombs domain are on average longer than the other domains refer Table 3 which could be a reason that content selection performance is lower here This domain also has longer gold standard summaries and the correlation between the length of human abstracts and source articles is also the lowest in this domain The domaingeneral system of chenglapata2016 is trained on a much larger general corpus of summaryarticle pairs While our results are not directly comparable we see that NextSums performance is competitive with current methods and since it is based on a new outlook and no explicit constraints it provides much scope for future improvements In this section we explain how we select the candidate set what features we use in the neural network for next sentence prediction and the design of the generation component Some source articles are very long which means that Cprime T1 can contain many candidate sentences if we take all of the unselected sentences as candidates In practice we limit the size of Cprime T1 in order to reduce the search space of the model which improves running time In the singledocument scenario the source text sentences are in a natural discourse and thus in a logical and temporal order Hence it is not unreasonable to assume that a good summary is a subsequence of the source Given this assumption suppose the last sentence chosen for the summary is sj at timestep T then we consider the K sentences in the source immediately following sj as the candidate set at time T1 During development we found that when K10 the goldstandard next summary sentence is in the candidate set 90 of the time and is present 80 of the time when using K 5 Based on this empirical support for the subsequence hypothesis we use K10 plus the end of summary marker for all the experiments in this paper for a total candidate set size of 11 For comparison a source article in our corpus has on average 33 sentences and the maximum is as high as 500 sentences During training when fewer than 10 sentences remain we randomly sample other sentences from the entire article to ensure having enough negative samples The model is trained on balanced dataset by downsampling and tested on the distribution where each candidate set has size 11 We have a source document Xlbrace s1sMrbrace with M sentences YTy1yT is a partial summary at time T and let s be a sentence or langle EOS rangle in the candidate set Cprime T1 NextSums next sentence prediction relies on computing PrsXYTtheta using a feedforward neural network with parameters theta This network learns from rich featurebased representations of M0 M1 M2 and their interactions Domain subtopics These features are based on topics induced from a large collection of documents in the same domain as the source article These topics are obtained using the contentmodel approach of barzilay04 The content model is a Hidden Markov Model HMM where the states correspond to topics and transitions between them indicate how likely it is for one topic to follow another The emission distribution from a state is a bigram language model indicating what 
lexical content is likely under that topic Each sentence in the article is emitted by one state ie one topic The probability of an article Ts1sN under a HMM with M states lbrace textrm topic1textrm topicMrbrace is given by sum textrm topic1textrm topicn prod i1N Ptextrm topicitextrm topici1Psitextrm topici Content models can be trained in an unsupervised fashion to maximize the log likelihood of the articles from the domain We choose the number of topics on a development set Once trained the model can compute the most likely state sequence for sentences in the source document and in the partial summary using Viterbi decoding Based on the predicted topics we compute a variety of features noitemsep the proportion of source sentences assigned to each topic the proportion of sentences in the partial summary assigned to each topic the most likely topic of the candidate s given by arg max i in textrm Topics Ptextrm topicis the emission probability of s from each topic the transition probability between the topic of the previous summary sentence yT and the topic of s Ptextrm topicstextrm topicyT a global estimation of observing the candidate s Ps sum i in textrm TopicsPstextrm topici Content We compute an encoding of source summary so far and the candidate sentence by averaging the pretrained word2vec embeddings BIBREF25 trained on Google News Corpus of each word in the span 900 features in total 300 each for the source summary so far and the candidate We also add features for the 1000 most frequent words in the training articles in order to encode their presence in s and in the sentence previous to s in the source article ie s1 Similarly for s and s1 we record the presence of each partofspeech tag and named entity We expect these features for s and s1 are useful for predicting langle textrm EOSrangle since the last sentence in a summary might contain some lexical cues Redundancy These features calculate the degree to which the candidate sentence overlaps with the summary so far They include textit sims yt for t T T1 T2 3 features where textit simpq is computed using cosine similarity between count vector representations of the words in s and yt We also include the number of overlapping nouns and verbs between s and YT 2 features Position The position of a sentence in the source document is an important indicator for content selection and is widely used in systems We indicate the position in the source of the last generated summary sentence yT as one of 5 bins the size of each bin depends on the length of the source article We also indicate the position of the candidate sentence and its distance to yT in the source normalized by the length of the source Length We include features for the length of the source both as number of sentences and number of words binned into 5 bins We also include the number of sentences and words in the summary so far The length measures for the partial summary are not binned Coverage These features compute how much of the source will be covered by the summary when a candidate sentence is added to it We use the KL divergence between the source and candidate summary when s is included in it DKL XVert YT cup lbrace srbrace where the distribution of X and YT are unigram language models Sentence importance We also indicate the individual importance of a candidate sentence The frequency of a word in the source is known to be a strong feature for importance BIBREF26 With this intuition we include the frac1ssum w in s textrm uniXw where w is a token in the candidate sentence 
and textrm uniXw is the unigram probability of w in the source X We also use a separate pretrained model of word importance This model feeds the context of a target word the two words before and two words after into a LSTM model which outputs the probability of the target word appearing in a summary The importance score of a sentence is then the average and maximum of the predicted scores of each word in the sentence This model is trained on the same training and development data sets We hypothesize that nextsentence prediction is more likely to be successful in eventoriented domains describing events as opposed to explanations and opinions Moreover summaryspecific moves may be more prominent and learnable from summaryarticle pairs within specific domains compared to a general corpus So we create three domainspecific and one domaingeneral dataset all focusing on events We use the New York Times Annotated Corpus NYtimes BIBREF4 since it provides topic metadata has thousands of articlesummary pairs on different topics and summaries are not written to set lengths We selected three topics War Crimes and Criminals crime Assassinations and Attempted Assassinations assassin and Bombs and Explosives bombs We also create a more general dataset mixed by randomly sampling from all the three domains We sample a similar number of articles across each domain and randomly split each domain into 80 training 10 development and 10 test data Table 2 shows the sizes of these datasets We use the Stanford CoreNLP toolkit BIBREF27 to tokenize segment sentences and assign part of speech tags to all the texts As previously mentioned summaries are often written to express the summaryworthy content of an article and not restricted to an arbitrary length This property can be seen in our data Table 3 The NYTimes summaries are abstractive in nature and range from a minimum of 2 words to as many as 278 words The last column of the table gives the Kendall Tau correlation corrected for ties between the length of the source and the summary There is a significant positive correlation implying that the length of the article is indicative of its information content This finding motivates us to include the length of the source article as a feature for next sentence prediction though we note that the source length by itself is not enough to determine the summary length without doing further analysis of the source content The summaries from NYTimes are abstractive in nature Our system is extractive and for training the next sentence selection from the source we need a mapping between the abstractive summary and the sentences in the source article Note that we create these extractive summaries only for training our model We will evaluate NextSums output by comparing with the abstractive human summaries as is standard practice We map each sentence in the abstract to the most similar sentence in the source article Let Aa1an be the sequence of sentences in the abstract For each ai we find yi arg max sj in Xcos aisj where X is the set of source sentences and cos pq is the cosine similarity between the word unigrams of p and q The sequence Yy1yn corresponding to Aa1an forms the gold standard extractive summary Since the extractive summary mirrors the sequence of content in the abstract the structure of the summary is preserved allowing our next sentence prediction system to be trained on the extractive sequence of sentences It is also for this reason that we do not use summarization datasets such as the CNNDaily Mail corpus BIBREF28 
where summaries are threesentence highlights and do not have any discernible discourse structure as a whole We first evaluate our model intrinsically on the nextsentence prediction task then test its performance on the full summary generation problem Here the goal is to select the best sentence to follow the partial summary from a candidate set of 11 options see Section Data For evaluating this part of our system we assume that we have oracle partial summaries ie the partial summary at timestep T YTy1yT is the same as the gold summary sequence up to time T The question is how well we can predict the next sentence in this sequence from the candidate set Cprime T1 The correct answer is the sentence in the gold standard at position T1 The prediction at each timestep is a separate classification example Recall that we framed the machine learning problem as one of binary classification We thus present two sets of results a on the binary task and b on the final choice of one sentence from the candidate set among the 11 candidates In task a the binary evaluation the model discriminates among the 2 classes by thresholding at ps05 The best setting has 4 hidden layers each layer comprising between 500 to 1500 neurons We trained the model by backpropagation using the Adam optimizer BIBREF29 for up to 75 epochs Hyperparameters were tuned on the development set The choice of a final sentence Task b is made by picking the candidate sentence with highest ps Table 4 shows the accuracy on binary classification task and 1of11 task on the different domains In the 1of11 task the expected chancelevel accuracy is roughly 91 since we force every candidate set to have size 11 Our nextutterance prediction systems accuracy is between 60 to 67 on the different domains showing that there are distinctive clues on summary internal structure and content which can be learned by a model Note also that the accuracy numbers are consistent across all domains and the mixed case indicating that the patterns are fairly domaingeneral within eventoriented documents These evaluations are somewhat idealistic in that the model has access to oracle partial summaries during prediction We next evaluate NextSum on the full summarization task NextSum requires neither redundancy removal nor length constraints In this section we show that our system produces summaries of varied lengths which correlate with the lengths of humanwritten summaries of the same source article Figure 1 shows the distribution of the length in words of NextSum summaries all domains put together The generated lengths vary greatly and span the average range covered by the summaries in the training data The majority of lengths are in the 30 to 50 words limit Hence NextSum is specializing summary lengths to cover a wide range Next we measure how well these summary lengths correlate with the lengths of the humanwritten abstracts Table 6 shows the Kendall Tau correlation corrected for ties between length in words of the NextSum summary and the length of the abstract for the same source NextSums summary lengths correlate fairly well with those of the abstracts leading to significant numbers in all the domains and the mixed case Again the length prediction is worse on the Bombs domain compared to the rest Overall this result shows promise that we can develop summarization systems which automatically tailor their content based on properties of the source In this work we have presented the first summarization system which integrates content selection summary length 
prediction, and redundancy removal. Central to this system is the use of a next-sentence prediction system, which learns summary-internal discourse transitions. We show that NextSum outperforms a number of baselines on ROUGE-2 F-scores even when the summary length is not provided to the system. Furthermore, the lengths of the predicted summaries correlate positively with the lengths of human-written abstracts, indicating that our method implicitly captures some aspect of how much summary-worthy content is present in the source article. In future work, we plan to investigate whether this approach also leads to more coherent summaries. This issue will be especially important in the multi-document setting, which we would also like to investigate using an extension of our model. </s>
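To illustrate the generation procedure NextSum uses (described in the preceding paper: repeatedly score the K source sentences that follow the last selected sentence, plus an end-of-summary marker, pick the argmax, and stop when the marker wins), here is a minimal sketch. The function score_fn stands in for the trained feed-forward next-sentence predictor and is an assumption of this sketch rather than part of the paper.

EOS = "<EOS>"  # end-of-summary marker appended to every candidate set

def generate_summary(source_sents, score_fn, k=10):
    # source_sents: sentences of the article in document order
    # score_fn(candidate, partial_summary, source_sents) -> score from the
    # trained next-sentence predictor
    summary, last = [], -1
    while True:
        candidates = source_sents[last + 1:last + 1 + k] + [EOS]
        # max() keeps the earliest candidate on ties, matching the paper's tie-break
        best = max(candidates, key=lambda c: score_fn(c, summary, source_sents))
        if best == EOS:
            break
        summary.append(best)
        last = source_sents.index(best, last + 1)
    return summary

A length-constrained variant of this loop would simply truncate once the word count exceeds the given budget, as in the NextSum L condition of the experiments.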
<s>[INST] Translation of Patent Sentences with a Large Vocabulary of Technical Terms Using Neural Machine Translation [/INST] Neural machine translation NMT a new approach to solving machine translation has achieved promising results BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 An NMT system builds a simple large neural network that reads the entire input source sentence and generates an output translation The entire neural network is jointly trained to maximize the conditional probability of a correct translation of a source sentence with a bilingual corpus Although NMT offers many advantages over traditional phrasebased approaches such as a small memory footprint and simple decoder implementation conventional NMT is limited when it comes to larger vocabularies This is because the training complexity and decoding complexity proportionally increase with the number of target words Words that are out of vocabulary are represented by a single unknown token in translations as illustrated in Figure 1 The problem becomes more serious when translating patent documents which contain several newly introduced technical terms There have been a number of related studies that address the vocabulary limitation of NMT systems Jean el al Jean15 provided an efficient approximation to the softmax to accommodate a very large vocabulary in an NMT system Luong et al Luong15 proposed annotating the occurrences of a target unknown word token with positional information to track its alignments after which they replace the tokens with their translations using simple word dictionary lookup or identity copy Li et al Li16 proposed to replace outofvocabulary words with similar invocabulary words based on a similarity model learnt from monolingual data Sennrich et al Sennrich16 introduced an effective approach based on encoding rare and unknown words as sequences of subword units Luong and Manning Luong16 provided a characterlevel and wordlevel hybrid NMT model to achieve an open vocabulary and Costajuss and Fonollosa Jussa16 proposed a NMT system based on characterbased embeddings However these previous approaches have limitations when translating patent sentences This is because their methods only focus on addressing the problem of unknown words even though the words are parts of technical terms It is obvious that a technical term should be considered as one word that comprises components that always have different meanings and translations when they are used alone An example is shown in Figure 1 wherein Japanese word bridge should be translated to Chinese word when included in technical term bridge interface however it is always translated as In this paper we propose a method that enables NMT to translate patent sentences with a large vocabulary of technical terms We use an NMT model similar to that used by Sutskever et al Sutskever14 which uses a deep long shortterm memories LSTM BIBREF7 to encode the input sentence and a separate deep LSTM to output the translation We train the NMT model on a bilingual corpus in which the technical terms are replaced with technical term tokens this allows it to translate most of the source sentences except technical terms Similar to Sutskever et al Sutskever14 we use it as a decoder to translate source sentences with technical term tokens and replace the tokens with technical term translations using statistical machine translation SMT We also use it to rerank the 1000best SMT translations on the basis of the average of the SMT and NMT scores of the translated sentences 
that have been rescored with the technical term tokens Our experiments on JapaneseChinese patent sentences show that our proposed NMT system achieves a substantial improvement of up to 31 BLEU points and 23 RIBES points over a traditional SMT system and an improvement of approximately 06 BLEU points and 08 RIBES points over an equivalent NMT system without our proposed technique JapaneseChinese parallel patent documents were collected from the Japanese patent documents published by the Japanese Patent Office JPO during 20042012 and the Chinese patent documents published by the State Intellectual Property Office of the Peoples Republic of China SIPO during 20052010 From the collected documents we extracted 312492 patent families and the method of Utiyama and Isahara Uchiyama07bs was applied to the text of the extracted patent families to align the Japanese and Chinese sentences The Japanese sentences were segmented into a sequence of morphemes using the Japanese morphological analyzer MeCab with the morpheme lexicon IPAdic and the Chinese sentences were segmented into a sequence of words using the Chinese morphological analyzer Stanford Word Segment BIBREF8 trained using the Chinese Penn Treebank In this study JapaneseChinese parallel patent sentence pairs were ordered in descending order of sentencealignment score and we used the topmost 28M pairs whose Japanese sentences contain fewer than 40 morphemes and Chinese sentences contain fewer than 40 words NMT uses a single neural network trained jointly to maximize the translation performance BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF5 Given a source sentence $\mathbf{x} = x_1, \ldots, x_N$ and target sentence $\mathbf{y} = y_1, \ldots, y_M$ an NMT system uses a neural network to parameterize the conditional distributions $p(y_l \mid y_{<l}, \mathbf{x})$ (Eq 6) for $1 \le l \le M$ Consequently it becomes possible to compute and maximize the log probability of the target sentence given the source sentence $\log p(\mathbf{y} \mid \mathbf{x}) = \sum_{l=1}^{M} \log p(y_l \mid y_{<l}, \mathbf{x})$ (Eq 7) In this paper we use an NMT model similar to that used by Sutskever et al Sutskever14 It uses two separate deep LSTMs to encode the input sequence and output the translation The encoder which is implemented as a recurrent neural network reads the source sentence one word at a time and then encodes it into a large vector that represents the entire source sentence The decoder another recurrent neural network generates a translation on the basis of the encoded vector one word at a time One important difference between our NMT model and the one used by Sutskever et al Sutskever14 is that we added an attention mechanism Recently Bahdanau et al Bahdanau15 proposed an attention mechanism a form of random access memory to help NMT cope with long input sequences Luong et al Luong15b proposed an attention mechanism for different scoring functions in order to compare the source and target hidden states as well as different strategies for placing the attention In this paper we utilize the attention mechanism proposed by Bahdanau et al Bahdanau15 wherein each output target word is predicted on the basis of not only a recurrent hidden state and the previously predicted word but also a context vector computed as the weighted sum of the hidden states Figure 2 illustrates the procedure of the training model with parallel patent sentence pairs wherein technical terms are replaced with technical term tokens TT1 TT2 ... In the step 1 of Figure 2 we align the Japanese technical terms which are automatically extracted from the Japanese sentences with their Chinese
translations in the Chinese sentences Here we introduce the following two steps to identify technical term pairs in the bilingual JapaneseChinese corpus According to the approach proposed by Dong et al Dong15b we identify JapaneseChinese technical term pairs using an SMT phrase translation table Given a parallel sentence pair $\langle S_J, S_C \rangle$ containing a Japanese technical term $t_J$ the Chinese translation candidates collected from the phrase translation table are matched against the Chinese sentence $S_C$ of the parallel sentence pair Of those found in $S_C$ the candidate $t_C$ with the largest translation probability $P(t_C \mid t_J)$ is selected and the bilingual technical term pair $\langle t_J, t_C \rangle$ is identified For the Japanese technical terms whose Chinese translations are not included in the results of the first step we then use an approach based on SMT word alignment Given a parallel sentence pair $\langle S_J, S_C \rangle$ containing a Japanese technical term $t_J$ a sequence of Chinese words is selected using SMT word alignment and we use it as the Chinese translation $t_C$ for the Japanese technical term $t_J$ As shown in the step 2 of Figure 2 in each of the JapaneseChinese parallel patent sentence pairs occurrences of technical term pairs $\langle t_{J_1}, t_{C_1} \rangle, \langle t_{J_2}, t_{C_2} \rangle, \ldots, \langle t_{J_k}, t_{C_k} \rangle$ are then replaced with technical term tokens $\langle TT_1, TT_1 \rangle, \langle TT_2, TT_2 \rangle, \ldots, \langle TT_k, TT_k \rangle$ Technical term pairs $\langle t_{J_1}, t_{C_1} \rangle, \ldots, \langle t_{J_k}, t_{C_k} \rangle$ are numbered in the order of occurrence of the Japanese technical terms $t_{J_1}, \ldots, t_{J_k}$ in each Japanese sentence $S_J$ Here note that in all the parallel sentence pairs $\langle S_J, S_C \rangle$ technical term tokens $TT_1, TT_2, \ldots$ that are identical throughout all the parallel sentence pairs are used in this procedure Therefore for example in all the Japanese patent sentences $S_J$ the Japanese technical term $t_{J_1}$ which appears earlier than other Japanese technical terms in $S_J$ is replaced with $TT_1$ We then train the NMT system on a bilingual corpus in which the technical term pairs are replaced by $TT_i$ ($i = 1, 2, \ldots$) tokens and obtain an NMT model in which the technical terms are represented as technical term tokens Figure 3 illustrates the procedure for producing Chinese translations via decoding the Japanese sentence using the method proposed in this paper In the step 1 of Figure 3 when given an input Japanese sentence we first automatically extract the technical terms and replace them with the technical term tokens $TT_i$ ($i = 1, 2, \ldots$) Consequently we have an input sentence in which the technical term tokens $TT_i$ ($i = 1, 2, \ldots$) represent the positions of the technical terms and a list of extracted Japanese technical terms Next as shown in the step 2N of Figure 3 the source Japanese sentence with technical term tokens is translated using the NMT model trained according to the procedure described in Section NMT Training after Replacing Technical Term Pairs with Tokens whereas the extracted Japanese technical terms are translated using an SMT phrase translation table in the step 2S of Figure 3 Finally in the step 3 we replace the technical term tokens $TT_i$ ($i = 1, 2, \ldots$) of the sentence translation with the technical term translations obtained by SMT As shown in the step 1 of Figure 4 similar to the approach of NMT rescoring provided in Sutskever et al Sutskever14 we first obtain the 1000best translation list of the given Japanese sentence using
the SMT system Next in the step 2 we then replace the technical terms in the translation sentences with technical term tokens TTi i 123ldots which must be the same with the tokens of their source Japanese technical terms in the input Japanese sentence The technique used for aligning Japanese technical terms with their Chinese translations is the same as that described in Section NMT Training after Replacing Technical Term Pairs with Tokens In the step 3 of Figure 4 the 1000best translations in which technical terms are represented as tokens are rescored using the NMT model trained according to the procedure described in Section NMT Training after Replacing Technical Term Pairs with Tokens Given a Japanese sentence SJ and its 1000best Chinese translations SC n n12ldots 1000 translated by the SMT system NMT score of each translation sentence pair langle SJ SCn rangle is computed as the log probability log pSCn mid SJ of Equation 7 Finally we rerank the 1000best translation list on the basis of the average SMT and NMT scores and output the translation with the highest final score We evaluated the effectiveness of the proposed NMT system in translating the JapaneseChinese parallel patent sentences described in Section JapaneseChinese Patent Documents Among the 28M parallel sentence pairs we randomly extracted 1000 sentence pairs for the test set and 1000 sentence pairs for the development set the remaining sentence pairs were used for the training set According to the procedure of Section NMT Training after Replacing Technical Term Pairs with Tokens from the JapaneseChinese sentence pairs of the training set we collected 65M occurrences of technical term pairs which are 13M types of technical term pairs with 800K unique types of Japanese technical terms and 10M unique types of Chinese technical terms Out of the total 65M occurrences of technical term pairs 62M were replaced with technical term tokens using the phrase translation table while the remaining 300K were replaced with technical term tokens using the word alignment We limited both the Japanese vocabulary the source language and the Chinese vocabulary the target language to 40K most frequently used words Within the total 1000 Japanese patent sentences in the test set 2244 occurrences of Japanese technical terms were identified which correspond to 1857 types For the training of the SMT model including the word alignment and the phrase translation table we used Moses BIBREF9 a toolkit for a phrasebased SMT models For the training of the NMT model our training procedure and hyperparameter choices were similar to those of Sutskever et al Sutskever14 We used a deep LSTM neural network comprising three layers with 512 cells in each layer and a 512dimensional word embedding Similar to Sutskever et al 2014 we reversed the words in the source sentences and ensure that all sentences in a minibatch are roughly the same length Further training details are given below All of the LSTMs parameter were initialized with a uniform distribution ranging between 006 and 006 We set the size of a minibatch to 128 We used the stochastic gradient descent beginning at a learning rate of 05 We computed the perplexity of the development set using the currently produced NMT model after every 1500 minibatches were trained and multiplied the learning rate by 099 when the perplexity did not decrease with respect to the last three perplexities We trained our model for a total of 10 epoches Similar to Sutskever et al Sutskever14 we rescaled the normalized gradient to 
ensure that its norm does not exceed 5 We implement the NMT system using TensorFlow an open source library for numerical computation The training time was around two days when using the described parameters on an 1GPU machine We calculated automatic evaluation scores for the translation results using two popular metrics BLEU BIBREF10 and RIBES BIBREF11 As shown in Table 1 we report the evaluation scores on the basis of the translations by Moses BIBREF9 as the baseline SMT and the scores based on translations produced by the equivalent NMT system without our proposed approach as the baseline NMT As shown in Table 1 the two versions of the proposed NMT systems clearly improve the translation quality when compared with the baselines When compared with the baseline SMT the performance gain of the proposed system is approximately 31 BLEU points if translations are produced by the proposed NMT system of Section NMT Rescoring of 1000best SMT Translations or 23 RIBES points if translations are produced by the proposed NMT system of Section NMT Decoding and SMT Technical Term Translation When compared with the result of decoding with the baseline NMT the proposed NMT system of Section NMT Decoding and SMT Technical Term Translation achieved performance gains of 08 RIBES points When compared with the result of reranking with the baseline NMT the proposed NMT system of Section NMT Rescoring of 1000best SMT Translations can still achieve performance gains of 06 BLEU points Moreover when the output translations produced by NMT decoding and SMT technical term translation described in Section NMT Decoding and SMT Technical Term Translation with the output translations produced by decoding with the baseline NMT the number of unknown tokens included in output translations reduced from 191 to 92 About 90 of remaining unknown tokens correspond to numbers English words abbreviations and symbols In this study we also conducted two types of human evaluation according to the work of Nakazawa et al Nakazawa15 pairwise evaluation and JPO adequacy evaluation During the procedure of pairwise evaluation we compare each of translations produced by the baseline SMT with that produced by the two versions of the proposed NMT systems and judge which translation is better or whether they are with comparable quality The score of pairwise evaluation is defined by the following formula where W is the number of better translations compared to the baseline SMT L the number of worse translations compared to the baseline SMT and T the number of translations having their quality comparable to those produced by the baseline SMT score100 times fracWLWLT nonumber Eq 34 The score of pairwise evaluation ranges from 100 to 100 In the JPO adequacy evaluation Chinese translations are evaluated according to the quality evaluation criterion for translated patent documents proposed by the Japanese Patent Office JPO The JPO adequacy criterion judges whether or not the technical factors and their relationships included in Japanese patent sentences are correctly translated into Chinese and score Chinese translations on the basis of the percentage of correctly translated information where the score of 5 means all of those information are translated correctly while that of 1 means most of those information are not translated correctly The score of the JPO adequacy evaluation is defined as the average over the whole test sentences Unlike the study conducted Nakazawa et al BIBREF12 we randomly selected 200 sentence pairs from the test set for human 
evaluation and both human evaluations were conducted using only one judgement Table 2 shows the results of the human evaluation for the baseline SMT the baseline NMT and the proposed NMT system We observed that the proposed system achieved the best performance for both pairwise evaluation and JPO adequacy evaluation when we replaced technical term tokens with SMT technical term translations after decoding the source sentence with technical term tokens Throughout Figure 5 sim Figure 7 we show an identical source Japanese sentence and each of its translations produced by the two versions of the proposed NMT systems compared with translations produced by the three baselines respectively Figure 5 shows an example of correct translation produced by the proposed system in comparison to that produced by the baseline SMT In this example our model correctly translates the Japanese sentence into Chinese whereas the translation by the baseline SMT is a translation error with several erroneous syntactic structures As shown in Figure 6 the second example highlights that the proposed NMT system of Section NMT Decoding and SMT Technical Term Translation can correctly translate the Japanese technical term laminated wafer to the Chinese technical term The translation by the baseline NMT is a translation error because of not only the erroneously translated unknown token but also the Chinese word which is not appropriate as a component of a Chinese technical term Another example is shown in Figure 7 where we compare the translation of a reranking SMT 1000best translation produced by the proposed NMT system with that produced by reranking with the baseline NMT It is interesting to observe that compared with the baseline NMT we obtain a better translation when we rerank the 1000best SMT translations using the proposed NMT system in which technical term tokens represent technical terms It is mainly because the correct Chinese translation wafter of Japanese word is out of the 40K NMT vocabulary Chinese causing reranking with the baseline NMT to produce the translation with an erroneous construction of noun phrase of noun phrase of noun phrase As shown in Figure 7 the proposed NMT system of Section NMT Rescoring of 1000best SMT Translations produced the translation with a correct construction mainly because Chinese word wafter is a part of Chinese technical term laminated wafter and is replaced with a technical term token and then rescored by the NMT model with technical term tokens TT1 TT2 ldots In this paper we proposed an NMT method capable of translating patent sentences with a large vocabulary of technical terms We trained an NMT system on a bilingual corpus wherein technical terms are replaced with technical term tokens this allows it to translate most of the source sentences except the technical terms Similar to Sutskever et al Sutskever14 we used it as a decoder to translate the source sentences with technical term tokens and replace the tokens with technical terms translated using SMT We also used it to rerank the 1000best SMT translations on the basis of the average of the SMT score and that of NMT rescoring of translated sentences with technical term tokens For the translation of Japanese patent sentences we observed that our proposed NMT system performs better than the phrasebased SMT system as well as the equivalent NMT system without our proposed approach One of our important future works is to evaluate our proposed method in the NMT system proposed by Bahdanau et al Bahdanau15 which introduced a 
bidirectional recurrent neural network as encoder and is the stateoftheart of pure NMT system recently However the NMT system proposed by Bahdanau et al Bahdanau15 also has a limitation in addressing outofvocabulary words Our proposed NMT system is expected to improve the translation performance of patent sentences by applying approach of Bahdanau et al Bahdanau15 Another important future work is to quantitatively compare our study with the work of Luong et al Luong15 In the work of Luong et al Luong15 they replace low frequency single words and translate them in a postprocessing Step using a dictionary while we propose to replace the whole technical terms and posttranslate them with phrase translation table of SMT system Therefore our proposed NMT system is expected to be appropriate to translate patent documents which contain many technical terms comprised of multiple words and should be translated together We will also evaluate the present study by reranking the nbest translations produced by the proposed NMT system on the basis of their SMT rescoring Next we will rerank translations from both the nbest SMT translations and nbest NMT translations As shown in Section Evaluation Results the decoding approach of our proposed NMT system achieved the best RIBES performance and human evaluation scores in our experiments whereas the reranking approach achieved the best performance with respect to BLEU A translation with the highest average SMT and NMT scores of the nbest translations produced by NMT and SMT respectively is expected to be an effective translation </s>
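The following minimal Python sketch illustrates the token replacement scheme described in the patent-translation row above; the helper names and the toy term pair are hypothetical, and it assumes the Japanese-Chinese term pairs have already been identified via the phrase translation table or word alignment. Terms are masked with indexed TT tokens before NMT training or decoding, and the tokens are swapped back for their SMT translations afterwards.

# Minimal sketch of the technical-term token scheme (hypothetical helpers).
# replace_terms: swap each identified term for an indexed token "TT1", "TT2", ...
# restore_terms: put the SMT translations of those terms back into the NMT output.

def replace_terms(tokens, term_pairs):
    """tokens: list of source words; term_pairs: list of (source_term, target_term),
    already ordered by first occurrence in the sentence."""
    out, mapping = [], {}
    i = 0
    while i < len(tokens):
        for idx, (src, tgt) in enumerate(term_pairs, start=1):
            src_toks = src.split()
            if tokens[i:i + len(src_toks)] == src_toks:
                token = f"TT{idx}"
                out.append(token)
                mapping[token] = tgt          # remember the SMT translation of this term
                i += len(src_toks)
                break
        else:
            out.append(tokens[i])
            i += 1
    return out, mapping

def restore_terms(translated_tokens, mapping):
    return [mapping.get(tok, tok) for tok in translated_tokens]

# Usage with placeholder strings (not real Chinese translations):
src = "the bridge interface connects the module".split()
masked, mapping = replace_terms(src, [("bridge interface", "BRIDGE_INTERFACE_ZH")])
print(masked)                                   # ['the', 'TT1', 'connects', 'the', 'module']
print(restore_terms("TT1 CONNECTS_ZH MODULE_ZH".split(), mapping))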
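A sketch of the rescoring step for the 1000-best SMT translations is given below; nmt_logprob stands for the token-trained NMT model's log probability of a masked candidate given the masked source, and averaging it directly with the SMT model score assumes the two scores are on comparable scales, which is a simplification of the procedure described above.

# Rerank SMT n-best candidates by the average of the SMT score and the NMT score
# computed on candidates whose technical terms have been masked as TT tokens.

def rerank(source_masked, candidates, nmt_logprob):
    """candidates: list of (candidate_masked_tokens, smt_score) pairs."""
    rescored = []
    for cand, smt_score in candidates:
        nmt_score = nmt_logprob(source_masked, cand)
        final = 0.5 * (smt_score + nmt_score)   # average of SMT and NMT scores
        rescored.append((final, cand))
    rescored.sort(key=lambda x: x[0], reverse=True)
    return rescored[0][1]                        # best candidate after reranking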
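The pairwise human-evaluation score defined above translates directly into a small helper, shown here for concreteness with example counts that are not taken from the paper.

def pairwise_score(wins, losses, ties):
    """score = 100 * (W - L) / (W + L + T); ranges from -100 to +100."""
    return 100.0 * (wins - losses) / (wins + losses + ties)

print(pairwise_score(90, 50, 60))   # example counts -> 20.0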
<s>[INST] Forex trading and Twitter: Spam, bots, and reputation manipulation [/INST] Currency trading Forex is the largest world market in terms of volume We analyze trading and tweeting about the EURUSD currency pair over a period of three years First a large number of tweets were manually labeled and a Twitter stance classification model is constructed The model then classifies all the tweets by the trading stance signal buy hold or sell EUR vs USD The Twitter stance is compared to the actual currency rates by applying the event study methodology wellknown in financial economics It turns out that there are large differences in Twitter stance distribution and potential trading returns between the four groups of Twitter users trading robots spammers trading companies and individual traders Additionally we observe attempts of reputation manipulation by post festum removal of tweets with poor predictions and deletingreposting of identical tweets to increase the visibility without tainting ones Twitter timeline Foreign exchange market Forex is a global decentralized market for trading with currencies The daily trading volume exceeds 5 trillion USD thus making it the largest market in the world In this paper we analyze three sources of data over a period of three years from January 2014 to December 2016 BIBREF0 We focus on potential missinformation spreading and manipulations on Twitter The main issue is What is the ground truth We address this problem by moving out of the social network system and by observing another financial market system Actual financial gains in the market provide clues to potential manipulations in the social network We relate both systems by applying and adapting the event study methodology BIBREF1 The currency announcements are events which are expected to influence the EURUSD exchange rate If the event signal buy hold or sell is properly recognized then some actual financial returns can be made in the hours or days after the event In contrast to classical event studies we categorize events on the basis of sentiment properly called stance of relevant Twitter users In our previous work we already analyzed the effects of Twitter stance on stock prices 30 stocks from the Dow Jones index BIBREF2 BIBREF3 We showed that the peaks of Twitter activity and their polarity are significantly correlated with stock returns In this paper we show that for certain classes of Twitter users returns after the events are statistically significant albeit small And we can also identify differences in returns after the potential manipulations of Twitter feed The paper is organized as follows In section SECREF2 we specify how the Forex tweets were collected a subset manually annotated and a stance classification model constructed Section SECREF3 provides simple rules to identify different classes of Twitter users such as trading robots spammers and actual traders We show that there are large differences in Twitter stance between these users Section SECREF4 describes the event study methodology in some detail as needed to understand the subsequent results We show significant differences in cumulative abnormal returns between the different user groups In section SECREF5 we address potential manipulations of the user Twitter feed with a tentative goal to improve herhis reputation and visibility We focus on the tweets that were deleted after we originally collected them and analyze different reasons for this post festum deletions We conclude with the ideas for further work and enhancements of the 
preliminary but promising results presented so far Tweets related to Forex specifically to EUR and USD were acquired through the Twitter search API with the following query EURUSD USDEUR EUR or USD In the period of three years January 2014 to December 2016 almost 15 million tweets were collected A subset of them 44000 tweets was manually labeled by knowledgeable students of finance The label captures the leaning or stance of the Twitter user with respect to the anticipated move of one currency wrt the other The stance is represented by three values buy EUR vs USD hold or sell The tweets were collected labeled and provided to us by the Sowa Labs company httpwwwsowalabscom The labeled tweets were generalized into a Twitter stance model For supervised learning variants of SVM BIBREF4 are often used because they are well suited for large scale text categorization are robust and perform well For Forex tweets we constructed a two plane SVM classifier BIBREF5 BIBREF6 The two plane SVM assumes the ordering of stance values and implements ordinal classification It consists of two SVM classifiers One classifier is trained to separate the buy tweets from the holdorsell tweets the other separates the sell tweets from the buyorhold tweets The result is a classifier with two hyperplanes that partitions the vector space into three subspaces buy hold or sell During classification the distances from both hyperplanes determine the predicted stance value The stance classifier was evaluated by 10fold blocked crossvalidation Since tweets are timeordered they should not be randomly selected into individual folds but retained in blocks of consecutive tweets BIBREF7 The results of performance evaluation are in Table TABREF5 Note that the F INLINEFORM0 measure considers just the buy and sell classes as is common in the threevalued sentiment classification evaluations BIBREF5 Different types of Twitter users have very different intentions regarding their impact and message they want to spread In recent years specially automated robots became increasingly influential To properly estimate the relation between the Forex market and tweetosphere it is important to focus on relevant Twitter users ie Forex trading companies and individual traders In related work it was already shown that bots exercise a profound impact on content popularity and activity on Twitter For example Gilani et al BIBREF8 implemented a simple bot detection mechanism based on click frequency and user agent strings To classify users into three categories organizations journalistsmedia bloggers and individuals De Choudhury et al BIBREF9 trained an automatic classifier An alternative approach is to detect communities in a retweet network eg BIBREF10 BIBREF11 It turns out that it is easy to identify Forex trading robots Their tweets INLINEFORM0 all start with one of the eighth patterns such as Closed Buy Sell stop The Forex Twitter users can then be classified into one of the four groups by the following simple rules where INLINEFORM0 indicates the daily activity of the user and INLINEFORM1 is the proportion of the user tweets that were retweeted by others Figure FIGREF10 shows the proportions of different Twitter user groups and their tweets in our dataset We can see that more than half of the users are individuals but that the trading robots produce by far the largest fraction of Forex tweets There are also considerable differences in the stance between different user groups Figure FIGREF11 shows that trading robots produce almost exclusively 
polarized tweets no hold tweets On the other hand spammers without robots are predominantly neutral relatively few buy or sell tweets The groups we focus on trading companies and individuals are more opinionated than spammers It is interesting that in their tweets the sell signal is prevailing probably due to the downward trend of EUR vs USD in the last three years An event study captures the impact of external events on the market returns External events that we consider here are the currency related announcements by the central banks FED and ECB and governments around 750 in the three years In an event study Cumulative Abnormal Return CAR is defined as a measure of return which exceed the overall market return Specifically The other essential component of an event study is determining the type of event in terms of its expected impact on the price In stock market typically Earnings Announcements are studied If an announcement exceeds prior expectations of analysts it is classified as positive and stock prices are expected to rise An event study combines announcements about several stocks over longer period of time and computes the average CARs in the days or hours after the announcements In our case we do not consider expectation of the analysts but instead use the stance of the Forex Twitter users regarding the EUR vs USD exchange rate We consider all tweets in one hour after the announcement and aggregate their stance to categorize the event Then we compute the CARs for up to one day after the event at one minute resolution If Twitter stance correctly predicts the exchange rate movement then there should be some tangible returns CARs in the hours after the event Figure FIGREF15 shows returns aggregated over all 750 events for different Twitter user groups The expected result is visible for trading companies bottomleft chart For buy events we buy EUR at time 0 CARs are positive return is around 01 small but significant for sell events we sell EUR at time 0 CARs are negative and for hold events no transaction CARs are around zero Similar results are obtained for individual traders bottomright chart but the separation of events is not as clear as for trading companies On the other hand trading robots and spam users top two charts in Figure FIGREF15 show no useful correlation between the Twitter stance and CARs As a consequence we conclude that it is important to properly identify them and eliminate their tweets from any trading strategy based on Twitter Here we focus on another aspect of Twitter misuse for potential manipulation post festum deletion of tweets by the Twitter user What are the reasons for users to delete their tweets Previous research addressed prediction of malicious or deleted tweets BIBREF12 BIBREF13 BIBREF14 and identification of deleted and suspicious accounts BIBREF15 On one hand some authors show that typos and rephrasing are among the major causes for deleting tweets BIBREF13 On the other hand other authors found that in deleted tweets a significantly higher fraction of the vocabulary consists of swear words and markers that indicate anger anxiety and sadness BIBREF16 We verified which of the tweets that were collected during the three years in near real time still exist It turns out that in our dataset 47 689658 posts were post festum deleted by the users Different user groups exhibit different patterns of deletion A histogram in Figure FIGREF16 shows fractions of tweets deleted by different user groups The majority of users do not delete their own tweets at all 
peak at 0 At the other extreme 100 there is about 5 of the users who deleted their accounts and all their tweets But the really interesting are the trading companies where only one third of them does not delete tweets and more than half of them delete up to 10 of their tweets We focus on the deleted tweets by trading companies and individual traders and search for signs of reputation manipulations A breakdown of deleted tweets for both groups in terms of different stances is in Table TABREF17 One reason for companies and individuals to delete their tweets might be to create an image of their capabilities to predict the market For example one can post two contradictory tweets at the same time EUR will go up and EUR will go down After the market shows the actual EUR move the incorrect prediction is deleted and the users timeline shows his forecasting insight We compare the results of the event study before and after the tweets were deleted Figure FIGREF19 shows CARs for trading companies and individual traders after removing their deleted tweets At this point we can report only negative results ie there is no increase of CARs and the hold events are further away from the zero line than in Figure FIGREF15 We analyze deleted tweets of 189 out of 195 Twitter users categorized as trading companies that have active Twitter accounts by deleting an account all the tweets from that account are also deleted The 189 companies deleted 3741 tweets Among them four deleted all Forex related tweets from their profile while the accounts are still active 8 users deleted between 10 and 40 of their tweets 33 users deleted between 1 and 5 of their tweets and only 68 did not delete any tweets The deleting behaviour of trading companies is shown in Figure FIGREF21 Note that the majority 76 of the trading companies deleted less than 1 of their tweets Note also that there are no trading companies that delete between 5 and 10 of their tweets We analyze the deleted tweets and focus on criteria that might indicate reputation manipulation Out of the 3741 deleted tweets 3611 are unique same author and identical text while 130 tweets are deleted more than once An extreme case is a tweet advertising easy and safe profit which is deleted 46 times same author and identical text The deleting and reposting of identical tweets is one form of increasing visibility without tainting the authors Twitter timeline A tweet that is deleted and posted again appears several times in the users followers feed while it appears just once in the authors timeline This can be therefore considered a kind of reputation manipulation Out of the 93 tweets that were deleted and reposted 50 were deleted and reposted once while the rest were deleted and reposted several times The 746 recommendation tweets that were deleted afterward point to a potential reputation manipulation by deleting the bad recommendations The breakdown of deleted tweets is shown in Figure FIGREF22 One of the major reasons to delete tweets are typos and rephrasing BIBREF13 In these cases a very similar tweet to the deleted tweet is posted again We check for each of the 3575 tweets that were deleted once and not reposted if they were deleted due to a typo We define typo as a reason of tweet deletion if the tweet is posted by the same author within the three next tweets after the deleted one with a very similar text INLINEFORM0 Levenshtein distance INLINEFORM1 and the difference is not in the URLs present in the tweet We found that 122 deleted tweets were reposted with changes so 
small that indicate typos Another category of deleted tweets are retweets If retweets are deleted it is usually because the original tweets were deleted In our dataset 406 retweets are deleted We check the remaining 3437 tweets for the use of vocabulary specific for trading long short bear bull bearish bullish resistance support buy sell close We identify 746 tweets that are recommendations for trading manually confirmed This is another kind of possible reputation manipulation a tweet with recommendation is posted and afterwards if the recommendation turns out to be spurious the tweet is deleted The authors Twitter timeline then falsely appears as if following his recommendations would yield profit We inspect a specific Twitter account from the category trading companies that posted more than 500 tweets and deleted between 10 and 40 of them The identity of the account cannot be revealed due to the privacy issues The tweets deleted fall into the following categories Reposts 91 60 of them are advertisements eg subscribe for analysis Links to recommendations 17 Recommendations 11 Retweet 1 if the original tweet is deleted retweets are also deleted We manually checked each of the 11 recommendations that were deleted In all the cases the recommendations turned out to be bad ie an investor would loose money An anonymized example of a bad recommendation post is the following usermention while daily candle is above 1xyz we are bullish on EURUSD while in the actual Forex market EUR went down This user used both types of reputation manipulation deleting poor recommendations and deletingreposting of identical tweets to increase their visibility The percentage of deleted poor predictions is small compared to all the deleted tweets and compared to all the posted tweets We speculate that the manipulation by tweet deletion needs to be subtle to go unnoticed by the users followers However even a subtle reputation burst in a domain as competitive as Forex trading can bring major benefits to the deceptive user This is an initial study of potential misuses of Twitter to influence the public interested in Forex trading We identify different types of Twitter accounts that are posting tweets related to the EURUSD currency exchange We show that there are considerable differences between them in terms of Twitter stance distribution and CARs If we eliminate trading robots and spam we find significant correlations between the Twitter stance and CARs the returns are small but the Forex market has very low trading costs The remaining posts come from the Forex trading companies and individual traders We further analyze the reasons for post festum deleting of tweets Some reasons are harmless such as correcting typos but some show indications of reputation busting We consider this a promising direction for further more indepth analysis The authors acknowledge financial support from the H2020 FET project DOLFINS grant no 640772 and the Slovenian Research Agency research core funding no P2103 </s>
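The two plane SVM stance classifier described in the Forex row above can be sketched as follows; the TF-IDF features and the rule for combining the two decision values are assumptions made for illustration, not the authors' exact configuration.

# Two-plane SVM sketch for ordinal stance classification (buy / hold / sell).
# One plane separates "buy" from the rest, the other separates "sell" from the rest;
# the signed distances to both hyperplanes decide the predicted stance.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

class TwoPlaneSVM:
    def fit(self, texts, labels):                 # labels in {"buy", "hold", "sell"}
        self.vec = TfidfVectorizer(min_df=2)
        X = self.vec.fit_transform(texts)
        y = np.array(labels)
        self.buy_plane = LinearSVC().fit(X, (y == "buy").astype(int))
        self.sell_plane = LinearSVC().fit(X, (y == "sell").astype(int))
        return self

    def predict(self, texts):
        X = self.vec.transform(texts)
        d_buy = self.buy_plane.decision_function(X)
        d_sell = self.sell_plane.decision_function(X)
        out = []
        for b, s in zip(d_buy, d_sell):
            if b > 0 and b >= s:
                out.append("buy")
            elif s > 0 and s > b:
                out.append("sell")
            else:
                out.append("hold")
        return out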
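Because the exact abnormal-return definition is elided in the excerpt, the event-study computation below is only a rough sketch: it cumulates one-minute returns after each event relative to a naive pre-event benchmark and averages the resulting curves per stance group.

# Event-study sketch: average cumulative returns in the minutes after events,
# grouped by the Twitter-stance label of each event ("buy", "hold", "sell").
# Subtracting the mean pre-event return is a simplifying assumption, not the
# paper's exact abnormal-return definition.
import numpy as np

def car_by_group(minute_returns, events, horizon=60):
    """minute_returns: 1-D array of EURUSD log returns at 1-minute resolution;
    events: list of (minute_index, stance) pairs."""
    groups = {}
    for t0, stance in events:
        window = minute_returns[t0:t0 + horizon]
        if len(window) < horizon:
            continue
        pre = minute_returns[max(0, t0 - horizon):t0]
        expected = pre.mean() if len(pre) else 0.0   # naive benchmark return
        car = np.cumsum(window - expected)
        groups.setdefault(stance, []).append(car)
    return {s: np.mean(np.vstack(curves), axis=0) for s, curves in groups.items()}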
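The typo-deletion heuristic described above can be written down as follows; the Levenshtein threshold is left as a parameter because its exact value does not appear in the excerpt, and the URL-stripping helper is hypothetical.

# A deleted tweet is attributed to a typo if one of the author's next three tweets
# is very similar and the difference is not only in the URLs.
import re

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def strip_urls(text):
    return re.sub(r"https?://\S+", "", text).strip()

def is_typo_deletion(deleted_text, next_tweets, max_dist=3):
    for cand in next_tweets[:3]:                      # next three tweets by the same author
        if levenshtein(deleted_text, cand) <= max_dist and \
           strip_urls(deleted_text) != strip_urls(cand):   # change is in the words, not the URLs
            return True
    return False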
<s>[INST] Unsupervised Text Summarization via Mixed Model Back-Translation [/INST] Machine summarization systems have made significant progress in recent years especially in the domain of news text This has been made possible among other things by the popularization of the neural sequencetosequence seq2seq paradigm BIBREF0 BIBREF1 BIBREF2 the development of methods which combine the strengths of extractive and abstractive approaches to summarization BIBREF3 BIBREF4 and the availability of large training datasets for the task such as Gigaword or the CNNDaily Mail corpus which comprise of over 38M shorter and 300K longer articles and aligned summaries respectively Unfortunately the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms In this work we investigate the application of backtranslation to training a summarization system in an unsupervised fashion from unaligned full text and summaries corpora Backtranslation has been successfully applied to unsupervised training for other sequence to sequence tasks such as machine translation BIBREF5 or style transfer BIBREF6 We outline the main differences between these settings and text summarization devise initialization strategies which take advantage of the asymmetrical nature of the task and demonstrate the advantage of combining varied initializers Our approach outperforms the previous stateoftheart on unsupervised text summarization while using less training data and even matches the rouge scores of recent semisupervised methods BIBREF7s work on applying neural seq2seq systems to the task of text summarization has been followed by a number of works improving upon the initial model architecture These have included changing the base encoder structure BIBREF8 adding a pointer mechanism to directly reuse input words in the summary BIBREF9 BIBREF3 or explicitly preselecting parts of the full text to focus on BIBREF4 While there have been comparatively few attempts to train these models with less supervision autoencoding based approaches have met some success BIBREF10 BIBREF11 BIBREF10s work endeavors to use summaries as a discrete latent variable for a text autoencoder They train a system on a combination of the classical loglikelihood loss of the supervised setting and a reconstruction objective which requires the full text to be mostly recoverable from the produced summary While their method is able to take advantage of unlabelled data it relies on a good initialization of the encoder part of the system which still needs to be learned on a significant number of aligned pairs BIBREF11 expand upon this approach by replacing the need for supervised data with adversarial objectives which encourage the summaries to be structured like natural language allowing them to train a system in a fully unsupervised setting from unaligned corpora of full text and summary sequences Finally BIBREF12 uses a general purpose pretrained text encoder to learn a summarization system from fewer examples Their proposed MASS scheme is shown to be more efficient than BERT BIBREF13 or Denoising AutoEncoders DAE BIBREF14 BIBREF15 This work proposes a different approach to unsupervised training based on backtranslation The idea of using an initial weak system to create and iteratively refine artificial training data for a supervised algorithm has been successfully applied to semisupervised BIBREF16 and unsupervised machine translation BIBREF5 
as well as style transfer BIBREF6 We investigate how the same general paradigm may be applied to the task of summarizing text Let us consider the task of transforming a sequence in domain A into a corresponding sequence in domain B eg sentences in two languages for machine translation Let mathcal DA and mathcal DB be corpora of sequences in A and B without any mapping between their respective elements The backtranslation approach starts with initial seq2seq models f0A rightarrow B and f0B rightarrow A which can be handcrafted or learned without aligned pairs and uses them to create artificial aligned training data Let mathcal S denote a supervised learning algorithm which takes a set of aligned sequence pairs and returns a mapping function This artificial data can then be used to train the next iteration of seq2seq models which in turn are used to create new artificial training sets A and B can be switched here The model is trained at each iteration on artificial inputs and real outputs then used to create new training inputs Thus if the initial system isnt too far off we can hope that training pairs get closer to the true data distribution with each step allowing in turn to train better models In the case of summarization we consider the domains of full text sequences mathcal DF and of summaries mathcal DS and attempt to learn summarization fFrightarrow S and expansion fSrightarrow F functions However contrary to the translation case mathcal DF and mathcal DS are not interchangeable Considering that a summary typically has less information than the corresponding full text we choose to only define initial Frightarrow S models We can still follow the proposed procedure by alternating directions at each step To initiate their process for the case of machine translation BIBREF5 use two different initialization models for their neural NMT and phrasebased PBSMT systems The former relies on denoising autoencoders in both languages with a shared latent space while the latter uses the PBSMT system of BIBREF17 with a phrase table obtained through unsupervised vocabulary alignment as in BIBREF18 While both of these methods work well for machine translation they rely on the input and output having similar lengths and information content In particular the statistical machine translation algorithm tries to align most input tokens to an output word In the case of text summarization however there is an inherent asymmetry between the full text and the summaries since the latter express only a subset of the former Next we propose three initialization systems which implicitly model this information loss Full implementation details are provided in the Appendix The first initialization is similar to the one for PBSMT in that it relies on unsupervised vocabulary alignment Specifically we train two skipgram word embedding models using fasttext BIBREF19 on mathcal DF and mathcal DS then align them in a common space using the Wasserstein Procrustes method of BIBREF18 Then we map each word of a full text sequence to its nearest neighbor in the aligned space if their distance is smaller than some threshold or skip it otherwise We also limit the output length keeping only the first N tokens We refer to this function as fFrightarrow StextPrThr 0 Similarly to both BIBREF5 and BIBREF11 we also devise a starting model based on a DAE One major difference is that we use a simple BagofWords BoW encoder with fixed pretrained word embeddings and a 2layer GRU decoder Indeed we find that a BoW autoencoder trained on the 
summaries reaches a reconstruction rougel fscore of nearly 70 on the test set indicating that word presence information is mostly sufficient to model the summaries As for the noise model for each token in the input we remove it with probability p2 and add a word drawn uniformly from the summary vocabulary with probability p The BoW encoder has two advantages First it lacks the other models bias to keep the word order of the full text in the summary Secondly when using the DBAE to predict summaries from the full text we can weight the input word embeddings by their corpuslevel probability of appearing in a summary forcing the model to pay less attention to words that only appear in mathcal DF The Denoising BagofWords AutoEncoder with input reweighting is referred to as fFrightarrow StextDBAE 0 We also propose an extractive initialization model Given the same BoW representation as for the DBAE function ftheta mu s v predicts the probability that each word v in a full text sequence s is present in the summary We learn the parameters of ftheta mu by marginalizing the output probability of each word over all full text sequences and matching these firstorder moments to the marginal probability of each words presence in a summary That is let mathcal VS denote the vocabulary of mathcal DS then forall v in mathcal VS We minimize the binary crossentropy BCE between the output and summary moments We then define an initial extractive summarization model by applying ftheta mu cdot cdot to all words of an input sentence and keeping the ones whose output probability is greater than some threshold We refer to this model as fFrightarrow Smathbf mu 1 0 We apply the backtranslation procedure outlined above in parallel for all three initialization models For example fFrightarrow Smathbf mu 1 0 yields the following sequence of models and artificial aligned datasets Finally in order to take advantage of the various strengths of each of the initialization models we also concatenate the artificial training dataset at each odd iteration to train a summarizer eg We validate our approach on the Gigaword corpus which comprises of a training set of 38M article headlines considered to be the full text and titles summaries along with 200K validation pairs and we report test performance on the same 2K set used in BIBREF7 Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping we also further split the training set into 2M examples for which we only use titles and 18M for headlines All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq BIBREF20 Artificial data generation uses top15 sampling with a minimum length of 16 for full text and a maximum length of 12 for summaries rouge scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match BIBREF11 Table TABREF9 compares test ROUGE for different initialization models as well as the trivial Lead8 baseline which simply copies the first 8 words of the article We find that simply thresholding on distance during the word alignment step of PrThr does slightly better then the full PBSMT system used by BIBREF5 Our BoW denoising autoencoder with word reweighting also performs significantly better than the full seq2seq DAE initialization used by BIBREF11 PreDAE The momentsbased initial model mathbf mu 1 scores higher than either of these with scores already close to the full unsupervised system of BIBREF11 In order to 
investigate the effect of these three different strategies beyond their rouge statistics we show generations of the three corresponding first iteration expanders for a given summary in Table TABREF1 The unsupervised vocabulary alignment in PrThr handles vocabulary shift especially changes in verb tenses summaries tend to be in the present tense but maintains the word order and adds very little information Conversely the mathbf mu 1 expansion function which is learned from purely extractive summaries reuses most words in the summary without any change and adds some new information Finally the autoencoder based DBAE significantly increases the sequence length and variety but also strays from the original meaning more examples in the Appendix The decoders also seem to learn facts about the world during their training on article text EDFGDF is Frances public power company Finally Table TABREF13 compares the summarizers learned at various backtranslation iterations to other unsupervised and semisupervised approaches Overall our system outperforms the unsupervised Adversarialreinforce of BIBREF11 after one backtranslation loop and most semisupervised systems after the second one including BIBREF12s MASS pretrained sentence encoder and BIBREF10s Forcedattention Sentence Compression FSC which use 100K and 500K aligned pairs respectively As far as backtranslation approaches are concerned we note that the model performances are correlated with the initializers scores reported in Table TABREF9 iterations 4 and 6 follow the same pattern In addition we find that combining data from all three initializers before training a summarizer system at each iteration as described in Section SECREF8 performs best suggesting that the greater variety of artificial full text does help the model learn In this work we use the backtranslation paradigm for unsupervised training of a summarization system We find that the model benefits from combining initializers matching the performance of semisupervised approaches </s>
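The alternating back-translation procedure for summarization described above can be sketched as the following skeleton; train and generate are placeholders for real seq2seq training and inference (the paper uses convolutional seq2seq models), so this only shows how artificial inputs are paired with real outputs at each step.

# Skeleton of the alternating back-translation loop for summarization.
# Each model is trained on artificial inputs and real outputs, then used to
# generate the artificial inputs for the next direction.

def back_translation(full_texts, summaries, f2s_init, train, generate, iterations=2):
    f2s = f2s_init                                   # initial full-text -> summary model
    for _ in range(iterations):
        # 1) artificial summaries from real full texts train an expander (S -> F)
        art_summaries = [generate(f2s, x) for x in full_texts]
        s2f = train(inputs=art_summaries, outputs=full_texts)
        # 2) artificial full texts from real summaries train a summarizer (F -> S)
        art_fulls = [generate(s2f, y) for y in summaries]
        f2s = train(inputs=art_fulls, outputs=summaries)
    return f2s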
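A sketch of the PrThr initializer follows; it assumes the full-text and summary embeddings have already been aligned into a common space (e.g. fastText vectors aligned with Wasserstein-Procrustes) and unit-normalised, and the threshold and length cap are placeholder values rather than the paper's.

# PrThr initial summarizer: for each full-text token, keep its nearest neighbour
# in the aligned summary embedding space if the distance is small enough,
# otherwise skip it, and truncate the output to the first max_len kept tokens.
import numpy as np

def prthr_summarize(tokens, full_emb, sum_emb, threshold=0.5, max_len=12):
    """full_emb / sum_emb: dicts mapping words to aligned unit-norm numpy vectors."""
    sum_words = list(sum_emb)
    sum_matrix = np.vstack([sum_emb[w] for w in sum_words])
    out = []
    for tok in tokens:
        if tok not in full_emb:
            continue
        sims = sum_matrix @ full_emb[tok]            # cosine similarity (unit vectors)
        best = int(np.argmax(sims))
        if 1.0 - sims[best] < threshold:             # distance below threshold -> keep
            out.append(sum_words[best])
        if len(out) == max_len:
            break
    return out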
<s>[INST] Putting Self-Supervised Token Embedding on the Tables [/INST] Today most of businessrelated information is transmitted in an electronic form such as emails Therefore converting these messages into an easily analyzable representation could open numerous business opportunities as a lot of them are not used fully because of the difficulty to build bespoke parsing methods In particular a great number of these transmissions are semistructured text which doesnt necessarily follows the classic english grammar As seen in Fig 1 they can be under the form of tables containing diverse elements words and numbers afterwards referred to as tokens These tables are often implicitly defined which means that there are no special tags between what is or not part of the table or even between cells In these cases the structure is coming from space or tabs alignment and from the relative order of the tokens The data often are unlabeled which means that the content must be read with domainbased knowledge Thus automatic extraction of structured information is a major challenge because token candidates come in a variety of forms within a fuzzy context A high level of supervision is hard to obtain as manual labeling requires time that is hardly affordable when receiving thousands of such emails a day and even more so as databases can become irrelevant over time That is why training a generalizable model to extract these data should not rely on labeled inputs but rather on the content itself a paradigm called selfsupervised learning Many approaches already exist in Natural Language Processing such as PartofSpeech POS tagging or Named Entity Recognition NER but they do not take advantage of the semistructured data framework On the contrary there exists some information extraction algorithms applied to tables but they necessitate a great amount of manually defined rules and exceptions Our model aims to reconcile both approaches for an efficient and totally selfsupervised take on information extraction in the particular context of semistructured data In this paper we present a neural architecture for token embedding in plaintext tables which provides a useful lowerdimensional representation for tasks such as unsupervised or semisupervised clustering Intuitively tokens with a similar meaning should be close in the feature space to ease any further information extraction Our model aims to combine the better of the context and the character composition of each token and that is why the neural architecture is designed to learn both context and characterlevel representations simultaneously Finally we can take advantage of the distances between tokens in the feature space to create proper tables from fuzzy input data The field of Information Extraction on SemiStructured Data has been particularly active in the 1990s and the early 2000s developed in settings such as the Message Understanding Conferences MUCs and more recently in the ICDAR 2013 Table Competition BIBREF0 A very complete survey of information extraction in tables can be found in BIBREF1 and in BIBREF2 The main goal of systems such as BIBREF3 BIBREF4 or TINTIN BIBREF5 is to detect tables in messages or to label lines such as captions using the density of blank spaces Conditional Random Fields or Hidden Markov Models respectively This also has been done more recently in an unsupervised manner by BIBREF6 and BIBREF7 Obviously the main goal is to extract the content of these tables which is done by BIBREF8 BIBREF9 BIBREF10 BIBREF11 with DEByE BIBREF12 
DIPRE BIBREF13 or WHISK BIBREF14 by learning patterns to match to the data systematically using manually defined rules and trying to generalize them as much as possible A very thorough panorama of this class of algorithms is presented in BIBREF15 More recently BIBREF16 proposes a graph structure in tables to match predefined patterns Unfortunately these methods are not flexible enough to be used in the case of a great number of patterns in the data and need user supervision or gazetteers to work properly which are not always available The idea of our model can certainly be related the most with BIBREF17 or BIBREF18 but we add in new Natural Language Processing tools and neural networks among other differences In recent years neural networks have replaced handcrafted features in Natural Language Processing with excellent results a recent survey of the topic can be found in BIBREF19 The seminal paper of Collobert et al BIBREF20 presents a first idea of token embeddings or word features vectors based on lookup tables in a fixed vocabulary and using neural networks It also brings a general solution to problems such as Part of Speech POS Chunking and Named Entity Recognition NER The work on word features vectors continued with the classic Word2Vec paper BIBREF21 which is now one of the references on the topic introducing the skipgram model for text There the method used to train the network is trying to predict the next words in a sentence based on surrounding ones However a problem of these approaches are that they rely on a dictionary of words and that outofvocabulary words such as orthographic errors get a generic representation In problems such as information extraction that is a major issue because the content consists mostly in names that are not classic words and can evolve in time Besides closely related words such as even and uneven should be close in the feature space which is not guaranteed by these methods That is why recently the focus has shifted on a study directly on the characters that mostly solve these questions Examples can be found in BIBREF22 and BIBREF23 with LSTMs or in BIBREF24 BIBREF25 and BIBREF26 with Convolutional Networks Further developments presented in BIBREF27 and BIBREF28 aim to learn vector representations of sentences or documents instead of limiting the models to the words only This is done with the same methods used to get words representations only with whole rows or paragraphs as the input These are our main inspirations but all these algorithms have been created to deal with natural and not semistructured text so they do not take advantage of the bidimensional structure of the data An effort worth noting is BIBREF29 with the introduction of Multidimensional Recurrent Neural Networks in the Optical Character Recognition OCR field but the idea has not been developed further We will now present the SC2T SelfSupervised Character and Contextlevels on Tables embedding As in BIBREF23 two important ideas guide our neural network architecture to correctly represent a token we need to take into account its composition a number a word as well as its context the surrounding tokens As we deal with tokens that mostly are not words in the classic sense of the term but abbreviations numbers unique identifiers and that we have no dictionary we cant use wordlevel features similar to what was done in BIBREF21 Thats why we will use characterlevel representations in the same fashion that BIBREF23 BIBREF24 BIBREF22 or BIBREF26 We do not use external dictionary or gazetteers 
which allows our program to be relevant on any semistructured text Note that given raw text as input the first stage is the tokenization of the data A discussion on that topic is complex and beyond the scope of this paper as special rules have to be applied depending on the data and pertinent segmentation Our architecture is created to learn a character and contextsensitive embedding of tokens To build this distributed representation we train our network on a proxy task which is to reconstruct tokens using only the surrounding ones an idea recalling autoencoders By surrounding we mean that are contained in a horizontal window of size hw and a vertical window of size vw around it padding with zeros if necessary This method resembles what is done in BIBREF20 or BIBREF28 for example but takes advantage of the 2D structure of the data Selecting tokens which are horizontally adjacent is trivial contrary to vertical ones Papers such as BIBREF30 and BIBREF31 give good insights on how to define that efficiently However for simplicity reasons we take the tokens of the surrounding lines which rightmost character is closest to the rightmost character of our target token Each of these surrounding tokens is first transformed in a onehot encoding on the characters of dimensionality d padded left with blank spaces to achieve the length lt for all tokens Then they all pass in the same characterlevel convolutional network ChNN which structure is inspired by BIBREF26 It is composed of a onehotencoding then fully connected FC layer then of two onedimensional CNNs with nf 64 filters of size 3 with a maxpooling Finally a fully connected layer is added to bring the embedding to the desired size ReLU activations batch normalization and 25 dropout are also placed between each layer A diagram of this network can be found in Fig 2 The resulting embeddings are then concatenated and fed into the horizontal HNN and vertical VNN context networks that have the same structure as the characterlevel network excepted the input size and that the maxpooling and FC layer is replaced by a simple Flatten layer They are kept separate from each other because they are not aimed to learn the same relationships in the data Then their outputs are merged and passed through two fully connected layers LNN the last of them of size se Thus we have two useful representations for a given token the output from the LNN network of size se plus the output taken directly from the character CNN on the token itself of size che We then concatenate and feed them to the last part of the network E which consists of two fully connected layers and whose final output is compared to the onehotencoding of the original token The concatenation is followed by a dropout layer to prevent the network to only use the input token A value of 05 yields the best results in our experience which confirms the idea presented in BIBREF23 Our model allows a simultaneous training of all the components in the network using backpropagation Finally our context and charactersensitive embedding is obtained by taking the output of the first FC layer in the E network which has size che se and we will see in the next part that it is indeed a useful distributed representation of tokens A diagram of our whole network can be found in Fig 3 We use CNNs in all the stages of our network instead of LSTMs or other layers for two reasons first in the case of tables the sequential aspect is often negligible Besides we implemented the same program with bidirectional LSTMs and it did not yield 
We use CNNs in all the stages of our network instead of LSTMs or other layers for two reasons: first, in the case of tables, the sequential aspect is often negligible; besides, we implemented the same program with bidirectional LSTMs and it did not yield better results while slowing down the whole process. This is a problem because speed of execution is an important factor in industrial applications treating tens of thousands of messages each day, each containing hundreds or thousands of tokens. An alternative to the previous model can be considered. Instead of letting the E network merge the character and context embeddings, we could simply concatenate them, applying a constant importance coefficient K that has to be defined depending on the data. Indeed, if the different categories in the data are of different types (e.g. textual names and numbers), the character content has to be privileged, unlike the case of more context-dependent tokens (e.g. numbers in a certain order). Usually, if the structure of the data is disrupted, we will need to rely more on characters. K will increase the weight of one part or the other, given that clustering algorithms put more importance on greater values in the data. Obviously, this coefficient K necessitates an intervention of the user and some knowledge of the data; thus it is not applicable in general, but can be very efficient in particular cases, as we will see in section IV. Once we obtain our token embeddings, a simple clustering algorithm such as k-means BIBREF32 can be used to compute a clustering of the tokens. Obtaining coherent groups of tokens can lead to many developments: it can be used for manual labeling and for quickly bootstrapping a labeled dataset for supervised learning, but it can also be the basis of an efficient semi-supervised algorithm. We also need to cluster lines in the data: indeed, a message is often composed of one or multiple headers, the data itself, as well as disclaimers, signatures, and more generally blocks of natural language in the document. Once again, their repartition or presence is not guaranteed, so an adaptable clustering is necessary. To obtain an embedding of a line, we simply compute a max-pooling of the embeddings of its tokens. We used this method for separating headers, disclaimers and table content by 3-means clustering on our data. To assess the efficiency of our embeddings, we use them to label tokens in the Online Retail Data Set from UCI via k-means clustering. We chose it because it is a varied public dataset that fits the kind of problem we are dealing with; unfortunately, the relevant Information Extraction papers we found (sec. Information Extraction on Semi-Structured Data) used either custom datasets or datasets that are not online anymore. The Online Retail Data Set consists of a clean list of 25873 invoices totaling 541909 rows and 8 columns. InvoiceNo, CustomerID and StockCode are mostly 5- or 6-digit integers with occasional letters; Quantity is mostly 1- to 3-digit integers, a part of them being negative; UnitPrice is composed of floating-point values with 1 to 6 digits; InvoiceDate contains dates, all in the same format; Country contains strings representing 38 countries; and Description is 4224 strings representing names of products. We reconstruct text mails from this data by separating each token with a blank space and stacking the lines for a given invoice, grouped by InvoiceNo. We will use the column label as ground truth for the tokens in the dataset. For simplicity, we add underscores between words in Country and Description to ease the tokenization. Another slight modification has to be made: 25% of the CustomerID values are missing, and we replace them by 00000. A sample can be found in Fig 4. We will now create an embedding of the tokens and use it in a k-means clustering.
We will use the homogeneity score h as our metric, which measures whether all the data points that are members of a given cluster are given the same label. It can be written $h = \frac{1}{k}\sum_{c=1}^{k}\frac{|C_c \cap L_c|}{|C_c|}$, where $C_c$ is the set of data points in cluster c and $L_c$ is the set of data points that have the label which is most present in cluster c. It represents the accuracy of a semi-supervised clustering where the user simply gives to each cluster the label corresponding to the majority of its elements. Obviously, $h \rightarrow 1$ when k tends to the number of data points. However, we will not restrain ourselves to taking k = 8, the exact number of labels, as varied data can have the same ground-truth labels in a real setting; for example, 12/24/2017, 2017 or Dec24 could all be labeled as dates but might be difficult to group into one cluster. That is why we do not consider the completeness score, which measures whether all the data points of a given class are elements of the same cluster, as relevant in our case. So a good measure of the quality of our clustering is the score reached for a certain number of clusters (e.g. 20 or 100), which represents the number of points that the user would have to label to obtain such accuracy. Note that, as k-means yields stochastic results, the results given here are a mean of 100 independent runs. At first we have a simple problem: all the lines follow the same pattern, so a simple extraction rule can perfectly extract the data. This is a good baseline for our program, as it should retrieve all the information. Our experiment consists of creating homogeneous clusters according to the labels of the tokens after randomly deleting a portion of them (Del) and/or randomly replacing a part of the characters (CR), heavy modifications that are not unlike those found in real-life settings. An example of disrupted data can be found in Fig 5. Note that we only used a subset of 1000 invoices (24K lines, or approximately 190K tokens), which yielded slightly worse results compared to the tests we made on the whole dataset. It is logical that the more the context is disrupted, the more we will rely on the character part. We present the results in two settings: one with the model presented in The Architecture (NoK), the other with the parameter K presented in Alternative Model (K Best). Char is the proportion of the norm of the character part of the embedding compared to the norm of the whole embedding, which is controlled by variations of K. Results of homogeneity depending on the number of clusters can be found in Table I (nc being the number of clusters) and our parameters in Table II. We chose the horizontal window such that it takes into account the whole line, but that could be unsuitable in the case of very large tables.
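As an illustration of this evaluation protocol, the sketch below clusters token embeddings with k-means and scores the result with the majority-label homogeneity h defined above (note this is the per-cluster purity average, not scikit-learn's entropy-based homogeneity_score); variable names and the number of runs are placeholders.

```python
# Hypothetical evaluation sketch: k-means on token embeddings, scored with the
# homogeneity h defined above. `embeddings` and integer-encoded `labels` are
# assumed to come from the trained network and the dataset's column names.
import numpy as np
from sklearn.cluster import KMeans

def homogeneity(cluster_ids, labels):
    """h = (1/k) * sum_c |C_c ∩ L_c| / |C_c|, with L_c the majority label of cluster c."""
    labels = np.asarray(labels)
    scores = []
    for c in np.unique(cluster_ids):
        members = labels[cluster_ids == c]
        scores.append(np.bincount(members).max() / len(members))
    return float(np.mean(scores))

def evaluate(embeddings, labels, n_clusters=20, n_runs=10):
    """Average homogeneity over several k-means runs (k-means is stochastic)."""
    runs = []
    for seed in range(n_runs):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        runs.append(homogeneity(km.fit_predict(embeddings), labels))
    return float(np.mean(runs))

# Example call: score = evaluate(token_embeddings, token_labels, n_clusters=20, n_runs=100)
```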
Obviously, the more disrupted the data, the less accurate our model. First, we can see that the model with K is better than without in most cases, but remember that the value of K has been cross-validated to obtain the best possible result; this is not realistic in general but can still be very useful when we have prior knowledge about the data. For example, we observe that without deletions, and even with character replacements, the context alone brings 100% accuracy, reflecting that the position entirely determines the label. When we randomly replace characters, we cannot rely as much on them, and the numbers show that our model is more robust to a deletion of tokens than it is to character replacement, probably because in our dataset tokens with the same label are often similar in composition. It is also interesting to notice that our supervision-free NoK model, even if slightly disadvantaged in simple cases, yields its best results when the data is more disrupted. This is good news, as it is in these cases that we have the least amount of prior knowledge, besides being certainly the most realistic settings and the ones that need new models most. Unsurprisingly, we noticed that it is often CustomerID, InvoiceNo and, to a lesser extent, StockCode that are mislabeled, due to their similar composition. Even in our most difficult case, 50% deletion and 50% character replacement, we obtain decent results in our unsupervised setting. Overall, with as few as 100 token labels out of 190K, we could get a high clustering accuracy in most of our contexts. The size of the embedding also had to be chosen carefully, because it has to encode enough information while avoiding the curse of dimensionality. Finally, note that the network gets less training data when increasing the percentage of deletions, and that we retrained it from scratch in each setting. Often, tables are not correctly aligned when data is missing, which creates an erroneous display. To correct this problem, we can define a reference line, that is, the longest line that belongs to the table part according to the line clustering. This line will define the number of columns in our resulting table. Then, for every other line, we try to match each token with a token from the reference line that is on its right, i.e. the token which is closest in the embedding space, while allowing the order to be kept. We suppose here that the order is always preserved, because in a given table permutations are very unlikely. We then obtain correctly aligned tables, as seen in Fig 6, which can be very useful for easier labeling of the tokens. This can be used even if there are different types of lines containing different information, these lines being separated beforehand by clustering, as presented above in Tokens and Lines Clustering; we then take different rows as references. In this paper we present a new neural language model that jointly uses the character composition of tokens and their surrounding context, in the particular framework of semi-structured text data, for the purpose of generating a distributed representation. We have seen that the embeddings have linearized the space quite well, such that a k-means clustering will gather similar tokens, or, by max-pooling them, similar lines, and that it could be applied to table realignment. The approach presented here can already allow an information extraction system to function, but it could be even more beneficial to add semi-supervised learning algorithms as described in BIBREF33 or BIBREF34. Another solution would be to bootstrap large annotated databases for performing supervised learning. We introduce several hyperparameters to be tuned, mainly the sizes of our embeddings. We want our model to stay as general and unsupervised as possible, and we argue that tuning them manually is the better solution, as existing unsupervised measures of the quality of a clustering (Silhouette Coefficient BIBREF35, Calinski-Harabasz Index BIBREF36) can be misleading for our particular task: indeed, they can favor fewer clusters that are not homogeneous in terms of labels over more clusters that are, which is against our goal. Finally, the fact that we do not have relevant standards for this particular task is problematic; however, our dataset is openly available on the Internet (link above) and can be a simple but representative benchmark for papers to come. We would like to thank Clement Laisn (Hellebore Technologies) for having developed convenient
tools that greatly helped us in our research as well as all our colleagues for their support We also thank Caio Filippo Corro for discussions about this paper </s>
<s>[INST] English verb regularization in books and tweets [/INST] Human language reflects cultural political and social evolution Words are the atoms of language Their meanings and usage patterns reveal insight into the dynamical process by which society changes Indeed the increasing frequency with which electronic text is used as a means of communicating eg through email text messaging and social media offers us the opportunity to quantify previously unobserved mechanisms of linguistic development While there are many aspects of language being investigated towards an increased understanding of social and linguistic evolution BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 one particular area of focus has been on changes in past tense forms for English verbs BIBREF0 BIBREF1 BIBREF2 These investigations have collectively demonstrated that English verbs are going through a process of regularization where the original irregular past tense of a verb is replaced with the regular past tense formed using the suffix ed For example the irregular past tense of the verb burn is burnt and the regular past tense is burned Over time the regular past tense has become more popular in general and for some verbs has overtaken the irregular form For example in Fig UID1 we use the Google Ngram Online Viewer to compare the relative frequency of burnt with that of burned over the past 200 years As shown in an earlier paper involving two of the present authors BIBREF6 and expanded on below the Google Ngram dataset is highly problematic but can serve as a useful barometer of lexical change In the first half of the 19th century the irregular past tense burnt was more popular However the regular past tense burned gained in popularity and in the late 1800s became the more popular form which has persisted through to today Looking at several examples like this in a 2011 paper Michel et al studied the regularization of verbs along with other cultural and language trends as an accompaniment to their introduction of the Google Books Ngram corpus hereafter Ngrams and the protofield Culturomics BIBREF1 They found that most of the verb regularization over the last two centuries came from verbs using the suffix t for the irregular form and that British English texts were less likely than American English ones to move away from this irregular form In a 2007 study Lieberman et al explored the regularization of English verbs using the CELEX corpus which gives word frequencies from several textual sources BIBREF0 Focusing on a set of 177 verbs that were all irregular in Old English they examined how the rate of verb regularization relates to frequency of usage finding that more common verbs regularized at a slower rate They calculated halflives for irregular verbs binned by frequency finding that irregular verbs regularize with a halflife proportional to the square root of frequency of usage In a more recent study Newberry et al proposed a method for determining the underlying mechanisms driving language change including the regularization of verbs BIBREF2 Using the Corpus of Historical American English and inspired by ideas from evolution the authors described a method to determine if language change is due to selection or drift and applied this method to three areas of language change They used a null hypothesis of stochastic drift and checked if selection would be strong enough to reject this null hypothesis Of the 36 verbs Newberry et al studied only six demonstrated statistical support for selection They also claimed that 
rhyming patterns might be a driver of selection Unfortunately the corpora used in these studies have considerable limitations and corruptions For example early versions of the Ngrams data includes scientific literature whose explosive growth through the 20th century is responsible for the decreasing trend in relative word usage frequency observed in many common search terms BIBREF6 Moreover the librarylike nature of the corpus admits no accounting for popularity Lord of the Rings and an unknown work contribute with equal weight to token counts Another general concern with large corpora of a global language like English is that language use varies tremendously with culture and geography Ngrams allows only for the regional exploration of the English language with the British English corpus and the American English corpus Twitter data enables us to focus on much smaller spatial regions eg county or state Prior studies of verb regularization have also focused on data reflecting a formal editorial process such as the one undergone by any published book This editorial process will tend to normalize the language reflecting the linguistic opinions of a small minority of canon gatekeepers rather than portray the language used by everyday people For example maybe the irregular from of a particular verb is considered proper by scholars but a vast majority of the English speaking population uses the regular form While it is not a verb form one illustrative example is whom Although whom is the correct word to use in the objective case it is common for everyday speakers to use who In the present study we take tweets to be a closer representation of everyday language For the vast majority of accounts tweets are authored by individuals without undergoing a formal editing process As such the language therein should more accurately represent average speakers than what is found in books The demographic groups contributing to Twitter are by no means a carefully selected crosssection of society but do offer natural language use by the roughly 20 of adult English speakers who use Twitter BIBREF8 When exploring temporal changes in language use the Ngrams and CELEX datasets evidently cover a much longer period than the decade for which social media is available As a result we are unable to infer anything about the temporal dimension of regularization looking at Twitter In this paper we use the Ngrams and Twitter datasets to establish estimates of the current state of English verb regularization We structure our paper as follows In Sec SECREF2 we describe the datasets we use In Sec SECREF3 we present our results We study verb regularization in English in general in Sec UID5 We compare verb regularization in American English AE and British English BE using both Ngrams and geotagged Twitter data in Sec UID7 In Sec Description of data sets we employ methods to study regional variation in verb usage leveraging county level user location data in the United States We also explore correlations between verb regularization and a number of sociodemographic and economic variables Finally in Sec SECREF4 we provide concluding remarks To be consistent with prior work we chose the verb list for our project to match that of Michel et al BIBREF1 When comparing BE with AE we use the subset of verbs that form the irregular past tense with the suffix t When calculating frequencies or token counts for the past tense we use both the preterite and past participle of the verb See 1 for a complete tabulation of all verb forms The Ngrams 
data reflects relative frequency providing for a verb and a given year the percentage of corpus tokens that are the given verb where a token is an individual occurrence of a word The Google Ngram Online Viewer also has a smoothing parameter s which averages the relative frequency for the given year with that of each of the s years before and after the given year if they exist For example Fig UID1 uses a smoothing of 3 years and shows that averaged across the years 19972000 the value displayed for the year 2000 the word burned appeared with relative frequency 0004321 roughly once every 23000 tokens while burnt appeared with relative frequency 0000954 roughly once every 105000 tokens We downloaded the Ngrams verb data for the most recent 6year period available 20032008 BIBREF9 Specifically we chose the 2008 values of relative frequency with a smoothing of 5 years resulting in an average case insensitive1 For general English as suggested by BIBREF6 we queried the English Fiction 2012 corpus which uses books predominantly in the English language that a library or publisher identified as fiction For AE we used the American English 2012 corpus which uses books predominantly in the English language that were published in the United States For BE we used the British English 2012 corpus which uses books predominantly in the English language that were published in Great Britain BIBREF10 The Twitter messages for our project consist of a random sample of roughly 10 of all tweets posted between 9 September 2008 and 22 October 2017 This decahose dataset comprises a total of more than 106 billion messages sent by about 750 million unique accounts From this larger set we performed a caseinsensitive search for verb forms of interest also extracting geographic location when available in the metadata associated with each tweet Tweets geotagged by mobile phone GPS with a US location comprise about a 027 subset of the decahose dataset United Kingdom locations comprise about a 005 subset Many individuals provide location information entered as free text along with their biographical profile We matched user specified locations of the form city state to a US county when possible comprising a 226 subset of the decahose dataset Details on this matching process can be found in 1 For general English we counted the number of tokens in the decahose dataset for each verb For AE we used the tweets whose geotagged coordinates are located in the United States and for BE we used the tweets whose geotagged coordinates are located in the United Kingdom For the analysis of verbs by county we used the tweets with the user entered location information Table UID2 summarizes the datasets used for both Ngrams and Twitter The demographic data for US counties comes from the 2015 American Community Survey 5year estimates tables DP02Selected Social Characteristics DP03Selected Economic Characteristics DP04Selected Housing Characteristics and DP05Demographic and Housing Estimates which can be found by searching online at httpsfactfindercensusgov These tables comprise a total of 513 usable sociodemographic and economic variables We compute the regularization fraction for a verb as the proportion of instances in which the regular form was used for the past tense of the verb More specifically for Ngrams we divide the relative frequency for the regular past tense by the sum of the relative frequencies for the regular and irregular past tenses Similarly for Twitter we divide the token count for the regular past tense by the sum of the token 
counts for both the regular and irregular past tenses. If the resulting regularization fraction is greater than 0.5, the regular past tense is more popular and we call the verb regular; otherwise we call the verb irregular. When calculating an average regularization across all verbs, we first compute the regularization fraction for each verb individually, and then compute the average of the regularization fractions, with each verb contributing the same weight in the average irrespective of frequency. We perform this average of averages to avoid swamping the contribution of less frequent verbs. Using the datasets in row I of Table UID2, we begin by comparing Ngrams and Twitter with respect to regularization of English verbs in Fig UID3, where we find that 21 verbs are more regular in Ngrams and 85 are more regular on Twitter. A Wilcoxon signed rank test of the data has a p-value of $7.9\times 10^{-6}$, demonstrating strong evidence that verbs on Twitter are more regular than verbs in Ngrams. What mechanisms could be responsible for the observed increase in regularity on Twitter? One possibility is that authors of fiction published in the 2000s, along with their editors, being professional users of English, have a larger vocabulary than the typical user of Twitter. If so, their commitment to proper English would contribute to the appearance of relatively more irregular verbs in books. The average Twitter user may not know, or may not choose to use, the correct past tense form of particular verbs, and thus use the default regular past tense. Another driver may be that non-native English speakers writing English tweets may be more likely to use the default regular form. We will find quantitative support for this mechanism below. As a preview, we note that Fig UID3 shows that burn is predominantly regular on Twitter globally, but we see later (Fig UID4 B) that burn is irregular on Twitter for both American English and British English; thus it is likely that non-native speakers are contributing to this difference. We next study how verb regularization varies with geographic region. In this subsection we use the datasets in row II of Table UID2 for AE and row III for BE, and the subset of verbs that form the irregular past tense with the suffix -t. In Fig UID4 A we compare American and British English in Ngrams. The average regularization fraction is 0.49 in AE and 0.42 in BE. For 17 out of 22 verbs, AE shows more regularization, with a Wilcoxon signed rank test p-value of $9.8\times 10^{-4}$, giving statistical support that AE verbs are more regular on average in Ngrams than BE verbs. As we show in the inset scatter plot of Fig UID4 A, regularization in AE and BE are also strongly positively correlated, with a Spearman correlation coefficient of 0.97 ($p=2.3\times 10^{-14}$). Verbs that are more regular in AE are also more regular in BE, just not to the same extent. In Fig UID4 B we compare regularization in AE and BE on Twitter. For Twitter, the average regularization fraction is 0.54 for AE (higher than Ngrams) and 0.33 for BE (much lower than Ngrams). As with Ngrams, 17 verbs out of 22 show more regularization in AE than in BE. The Wilcoxon signed rank test gives a weaker but still significant p-value of $1.9\times 10^{-3}$. The inset in Fig UID4 B also shows a positive correlation, although not as strong as Ngrams, with a Spearman correlation coefficient of 0.87 ($p=1.1\times 10^{-7}$). Generally, on Twitter, regular AE verbs are also regular in BE, but the difference in regularization fraction is much greater than for Ngrams.
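As a concrete sketch of these calculations, the snippet below computes regularization fractions from regular/irregular token counts for two corpora and applies the same kinds of paired Wilcoxon signed-rank and Spearman tests; the counts and verb list are made up for the example and are not the study's data.

```python
# Illustrative sketch of the statistics discussed above: regularization fractions
# per verb for two corpora, a paired Wilcoxon signed-rank test, and a Spearman
# correlation. All counts are invented for the example.
import numpy as np
from scipy.stats import wilcoxon, spearmanr

# verb -> (regular_count, irregular_count) for each corpus
ngrams = {"burn": (120, 180), "dream": (300, 90), "learn": (210, 260),
          "spell": (95, 70), "spill": (80, 40)}
twitter = {"burn": (900, 600), "dream": (2500, 400), "learn": (1800, 1500),
           "spell": (700, 300), "spill": (650, 120)}

def regularization_fraction(counts):
    """regular / (regular + irregular) for each verb."""
    return {v: r / (r + i) for v, (r, i) in counts.items()}

f_ngrams = regularization_fraction(ngrams)
f_twitter = regularization_fraction(twitter)

verbs = sorted(f_ngrams)
x = np.array([f_ngrams[v] for v in verbs])
y = np.array([f_twitter[v] for v in verbs])

# Average of averages: each verb weighted equally, irrespective of frequency.
print("mean regularization, Ngrams:", x.mean(), "Twitter:", y.mean())

# Paired tests across verbs, as in the comparisons reported above.
stat, p_w = wilcoxon(x, y)
rho, p_s = spearmanr(x, y)
print(f"Wilcoxon p={p_w:.3g}, Spearman rho={rho:.2f} (p={p_s:.3g})")
```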
In Fig UID6 A we demonstrate the difference in regularization between AE and BE for both Ngrams and Twitter. The values in this figure for Ngrams can be thought of as, for each verb in Fig UID4 A, subtracting the value of the bottom bar from the top bar, and likewise for Twitter and Fig UID4 B. Positive numbers imply greater regularization in AE, the more common scenario. When the difference is near zero for one corpus, it is usually close to zero for the other corpus as well. However, when Ngrams shows that AE is notably more regular than BE, Twitter tends to show a much larger difference. The average difference in regularization fraction between AE and BE for Twitter is 0.21, whereas it is only 0.08 for Ngrams. Again, we find that these averages are significantly different, with a Wilcoxon signed rank p-value of $1.9\times 10^{-2}$. The inset scatter plot tells a similar story, with a cluster of points near the origin. As the difference in regularization fraction between regions increases in Ngrams, it also tends to increase in Twitter, with Spearman correlation coefficient 0.65 and p-value $1.0\times 10^{-3}$. The steep rise shows that the difference increases faster on Twitter than in Ngrams. Fig UID6 B returns to comparing Ngrams and Twitter, but now between AE and BE. For each verb, the bar chart shows the difference between the regularization fraction for Twitter and Ngrams in both AE and BE, with positive values showing that regularization for Twitter is greater. In this case the values can be thought of as subtracting the values for the bars in Fig UID4 A from the corresponding bars in Fig UID4 B. As we found for English in general, regularization is greater on Twitter than in Ngrams for AE, with an average difference of 0.04. However, for BE, regularization is greater in Ngrams than on Twitter, with an average difference in regularization fraction of 0.09. We summarize our findings in Table UID8. We found again that verbs on Twitter are more regular than in Ngrams for American English, likely for many of the same reasons that verbs on Twitter are more regular than Ngrams in general. However, we find that in British English the opposite is true: verbs on Twitter are less regular than in Ngrams. In decreasing order by average regularization fraction, we have AE Twitter, then AE Ngrams, then BE Ngrams, and finally BE Twitter. Knowing that the general trend is towards regularization BIBREF1 BIBREF0, it seems that regularization is perhaps being led by everyday speakers of American English, with American published work following suit, but with a lag. Then it may be that British English authors and editors are being influenced by American publications and the language used therein. Indeed, some studies have found a general Americanization of English across the globe BIBREF11 BIBREF12, meaning that the various varieties of English used across the world are becoming more aligned with American English. Finally, it may be that average British users of Twitter are more resistant to the change. Indeed, from the figures in the study by Gonçalves et al, one can see that the Americanization of British English is more pronounced in Ngrams than on Twitter BIBREF11, agreeing with what we have found here. In Sec UID7 we demonstrated regional differences in verb regularization by comparing BE and AE. Here we consider differences on a smaller spatial scale by quantifying regularization by county in the United States, using the dataset in row IV of Table UID2. We use methods inspired by Grieve et al to study regional variation in language BIBREF13. We only include counties that had at least 40 total tokens for the verbs under consideration.
We plot the average regularization fraction for each county in the continental US in Fig Introduction A, where counties with not enough data are colored black. To control for the skewed distribution of samples associated with county population (see below for more details), we use residuals for this portion of the analysis. After regressing with the $\log_{10}$ of data volume (total number of tokens for each county), we compute the average regularization fraction residual, which is plotted in Fig Introduction B. That is, if we let $d_i$ be the total number of tokens for verbs in tweets from county $i$, $\alpha$ and $\beta$ be the slope and intercept parameters computed from regression, and $R_i$ be the average regularization fraction for county $i$, then we compute the average regularization fraction residual for county $i$, $r_i^{\text{reg}}$, as

$r_i^{\text{reg}} = R_i - \left(\beta + \alpha \log_{10} d_i\right)$. (Eq 34)

Using the average regularization residual at the county level as input, we measure local spatial autocorrelation using the Getis-Ord $G_i^*$ z-score BIBREF14,

$G_i^* = \dfrac{\sum_j w_{ij} r_j^{\text{reg}} - \overline{r^{\text{reg}}} \sum_j w_{ij}}{\sigma \sqrt{\left[\, n \sum_j w_{ij}^2 - \left(\sum_j w_{ij}\right)^2 \right] / (n-1)}}$, (Eq 35)

where

$\sigma = \sqrt{\dfrac{\sum_j \left(r_j^{\text{reg}}\right)^2}{n} - \left(\overline{r^{\text{reg}}}\right)^2}$, (Eq 36)

$\overline{r^{\text{reg}}} = \frac{1}{n}\sum_i r_i^{\text{reg}}$, $n$ is the number of counties, and $w_{ij}$ is a weight matrix. To obtain the weight matrix used in this calculation, we first create a distance matrix $s_{ij}$, where the distance between each pair of counties is the larger of the great circle distance $s_{ij}^{\text{GC}}$, in miles, between the centers of the bounding box for each county, and 10 miles. That is,

$s_{ij} = \max\left(s_{ij}^{\text{GC}}, 10\right)$. (Eq 37)

We make the minimum value for $s_{ij}$ 10 miles to prevent a county from having too large of a weight. We then compute the weight matrix as

$w_{ij} = \dfrac{1}{\sqrt{s_{ij}}}$. (Eq 38)

Fig Introduction C shows the results for the lower 48 states, where black represents counties left out because there was not enough data. For each county, the $G_i^*$ z-score computes a local weighted sum of the residuals $r_j^{\text{reg}}$ for the surrounding counties and compares that to the expected value of that weighted sum if all the counties had exactly the average residual $\overline{r^{\text{reg}}}$ as their value, where the weighting is such that closer counties have a higher weight. Areas that are darker blue (positive z-score) belong to a cluster of counties that has higher regularization than average, and those that are darker red (negative z-score) belong to a cluster that has lower regularization than average. So Fig Introduction C shows that, in general, western counties show less regularization than average and eastern counties show more, except that the New England area is fairly neutral. As usual, the z-score gives the number of standard deviations away from the mean. For this we would do a two-tail test for significance, because we are looking for both high-value and low-value clusters. For example, a z-score greater in magnitude than 1.96 is significant at the 0.05 level. If we do a Bonferroni correction based on 3161 counties (the number included for this part of the analysis), then a z-score greater in magnitude than 4.32 is significant for a two-tail test at the $0.05/3161 \approx 1.58 \times 10^{-5}$ level. We do this same process looking at individual verbs as well. However, when looking at individual verbs, we use the regularization fraction rather than residuals, because the data skew is not as problematic; this is because the main problem with data volume comes when averaging across verbs that have different frequencies of usage, as explained below. Also, here we include counties that have at least 10 tokens. Fig gives an example map showing the $G_i^*$ z-scores for the verb dream. The maps showing local spatial autocorrelation for the complete list of verbs can be found in the Online Appendix A at
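A direct numpy transcription of Eqs. 35-38 above might look like the following sketch; the county residuals and the pairwise great-circle distance matrix are assumed to be precomputed, and names are ours.

```python
# Sketch of the local spatial autocorrelation computation from Eqs. 35-38 above.
# `residuals` holds the county-level r_i^reg values; `gc_dist` is the matrix of
# great-circle distances (miles) between county bounding-box centers.
import numpy as np

def getis_ord_z(residuals, gc_dist):
    residuals = np.asarray(residuals, dtype=float)
    gc_dist = np.asarray(gc_dist, dtype=float)
    n = len(residuals)
    s = np.maximum(gc_dist, 10.0)          # Eq. 37: floor distances at 10 miles
    w = 1.0 / np.sqrt(s)                    # Eq. 38: inverse square-root weights
    r_bar = residuals.mean()
    sigma = np.sqrt((residuals ** 2).mean() - r_bar ** 2)   # Eq. 36

    z = np.empty(n)
    for i in range(n):
        wi = w[i]
        num = wi @ residuals - r_bar * wi.sum()
        den = sigma * np.sqrt((n * (wi ** 2).sum() - wi.sum() ** 2) / (n - 1))
        z[i] = num / den                    # Eq. 35: G_i^* z-score for county i
    return z

# Two-tailed significance with a Bonferroni correction over the counties:
# abs(z) > 4.32 corresponds to p < 0.05 / 3161 in the analysis described above.
```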
For many of the counties in the US there is a small sample of Twitter data. We restrict our analysis to counties with a total token count of at least 40 for the verbs we consider. Even for the counties meeting this criterion, the volume of data varies, leading to drastically different sample sizes across counties. More common verbs tend to have popular irregular forms (e.g. found and won) and less common verbs tend to be regular (e.g. blessed and climbed) BIBREF0. As a result, samples taken from populous counties are more likely to contain less common verbs. Our average regularization is an average of averages, resulting in an underlying trend toward higher rates for more populous counties due to the increased presence of rarer regular verbs. Fig UID17 demonstrates the relationship between data volume and regularization. To explore the connection further, we perform a synthetic experiment as follows. To simulate sampling from counties with varying population sizes, we first combine all verb token counts, using the Twitter dataset from row I of Table UID2, into a single collection. We then randomly sample a synthetic county worth of tokens from this collection. For a set of 1000 logarithmically spaced county sizes, we randomly draw five synthetic collections of verbs (each is a blue circle in Fig UID17). For each sample we compute the average regularization fraction, as we did for US counties. The goal is to infer the existence of any spurious trend introduced by the sampling of sparsely observed counties. The resulting simulated curve is comparable to the trend observed for actual US counties. As the data volume increases, the simulated version converges on roughly 0.17, which is the average regularization fraction for all of Twitter. We also explored correlations between verb regularization and various demographic variables. Fig UID17 showed a strong relationship between data volume and verb regularization. It has been shown elsewhere that tweet density positively correlates with population density BIBREF15, and population size is correlated with many demographic variables. As a result, we use partial correlations as an attempt to control for the likely confounding effect of data volume. For each demographic variable, we compute the regression line between the $\log_{10}$ of data volume $d_i$ and regularization, and compute the residuals as in Eq Methods and results. Then, if the demographic variable is an Estimate variable, where the unit is number of people, we similarly compute the regression line between the $\log_{10}$ of data volume and the $\log_{10}$ of the demographic variable, and compute the residuals $r_i^{\text{dem}}$ as

$r_i^{\text{dem}} = \log_{10}(D_i) - \left(\delta + \gamma \log_{10} d_i\right)$, (Eq 42)

where $D_i$ is the value of the demographic variable for county $i$, and $\gamma$ and $\delta$ are the slope and intercept parameters calculated during regression. Otherwise, the demographic variable is a Percent variable, with units of percentage, and we compute the regression line between the $\log_{10}$ of data volume and the demographic variable, and compute residuals as

$r_i^{\text{dem}} = D_i - \left(\delta + \gamma \log_{10} d_i\right)$. (Eq 44)

The correlation between the residuals $r_i^{\text{reg}}$ and $r_i^{\text{dem}}$ gives the partial correlation between average regularization and the demographic variable. Our findings suggest that data volume is a confounding variable in at least some of the cases, because after controlling for data volume there is generally a large decrease in the correlation between verb regularization and the demographic variables. The largest-in-magnitude Pearson correlation between verb regularization and a demographic variable is 0.68, for the variable Estimate SCHOOL ENROLLMENT Population 3 years and over enrolled in school, whereas the largest-in-magnitude partial correlation is only 0.18, for the variable Percent OCCUPATION Civilian employed population 16 years and over Management business science and arts occupations. Table UID20 lists the 10 demographic variables with largest-in-magnitude partial correlation. Fig UID18 shows an example for one of the demographic variables, the Percent variable with largest simple correlation. Fig UID18 A is the scatter plot of the demographic variable with average regularization, which corresponds to simple correlation; Fig UID18 B is the scatter plot of the residuals $r_i^{\text{dem}}$ and $r_i^{\text{reg}}$ after regressing with the $\log_{10}$ of data volume, and corresponds with partial correlation. We can see that there is a strong simple correlation (0.52), but after accounting for data volume that correlation largely vanishes (0.15). Similar plots for all of the demographic variables can be
found in the Online Appendix B at Our findings suggest that by and large verb regularization patterns are similar when computed with Ngrams and Twitter However for some verbs the extent of regularization can be quite different If social media is an indicator of changing patterns in language use Ngrams data ought to lag with a timescale not yet observable due to the recency of Twitter data Very reasonably Ngrams data may not yet be showing some of the regularization that is happening in everyday English We also found differences in verb regularization between American and British English but found that this difference is much larger on Twitter than Ngrams Overall and in American English specifically verbs are more regular on Twitter than in Ngrams but the opposite is true for British English In the US we also find variation in average verb regularization across counties Lastly we showed that there are significant partial correlations between verb regularization and various demographic variables but they tend to be weak Our findings do not account for the possible effects of spell checkers Some people when tweeting may be using a spell checker to edit their tweet If anything this will likely skew the language on Twitter towards the correct form used in edited textual sources For example in Fig UID3 we see that stand is irregular for both Ngrams and Twitter and likely most spell checkers would consider the regular standed a mistake but we see that stand is still over 100 times more regular on Twitter than in Ngrams So the differences between edited language and everyday language may be even larger than what we find here suggests Future work should look into the effects of spell checkers Our study explored the idea that edited written language may not fully represent the language spoken by average speakers However tweets do not of course fully represent the English speaking population Even amongst users our sampling is not uniform as it reflects the frequency with which different users tweet 1 Furthermore the language used on Twitter is not an unbiased sample of language even for people who use it frequently The way someone spells a word and the way someone pronounces a word may be different especially for example the verbs with an irregular form ending in t because t and ed are close phonetically However the fact that we found differences between the language of Ngrams and the language of Twitter suggests that the true language of everyday people is not fully represented by edited written language We recommend that future studies should investigate speech data We are thankful for the helpful reviews and discussions of earlier versions of this work by A Albright and J Bagrow and for help with initial data collection from L Gray PSD CMD were supported by NSF Grant No IIS1447634 and TJG PSD CMD were supported by a gift from MassMutual ccccr Regular 2cIrregular Verb Preterit Past Participle Preterit Past Participle Token Count continued Regular 2cIrregular Verb Preterit Past Participle Preterit Past Participle Token Count 5rContinued on next page A tabulation of all verb forms used in this study The Token Count column gives the sum of all the tokens for the past tense forms of the verb both regular and irregular in our Twitter dataset see row I of Table UID2 in Sec SECREF2 abide abided abode abode 146566 alight alighted alit alit 56306 arise arised arose arisen 164134 awake awaked awoke awoken awoke 423359 become becomed became become 50664026 begin beginned began begun 5942572 bend bended bent 
bent 4777019 beseech beseeched besought besought 3390 bleed bleeded bled bled 252225 blend blended blent blent 436029 bless blessed blest blest 22547387 blow blowed blew blown 9155246 break breaked broke broken 54506810 breed breeded bred bred 1040854 bring bringed brought brought 15303318 build builded built built 8521553 burn burned burnt burnt 7457942 buy buyed bought bought 24841526 catch catched caught caught 24891188 choose choosed chose chosen 10290205 clap clapped clapt clapt 405837 climb climbed clomb clom clomben 635122 cling clinged clung clung 49742 creep creeped crept crept 698405 deal dealed dealt dealt 1181974 dig digged dug dug 941656 dream dreamed dreamt dreamt 2794060 drink drinked drank drunk drank 37295703 drive drived drove driven 5745497 dwell dwelled dwelt dwelt 25725 eat eated ate eaten 25084758 fall falled fell fallen 25224815 fight fighted fought fought 3625297 find finded found found 80709195 flee fleed fled fled 405295 freeze freezed froze frozen 7454847 get getted got got gotten 500591203 give gived gave given 58697198 grow growed grew grown 17951273 hang hanged hung hung 3991956 hear heared heard heard 52605822 hide hided hidded hid hid hidden 7829276 hold holded held held 10080725 inlay inlayed inlaid inlaid 44811 keep keeped kept kept 11785131 kneel kneeled knelt knelt 83765 know knowed knew known 58175701 lay layed laid laid 5828898 leap leaped leapt leapt 91620 learn learned learnt learnt 18134586 lose losed lost lost 72695892 mean meaned meant meant 26814977 pay payed paid paid 21150031 plead pleaded pled pled 193553 ride rided rode ridden 1710109 seek seeked sought sought 888822 sell selled sold sold 14251612 send sended sent sent 26265441 shake shaked shook shaken 3223316 shoe shoed shod shod 47780 shrink shrinked shrank shrunk shrunk shrunken 296188 sing singed sang sung sung 6767707 sink sinked sank sunk sunk sunken 927419 slay slayed slew slain 2153981 sleep sleeped slept slept 9252446 slide slided slid slid 530659 sling slinged slung slung 38320 slink slinked slunk slunk 5772 smell smelled smelt smelt 1089814 smite smitted smited smote smitten smote 176768 sneak sneaked snuck snuck 797337 speak speaked spoke spoken 8502050 speed speeded sped sped 216062 spell spelled spelt spelt 3812137 spend spended spent spent 17603781 spill spilled spilt spilt 1627331 spin spinned spun spun 342022 spoil spoiled spoilt spoilt 3891576 spring springed sprang sprung sprung 626400 stand standed stood stood 3942812 steal stealed stole stolen 11884934 sting stinged stung stung 391053 stink stinked stank stunk stunk 1556197 stride strided strode stridden 17811 strike striked struck struck stricken 2167165 strip stripped stript stript 837967 strive strived strove striven 33705 swear sweared swore sworn 1902662 sweep sweeped swept swept 931245 swim swimmed swam swum 356842 swing swinged swung swung 295360 take taked took taken 83457822 teach teached taught taught 9379039 tear teared tore torn 4238865 tell telled told told 71562969 thrive thrived throve thriven 43612 throw throwed threw thrown 13197226 tread treaded trod trodden 56624 vex vexed vext vext 139411 wake waked woke woken 30796918 wear weared wore worn 8552191 weep weeped wept wept 200690 win winned won won 45276202 wind winded wound wound 1340267 wring wringed wrung wrung 29141 write writed wrote written writ wrote 23926025 To study regularization by county we extracted location information from the userprovided location information which was entered as free text in the users biographical profile To do this for 
each tweet we first checked if the location field was populated with text If so we then split the text on commas and checked whether there were two tokens separated by a comma If so we made the assumption that it might be of the form city state Then we used a python package called uszipcode which can be found here pythonhostedorguszipcode We used the packages method to search by city and state If the package returned a location match we used the returned latitude and longitude to determine which county the detected city belonged to The package allows for fuzzy matching meaning the city and state do not have to be spelled correctly and it allows for the state to be fully spelled out or be an abbreviation In the source code of the package there was a hard coded confidence level of 70 for the fuzzy matching We modified the source code so that the confidence level was an input to the method and running tests found we were satisfied with a confidence level of 91 We checked by hand the matches of 1000 tweets that this method returned a match for 100 from each year in the dataset and found the only potential error in these matches was when the user typed in Long Island NY or a similar variant For this the package returned Long Island City NY which is on Long Island but there are multiple counties on Long Island so the user may actually live in a different county None of the other 1000 tweets were inappropriately or ambiguously assigned </s>
<s>[INST] HuggingFace's Transformers: State-of-the-art Natural Language Processing [/INST] In the past 18 months advances on many Natural Language Processing NLP tasks have been dominated by deep learning models and more specifically the use of Transfer Learning methods BIBREF0 in which a deep neural network language model is pretrained on a webscale unlabelled text dataset with a generalpurpose training objective before being finetuned on various downstream tasks Following noticeable improvements using Long ShortTerm Memory LSTM architectures BIBREF1 BIBREF2 a series of works combining Transfer Learning methods with largescale Transformer architectures BIBREF3 has repeatedly advanced the stateoftheart on NLP tasks ranging from text classification BIBREF4 language understanding BIBREF5 BIBREF6 BIBREF7 machine translation BIBREF8 and zeroshort language generation BIBREF9 up to coreference resolution BIBREF10 and commonsense inference BIBREF11 While this approach has shown impressive improvements on benchmarks and evaluation metrics the exponential increase in the size of the pretraining datasets as well as the model sizes BIBREF5 BIBREF12 has made it both difficult and costly for researchers and practitioners with limited computational resources to benefit from these models For instance RoBERTa BIBREF5 was trained on 160 GB of text using 1024 32GB V100 On AmazonWebServices cloud computing AWS such a pretraining would cost approximately 100K USD Contrary to this trend the booming research in Machine Learning in general and Natural Language Processing in particular is arguably explained significantly by a strong focus on knowledge sharing and largescale community efforts resulting in the development of standard libraries an increased availability of published research code and strong incentives to share stateoftheart pretrained models The combination of these factors has lead researchers to reproduce previous results more easily investigate current approaches and test hypotheses without having to redevelop them first and focus their efforts on formulating and testing new hypotheses To bring Transfer Learning methods and largescale pretrained Transformers back into the realm of these best practices the authors and the community of contributors have developed Transformers a library for stateofthe art Natural Language Processing with Transfer Learning models Transformers addresses several key challenges Transformers gathers in a single place stateofthe art architectures for both Natural Language Understanding NLU and Natural Language Generation NLG with model code and a diversity of pretrained weights This allows a form of trainingcomputationcostsharing so that lowresource users can reuse pretrained models without having to train them from scratch These models are accessed through a simple and unified API that follows a classic NLP pipeline setting up configuration processing data with a tokenizer and encoder and using a model either for training adaptation in particular or inference The model implementations provided in the library follow the original computation graphs and are tested to ensure they match the original author implementations performances on various benchmarks Transformers was designed with two main goals in mind i be as easy and fast to use as possible and ii provide stateoftheart models with performances as close as possible to the originally reported results To ensure a low entry barrier the number of userfacing abstractions to learn was strongly limited and reduced to just 
three standard classes configuration models and tokenizers which all can be initialized in a simple and unified way by using a common frompretrained instantiation method There is a growing field of study sometimes referred as BERTology from BERT BIBREF13 concerned with investigating the inner working of largescale pretrained models and trying to build a science on top of these empirical results Some examples include BIBREF14 BIBREF15 BIBREF16 Transformers aims at facilitating and increasing the scope of these studies by i giving easy access to the inner representations of these models notably the hidden states the attention weights or heads importance as defined in BIBREF15 and ii providing different models in a unified API to prevent overfitting to a specific architecture and set of pretrained weights Moreover the unified frontend of the library makes it easy to compare the performances of several architectures on a common language understanding benchmark Transformers notably includes preprocessors and finetuning scripts for GLUE BIBREF6 SuperGLUE BIBREF7 and SQuAD11 BIBREF17 Transformers seeks a balance between sticking to the original authors codebase for reliability and providing clear and readable implementations featuring best practices in training deep neural networks so that researchers can seamlessly use the codebase to explore new hypothesis derived from these models To accommodate a large community of practitioners and researchers the library is deeply compatible with and actually makes compatible two major deep learning frameworks PyTorch BIBREF18 and TensorFlow from release 20 BIBREF19 Another essential question is how to make these advances in research available to a wider audience especially in the industry Transformers also takes steps towards a smoother transition from research to production The provided models support TorchScript a way to create serializable and optimizable models from PyTorch code and features production code and integration with the TensorFlow Extended framework The development of the Transformers originally steamed from opensourcing internals tools used at HuggingFace but has seen a huge growth in scope over its ten months of existence as reflected by the successive changes of name of the library from pytorchpretrainedbert to pytorchtransformers to finally Transformers A fastgrowing and active community of researchers and practitioners has gathered around Transformers The library has quickly become used both in research and in the industry at the moment more than 200 research papers report using the library Transformers is also included either as a dependency or with a wrapper in several popular NLP frameworks such as Spacy BIBREF20 AllenNLP BIBREF21 or Flair BIBREF22 Transformers is an ongoing effort maintained by the team of engineers and research scientists at HuggingFace with support from a vibrant community of more than 120 external contributors We are committed to the twin efforts of developing the library and fostering positive interaction among its community members with the ambition of creating the standard library for modern deep learning NLP Transformers is released under the Apache 20 license and is available through pip or from source on GitHub Detailed documentation along with onboarding tutorials are available on HuggingFaces website Transformers has been designed around a unified frontend for all the models parameters and configurations tokenization and model inference These steps reflect the recurring questions that arise when building 
an NLP pipeline defining the model architecture processing the text data and finally training the model and performing inference in production In the following section well give an overview of the three base components of the library configuration model and tokenization classes All of the components are compatible with PyTorch and TensorFlow starting 20 For complete details we refer the reader to the documentation available on httpshuggingfacecotransformers All the models follow the same philosophy of abstraction enabling a unified API in the library Configuration A configuration class instance usually inheriting from a base class PretrainedConfig stores the model and tokenizer parameters such as the vocabulary size the hidden dimensions dropout rate etc This configuration object can be saved and loaded for reproducibility or simply modified for architecture search The configuration defines the architecture of the model but also architecture optimizations like the heads to prune Configurations are agnostic to the deep learning framework used Tokenizers A Tokenizer class inheriting from a base class PreTrainedTokenizer is available for each model This class stores the vocabulary tokentoindex map for the corresponding model and handles the encoding and decoding of input sequences according to the models tokenizationspecific process ex BytePairEncoding SentencePiece etc Tokenizers are easily modifiable to add userselected tokens special tokens like classification or separation tokens or resize the vocabulary Furthermore Tokenizers implement additional useful features for the users by offering values to be used with a model these range from token type indices in the case of sequence classification to maximum length sequence truncating taking into account the added modelspecific special tokens most pretrained Transformers models have a maximum sequence length they can handle defined during their pretraining step Tokenizers can be instantiated from existing configurations available through Transformers originating from the pretrained models or created more generally by the user from userspecifications Model All models follow the same hierarchy of abstraction a base class implements the models computation graph from encoding projection on the embedding matrix through the series of selfattention layers and up to the last layer hidden states The base class is specific to each model and closely follows the original implementation allowing users to dissect the inner workings of each individual architecture Additional wrapper classes are built on top of the base class adding a specific head on top of the base model hidden states Examples of these heads are language modeling or sequence classification heads These classes follow similar naming pattern XXXForSequenceClassification or XXXForMaskedLM where XXX is the name of the model and can be used for adaptation finetuning or pretraining All models are available both in PyTorch and TensorFlow starting 20 and offer deep interoperability between both frameworks For instance a model trained in one of frameworks can be saved on drive for the standard library serialization practice and then be reloaded from the saved files in the other framework seamlessly making it particularly easy to switch from one framework to the other one along the model lifetime training serving etc Auto classes In many cases the architecture to use can be automatically guessed from the shortcut name of the pretrained weights eg bertbasecased A set of Auto classes provides a unified 
API that enable very fast switching between different modelsconfigstokenizers There are a total of four highlevel abstractions referenced as Auto classes AutoConfig AutoTokenizer AutoModel for PyTorch and TFAutoModel for TensorFlow These classes automatically instantiate the right configuration tokenizer or model class instance from the name of the pretrained checkpoints Optimizer The library provides a few optimization utilities as subclasses of PyTorch torchoptimOptimizer which can be used when training the models The additional optimizer currently available is the Adam optimizer BIBREF23 with an additional weight decay fix also known as AdamW BIBREF24 Scheduler Additional learning rate schedulers are also provided as subclasses of PyTorch torchoptimlrschedulerLambdaLR offering various schedules used for transfer learning and transformers models with customizable options including warmup schedules which are relevant when training with Adam In this section we present some of the major tools and examples provided in the library to experiment on a range of downstream Natural Language Understanding and Natural Language Generation tasks The language models provided in Transformers are pretrained with a general purpose training objective usually a variant of language modeling like standard sometime called causal language modeling as used for instance in BIBREF9 or masked language modeling as introduced in BIBREF13 A pretrained model is often evaluated using widerange language understanding benchmarks Transformers includes several tools and scripts to evaluate models on GLUE BIBREF6 and SuperGLUE BIBREF7 These two benchmarks gather a variety of datasets to evaluate natural language understanding systems Details of the datasets can be found in the Appendix on page SECREF7 Regarding the machine comprehension tasks the library feature evaluations on SQuAD11 BIBREF17 and SQuAD20 BIBREF25 Others currentlysupported benchmarks include SWAG BIBREF26 RACE BIBREF27 and ARC BIBREF28 Finetuning a language model on a downstream text corpus usually leads to significant gains for tasks on this corpus in particular when the domain is different domain adaptation It also significantly reduces the amount of training data required for finetuning on a target task in the target domain Transformers provides simple scripts to finetune models on custom text datasets with the option to add or remove tokens from the vocabulary and several other adaptability features Write with Transformer Because Natural Language Processing does not have to be serious and boring the generative capacities of autoregressive language models available in Transformers are showcased in an intuitive and playful manner Built by the authors on top of Transformers Write with Transformer is an interactive interface that leverages the generative capabilities of pretrained architectures like GPT GPT2 and XLNet to suggest text like an autocompletion plugin Generating samples is also often used to qualitatively and subjectively evaluate the generation quality of language models BIBREF9 Given the impact of the decoding algorithm topK sampling beamsearch etc on generation quality BIBREF29 Write with Transformer offers various options to dynamically tweak the decoding algorithm and investigate the resulting samples from the model Conversational AI HuggingFace has been using Transfer Learning with Transformerbased models for endtoend Natural language understanding and text generation in its conversational agent Talking Dog The company also demonstrated in 
fall 2018 that this approach can be used to reach state-of-the-art performance on academic benchmarks, topping by a significant margin the automatic metrics leaderboard of the Conversational Intelligence Challenge 2, held at the Thirty-second Annual Conference on Neural Information Processing Systems (NIPS 2018). The approach used to reach these performances is described in BIBREF30, BIBREF31, and the code and pretrained models based on the Transformers library are available online. Using in production: To facilitate the transition from research to production, all the models in the library are compatible with TorchScript, an intermediate representation of a PyTorch model that can then be run either in Python in a more efficient way, or in a high-performance environment such as C++. Finetuned models can thus be exported to production-friendly environments. Optimizing large machine learning models for production is an ongoing effort in the community, and there are many current engineering efforts towards that goal. The distillation of large models (eg DistilBERT BIBREF32) is one of the most promising directions. It lets users of Transformers run more efficient versions of the models, even with strong latency constraints and on inexpensive CPU servers. We also convert Transformers models to Core ML weights that are suitable to be embedded inside a mobile application to enable on-the-edge machine learning. Code is also made available. Community: Many libraries in NLP and Machine Learning have been created on top of Transformers, or have integrated Transformers as a package dependency or through wrappers. At the time of writing, the authors have been mostly aware of FastBert, FARM, flair BIBREF22 BIBREF33, AllenNLP BIBREF21 and PyText, but there are likely more interesting developments to be found, from research and internal projects to production packages. Here is a list of architectures for which reference implementations and pretrained weights are currently provided in Transformers. These models fall into two main categories: generative models (GPT, GPT2, Transformer-XL, XLNet, XLM) and models for language understanding (Bert, DistilBert, RoBERTa, XLM). BERT BIBREF13 is a bidirectional Transformer-based encoder pretrained with a linear combination of masked language modeling and next sentence prediction objectives. RoBERTa BIBREF5 is a replication study of BERT which showed that carefully tuning hyperparameters and training data size leads to significantly improved results on language understanding. DistilBERT BIBREF32 is a smaller, faster, cheaper and lighter version of BERT pretrained with knowledge distillation. GPT BIBREF34 and GPT2 BIBREF9 are two large autoregressive language models pretrained with language modeling. GPT2 showcased zero-shot task transfer capabilities on various tasks such as machine translation or reading comprehension. Transformer-XL BIBREF35 introduces architectural modifications enabling Transformers to learn dependency beyond a fixed length without disrupting temporal coherence, via segment-level recurrence and relative positional encoding schemes. XLNet BIBREF4 builds upon Transformer-XL and proposes an autoregressive pretraining scheme combining BERT's bidirectional context flow with autoregressive language modeling, by maximizing the expected likelihood over permutations of the word sequence. XLM BIBREF8 shows the effectiveness of pretrained representations for cross-lingual language modeling (both on monolingual data and parallel data) and cross-lingual language understanding. We systematically release the model with the corresponding
pretraining heads (language modeling, next sentence prediction for BERT) for adaptation using the pretraining objectives. Some models finetuned on downstream tasks such as SQuAD 1.1 are also available. Overall, more than 30 pretrained weights are provided through the library, including more than 10 models pretrained in languages other than English. Some of these non-English pretrained models are multilingual models, with two of them being trained on more than 100 languages. The design of Transformers was inspired by earlier libraries on transformers and Natural Language Processing. More precisely, organizing the modules around three main components (configuration, tokenizers and models) was inspired by the design of the tensor2tensor library BIBREF36 and the original code repository of Bert BIBREF13 from Google Research, while the concept of providing easy caching for pretrained models stemmed from features of the AllenNLP library BIBREF21, open-sourced by the Allen Institute for Artificial Intelligence (AI2). Works related to the Transformers library can be generally organized along three directions, at the intersection of which stands the present library. The first direction includes Natural Language Processing libraries such as AllenNLP BIBREF21, SpaCy BIBREF20, flair BIBREF22 BIBREF33 or PyText. These libraries precede Transformers and target somewhat different use cases, for instance those with a particular focus on research for AllenNLP, or a strong attention to production constraints (in particular with a carefully tuned balance between speed and performance) for SpaCy. The previously mentioned libraries have now been provided with integrations for Transformers, through a direct package dependency for AllenNLP, flair or PyText, or through a wrapper called spacy-transformers for SpaCy. The second direction concerns lower-level deep-learning frameworks like PyTorch BIBREF18 and TensorFlow BIBREF19, which have both been extended with model sharing capabilities or hubs, respectively called TensorFlow Hub and PyTorch Hub. These hubs are more general, and while they offer ways to share models, they differ from the present library in several ways. In particular, they provide neither a unified API across models nor standardized ways to access the internals of the models. Targeting a more general machine-learning community, these hubs lack the NLP-specific user-facing features provided by Transformers, like tokenizers, dedicated processing scripts for common downstream tasks, and sensible default hyperparameters for high performance on a range of language understanding and generation tasks. The last direction is related to machine learning research frameworks that are specifically used to test, develop and train architectures like Transformers. Typical examples are the tensor2tensor library BIBREF36, fairseq BIBREF37 and Megatron-LM. These libraries are usually not provided with the user-facing features that allow easy download, caching and finetuning of the models, as well as seamless transition to production. We have presented the design and the main components of Transformers, a library for state-of-the-art NLP. Its capabilities, performance and unified API make it easy for both practitioners and researchers to access various large-scale language models, build and experiment on top of them, and use them in downstream tasks with state-of-the-art performance. The library has gained significant organic traction since its original release and has become widely adopted among researchers and practitioners, fostering an active community of contributors and an ecosystem of libraries
building on top of the provided tools We are committed to supporting this community and making recent developments in transfer learning for NLP both accessible and easy to use while maintaining high standards of software engineering and machine learning engineering The datasets in GLUE are CoLA BIBREF54 Stanford Sentiment Treebank SST BIBREF53 Microsoft Research Paragraph Corpus MRPC BIBREF44 Semantic Textual Similarity Benchmark STS BIBREF38 Quora Question Pairs QQP BIBREF46 MultiGenre NLI MNLI BIBREF55 Question NLI QNLI BIBREF17 Recognizing Textual Entailment RTE BIBREF42 BIBREF39 BIBREF45 BIBREF40 and Winograd NLI WNLI BIBREF48 The datasets in SuperGLUE are Boolean Questions BoolQ BIBREF41 CommitmentBank CB BIBREF43 Choice of Plausible Alternatives COPA BIBREF51 MultiSentence Reading Comprehension MultiRC BIBREF47 Reading Comprehension with Commonsense Reasoning Dataset ReCoRD BIBREF56 WordinContext WiC BIBREF49 Winograd Schema Challenge WSC BIBREF52 Diverse Natural Language Inference Collection DNC BIBREF50 Recognizing Textual Entailment RTE BIBREF42 BIBREF39 BIBREF45 BIBREF40 and Winograd NLI WNLI BIBREF48 </s>
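As a brief illustration of the Auto classes, optimizer and scheduler utilities described above, the following is a minimal sketch of a single training step. It assumes a recent version of the transformers and torch packages and a standard bert-base-uncased checkpoint; the learning rate, warmup and step counts are illustrative values chosen for the example rather than library defaults.

import torch
from transformers import AutoConfig, AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup

# Instantiate configuration, tokenizer and model directly from a checkpoint name
config = AutoConfig.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# AdamW (Adam with the weight decay fix) and a linear warmup schedule
optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)

# One illustrative forward/backward pass with a placeholder objective
input_ids = tokenizer.encode("Transformers provides a unified API", return_tensors="pt")
hidden_states = model(input_ids)[0]      # last layer hidden states
loss = hidden_states.mean()              # dummy loss, only to show the update step
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()

Task-specific Auto classes follow the same from_pretrained pattern, so switching checkpoints amounts to changing the name string.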
<s>[INST] A framework for streamlined statistical prediction using topic models [/INST] For the past 20 years, topic models have been used as a means of dimension reduction on text data, in order to ascertain underlying themes, or topics, from documents. These probabilistic models have frequently been applied to machine learning problems, such as web spam filtering BIBREF0, database sorting BIBREF1 and trend detection BIBREF2. This paper develops a methodology for incorporating topic models into traditional statistical regression frameworks, such as those used in the Social Sciences and Humanities, to make predictions. Statistical regression is a supervised method; however, it should be noted the majority of topic models are themselves unsupervised. When using text data for prediction, we are often confronted with the problem of condensing the data into a manageable form which still retains the necessary information contained in the text. Methods such as using individual words as predictors, or n-grams, while conceptually quite simple, have a tendency to be extremely computationally expensive, with tens of thousands of predictors in a model. Except on extremely large corpora, this inevitably leads to overfitting. As such, methods that allow text to be summarised by a handful of semantically meaningful predictors, like topic models, give a means to use large amounts of text data more effectively within a supervised predictive context. This paper outlines a statistical framework for predictive topic modelling in a regression context. First, we discuss the implementation of a relatively simple and widely used topic model, latent Dirichlet allocation (LDA) BIBREF3, as a preprocessing step in a regression model. We then compare this model to an equivalent topic model that incorporates supervised learning, supervised LDA (sLDA) BIBREF4. Using topic models in a predictive framework necessitates estimating topic proportions for new documents; however, retraining the LDA model to find these is computationally expensive. Hence we derive an efficient likelihood-based method for estimating topic proportions for previously unseen documents without the need to retrain. Given these two models hold the bag of words assumption, ie they assume independence between words in a document, we also investigate the effect of introducing language structure to the model through the hidden Markov topic model (HMTM) BIBREF5. The implementation of these three topic models as a dimension reduction step for a regression model provides a framework for the implementation of further topic models, dependent on the needs of the corpus and response in question. The following definitions are used when considering topic models.
Vocabulary ($V$): a set of $v$ unique elements (generally words) from which our text is composed.
Topic ($\phi$): a probability distribution over the vocabulary. That is, for word $i$ in the vocabulary, a probability $p_i \in [0,1]$ is assigned of that word appearing, given the topic, with $\sum_{i=1}^{v} p_i = 1$. In general, there are a fixed number $k$ of topics, $\phi = \lbrace \phi_1, \ldots, \phi_k \rbrace$.
Document ($\mathbf{w}$): a collection of $n_j$ units, or words, from the vocabulary. Depending on the topic model, the order of these words within the document may or may not matter.
Corpus ($\mathbf{D}$): a collection of $m$ documents over which the topic model is applied. That is, $\mathbf{D} = \lbrace \mathbf{w}_1, \ldots, \mathbf{w}_m \rbrace$, each with length $n_j$, $j = 1, 2, \ldots, m$.
Topic proportion ($\theta_j$): a distribution of topics over the document $j$. A corpus will then have an $m \times k$ matrix $\theta$, where each row $j = 1, 2, \ldots, m$ corresponds to the distribution
of topics over document $j$. Latent Dirichlet allocation (LDA) BIBREF3, due to its simplicity and effectiveness, continues to be the basis for many topic models today. When considering topic regression, we take LDA as our baseline model, ie we measure all subsequent models against the performance of the LDA regression model. LDA is an unsupervised process that assumes both topics and topic proportions are drawn from Dirichlet distributions. One reason for its simplicity is that it makes the bag of words assumption. LDA assumes the following generative process when generating documents.
1. For each topic $l = 1, 2, \ldots, k$, generate the topic $\phi_l \sim \textrm{Dir}(\beta)$.
2. For each document $j = 1, 2, \ldots, m$:
(a) let $n_j \sim \textrm{Poisson}(\xi)$ be the length of document $j$;
(b) choose the topic proportions $\theta_j \sim \textrm{Dir}(\alpha)$;
(c) for each word $i = 1, 2, \ldots, n_j$, choose the topic assignment $z_{ji} \sim \textrm{Multi}(\theta_j)$ and choose a word $w_{ji} \sim \textrm{Multi}(\phi_{z_{ji}})$;
(d) create the document.
Here $\alpha$ (length $k$) and $\beta$ (length $v$) are hyperparameters of the distributions of the $\theta_j$ and $\phi_l$ respectively. When topic modelling, we are generally interested in inferring topic proportions $\theta = \lbrace \theta_1, \ldots, \theta_m \rbrace$ and topics $\phi$ themselves, given the corpus $\mathbf{D}$. That is, we wish to find
P\left(\theta, \phi \mid \mathbf{D}, \alpha, \beta \right) = \frac{P\left(\theta, \phi, \mathbf{D} \mid \alpha, \beta \right)}{P\left(\mathbf{D} \mid \alpha, \beta \right)}
The denominator $P\left(\mathbf{D} \mid \alpha, \beta \right)$, the probability of the corpus, is understandably generally intractable to compute. For the purposes of this paper we use collapsed Gibbs sampling, as outlined in BIBREF6, as an approximate method for finding the LDA model given the corpus. Given an LDA model on a corpus with some corresponding response variable, we use the topic proportions generated as predictors in a regression model. More specifically, we use the topic proportions $\theta$ as the predictors, as the amount of a document belonging to each topic may be indicative of its response. When applying LDA as a preprocessing step to a regression model, we must also bear in mind the number of topics $k$ we choose for the LDA model. While this number is assumed to be fixed in advance, there are various measures for determining the number that best fits the corpus, such as perplexity BIBREF3 and the log likelihood measure outlined in BIBREF6. However, given we are inferring this topic model with a specific purpose in mind, it would be prudent to include this information in the decision making process. For that reason, we choose the best number of topics $k$ to be the number that reduces the cross validation prediction error (CVPE) BIBREF7 of the corresponding LDA regression model, found from $K$-fold cross validation of the model on the corpus. The CVPE is here defined to be
\textrm{CVPE}_K = \sum\limits_{i=1}^{K} \frac{m_i}{m} \textrm{MSE}_i
where $K$ is the number of folds, $m_i$ is the number of documents in the $i$th fold, and $m$ the total number of documents in the corpus. The mean square error for the $i$th fold, denoted by $\textrm{MSE}_i$, is defined as
\textrm{MSE}_i = \sum\limits_{j \in C_i} \frac{1}{m_i} \left( y_j - \hat{y}_j \right)^2
where $\hat{y}_j$ is the model estimate of response $y_j$ for all documents in the set $C_i$, the $i$th fold. It follows that the better a model performs, the smaller the MSE and thus the CVPE. While we choose the best number of topics based on the information in the regression model, it should be noted that LDA is still unsupervised, and that the topics have not been generated with the response in mind. When it comes to prediction, we generally have a corpus for which we find our regression model, and use this model to predict the response of new documents that are not in the original corpus. Because our regression model requires us to know $\theta_j$, the topic proportion for any new document $j$, we have two options: either the topic model can be retrained with the new document added to the corpus, and the regression model retrained with the new topics on the old documents, or the topic proportions can be found based on the existing topic model. For both efficiency's sake (ie to avoid retraining the model for every prediction) and for the sake of true prediction, the second option is preferable. Particularly in cross validation, it is necessary to have a completely distinct training and test set of data; in retraining a topic model with new documents, we do not have a clear distinction between the two sets. BIBREF3 outline a procedure for estimating the topic proportions of a held-out document; however, this procedure follows a posterior approach that requires variationally inferring the posterior parameters, which are then used to approximate the expected number of words belonging to each topic as an estimate for $\theta_j$. We propose here a likelihood-based approach to estimation of topic proportions of new documents, by treating the problem as a case of maximum likelihood estimation. That is, we want to find $\hat{\theta}_j$, the estimate of $\theta_j$ that maximises the likelihood of document $j$ occurring given our existing topic model. Therefore, we aim to maximise
L(\theta_j) = f(\mathbf{w}_j \mid \theta_j) = f(w_{j1}, \ldots, w_{jn_j} \mid \theta_j)
where $w_{j1}, \ldots, w_{jn_j}$ are the words in document $j$. As LDA is a bag of words model, we are able to express this as
L(\theta_j) = \prod\limits_{i=1}^{n_j} f(w_{ji} \mid \theta_j)
The law of total probability gives
L(\theta_j) = \prod\limits_{i=1}^{n_j} \sum\limits_{l=1}^{k} f(w_{ji} \mid z_{ji} = l, \theta_j) \, f(z_{ji} = l \mid \theta_j)
where $z_{ji}$ is the topic assignment for the $i$th word in document $j$. However, as the choice of word $w_{ji}$ is independent of the topic proportions $\theta_j$ given its topic assignment $z_{ji}$, we can write
L(\theta_j) = \prod\limits_{i=1}^{n_j} \sum\limits_{l=1}^{k} f(w_{ji} \mid z_{ji} = l) \, f(z_{ji} = l \mid \theta_j)
The likelihood is now expressed as the products of the topic proportions and the topics themselves
L(\theta_j) = \prod\limits_{i=1}^{n_j} \sum\limits_{l=1}^{k} \phi_{l, w_{ji}} \, \theta_{jl}
If we express the document as a set of word counts $\mathbf{N} = \lbrace N_1, \ldots, N_v \rbrace$, where $N_i$ is the number of times the $i$th word of the vocabulary appears in document $j$, then we can write the log likelihood of $\theta_j$ as
l(\theta_j) = \mathbf{N} \cdot \log\left(\theta_j \phi \right)
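As a concrete sketch of this estimation step, the following maximises the log likelihood above over the simplex for a single new document. It assumes numpy and scipy are available, that phi is the $k \times v$ topic matrix from the already-fitted model, and that counts is the length-$v$ vector $\mathbf{N}$ of word counts; the function and variable names are our own illustration, not part of the original method's code.

import numpy as np
from scipy.optimize import minimize

def estimate_theta(phi, counts):
    # phi: (k x v) topic matrix from the fitted model; counts: length-v word counts N
    k = phi.shape[0]

    def neg_log_likelihood(theta):
        theta = np.clip(theta, 1e-12, None)   # keep proportions strictly positive
        theta = theta / theta.sum()           # project back onto the simplex
        word_probs = theta @ phi              # mixture probability of each vocabulary word
        # small constant guards against words with zero probability under every topic
        return -np.dot(counts, np.log(word_probs + 1e-12))

    theta0 = np.full(k, 1.0 / k)              # uniform starting value
    constraints = {"type": "eq", "fun": lambda t: t.sum() - 1.0}
    bounds = [(0.0, 1.0)] * k
    result = minimize(neg_log_likelihood, theta0, bounds=bounds, constraints=constraints)
    theta_hat = np.clip(result.x, 0.0, None)
    return theta_hat / theta_hat.sum()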
In order to deal with words that appear in a new document and not the original corpus we assign a probability of 0 to any such word of appearing in any of the k topics this is equivalent to removing those words from the document To demonstrate the effectiveness of this method for estimation we generate documents for which we know the topics and topic proportions Suppose there exists a corpus comprising of two topics with a vocabulary of 500 words Given an assumed LDA model we generate 500 documents with lengths between 5000 and 10000 words Given our newly generated documents and known topics phi we are able to test the validity of the MLE process outlined above by finding the estimates hattheta j for each document j and comparing them to known topic proportions theta j Figure 1 shows the results of the MLE method for finding topic proportion estimates for documents with certain true values of theta j From these figures there is a tight clustering around the true value theta j and thus it is reasonable to assume that the MLE process for estimating the topic proportions of a new document given previously existing topics is sound This process also holds for greater numbers of topics as evidenced in Figure 2 which estimates topic proportions for a threetopic document Like with the LDA regression model we require a method for estimating the topic proportion theta j of any new documents from which we are predicting a response that does not involve retraining the entire model To do so we rely on techniques used for HMMs specifically we use a modified BaumWelch algorithm The BaumWelch algorithm is used as an approximate method to find an HMM Omega lbrace Theta phi pi rbrace given some observed sequence in this case a document However the key difference here is that our emission probabilities or topics phi are common across all documents in our corpus and thus when introducing any new documents for prediction we assume that we already know them Given the BaumWelch algorithm calculates forward and backward probabilities based on an assumed model and updates estimates iteratively we may simply take our assumed phi found from the initial HMTM as the truth and refrain from updating the emission probabilities We are generally dealing with very small probabilities in topic modelling phi generally has tens of thousands of columns the length of the vocabulary over which probabilities must sum to one While in theory this does not change how we would approach parameter estimation computationally these probabilities are frequently recognised as zero To make the process more numerically stable we implement the adapted BaumWelch algorithm demonstrated and justified in BIBREF11 While we are ultimately interested in finding topic proportions theta j for prediction the BaumWelch algorithm finds the transition matrix Theta j for some document We are able to deal with this in the same way as finding the original HMTM regression model by taking theta j to be the equilibrium probabilities of Theta j LDA is an unsupervised process which does not take into account the response variable we are predicting when inferring topics Several supervised methods have been developed to incorporate this knowledge generally for the purpose of finding better topics for the corpus in question Notably supervised LDA sLDA BIBREF4 builds on the LDA model by assuming that some response yj is generated alongside each document j 12m in the corpus based on the topics prevalent in the document When inferring the sLDA model we are therefore 
inclined to find topics that best suit the response, and therefore the prediction problem at hand. Unlike LDA, we treat the topics $\phi$ as unknown constants rather than random variables. That is, we are interested in maximising
P\left(\theta, \mathbf{z} \mid \mathbf{D}, \mathbf{y}, \phi, \alpha, \eta, \sigma^2 \right)
where $\eta$ and $\sigma^2$ are parameters of the normally distributed response variable $y_j \sim N(\eta^T \bar{z}_j, \sigma^2)$, where $\bar{z}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} z_{ji}$. As with LDA, this probability is computationally intractable, and thus we require an approximation method for model inference. For the purposes of this paper we use a variational expectation-maximisation (EM) algorithm, as outlined in BIBREF4. When it comes to choosing the model with the most appropriate number of topics for the regression problem at hand, we use the same method as outlined for the LDA regression model in Section "Regression model and number of topics". The method behind sLDA is specifically developed for prediction. As such, we are able to compute the expected response $y_j$ from the document $\mathbf{w}_j$ and the model $\lbrace \alpha, \phi, \eta, \sigma^2 \rbrace$. For a generalised linear model, as we use in this paper, this is approximated by
E\left[ Y_j \mid \mathbf{w}_j, \alpha, \phi, \eta, \sigma^2 \right] \approx E_q\left[ \mu\left( \eta^T \bar{\mathbf{z}}_j \right) \right]
where $\mu\left(\eta^T \bar{\mathbf{z}}_j\right) = E\left[Y_j \mid \zeta = \eta^T \bar{\mathbf{z}}_j\right]$ and $\zeta$ is the natural parameter of the distribution from which the response is taken. Again, further detail on this method is found in BIBREF4. Topic modelling is designed as a method of dimension reduction, and as such we often deal with large corpora that cannot otherwise be analysed computationally. Given the complexity of human language, we therefore have to choose what information about our corpus is used to develop the topic model. The previous two models, LDA and sLDA, have relied on the bag of words assumption in order to maintain computational efficiency. While for some corpora the loss of all information relating to language and document structure may not have a particularly large effect on the predictive capability of the topic model, this may not hold for all prediction problems. One simple way of introducing structure into the model is through a hidden Markov model (HMM) structure BIBREF8, BIBREF9; in fact, there already exist multiple topic models which do so. We look here at the hidden Markov topic model (HMTM) BIBREF5, which assumes that the topic assignment of a word in a document is dependent on the topic assignment of the word before it. That is, the topic assignments function as the latent states of the HMM, with words in the document being the observations. The HMTM assumes the following generative process for documents in a corpus.
1. For each topic $l = 1, 2, \ldots, k$, generate the topic $\phi_l \sim \textrm{Dir}(\beta)$.
2. For each document $j = 1, 2, \ldots, m$:
(a) generate starting probabilities $\pi_j \sim \textrm{Dir}(\alpha)$;
(b) for each topic $l = 1, 2, \ldots, k$, generate the $l$th row of the transition matrix $\Theta_j$, $\Theta_{jl} \sim \textrm{Dir}(\gamma_l)$;
(c) choose the topic assignment for the first word $z_{j1} \sim \textrm{Multi}(\pi_j)$ and select a word from the vocabulary $w_{j1} \sim \textrm{Multi}(\phi_{z_{j1}})$;
(d) for each subsequent word $i = 2, \ldots, n_j$, choose the topic assignment $z_{ji} \sim \textrm{Multi}(\Theta_{j, z_{j(i-1)}})$ based on the transition matrix, and select a word from the vocabulary $w_{ji} \sim \textrm{Multi}(\phi_{z_{ji}})$;
(e) create the document.
Here $\alpha$, $\beta$ and $\gamma = \lbrace \gamma_1, \ldots, \gamma_k \rbrace$ are Dirichlet priors of the starting probabilities, topics and transition probabilities respectively. When it comes to prediction, we are able to use the transition matrices for each document $\Theta_j$ as predictors, but to keep consistency with the previous models we take the equilibrium distributions of the matrices as the topic proportions $\theta_j$. That is, we find $\theta_j$ such that
\theta_j \Theta_j = \theta_j \quad \textrm{and} \quad \theta_j \mathbf{e} = 1
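A small sketch of how this equilibrium distribution can be computed numerically, assuming numpy and a row-stochastic transition matrix $\Theta_j$ estimated from the HMTM (the function name is ours): we solve the stationarity condition together with the normalisation constraint as a least squares problem.

import numpy as np

def equilibrium_distribution(transition_matrix):
    # transition_matrix: the (k x k) row-stochastic matrix Theta_j for one document
    k = transition_matrix.shape[0]
    # Stack the stationarity condition theta (Theta - I) = 0 with the
    # normalisation constraint sum(theta) = 1 and solve by least squares
    A = np.vstack([transition_matrix.T - np.eye(k), np.ones((1, k))])
    b = np.append(np.zeros(k), 1.0)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

The resulting vector then plays the same role as the topic proportions used as regression predictors in the earlier models.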
This also fits with the concept of topic models as a form of dimension reduction allowing k1 variables as opposed to kk1 when using the transition matrix Theta j As models are often fit using hundreds of topics BIBREF10 BIBREF6 this makes models faster to compute We choose the number of topics k here with the same method outlined in Section Regression model and number of topics To demonstrate the use of topic models in a regression framework we apply them to a problem involving online advertisements Specifically we have a corpus containing 4151 advertisements taken from the trading website Gumtree pertaining to the sale of cats in Australia and handlabelled by an expert Of these advertisements 2187 correspond to relinquished cats and 1964 to nonrelinquished We train a model to predict relinquished status from the text of an advertisement using a topic regression model A cat is considered to be relinquished if it is being given up by its owner after a period of time as opposed to cats that are sold either by breeders or former owners In order to improve efficiency and model quality we first clean our text data Details on the cleaning steps can be found in Appendix Text cleaning Before investigating regression models that use topic proportions as predictors it is worth developing a gold standard model ie a model whose predictive capability we aim to match with our topic regression models Because the problem here involves a relatively small corpus advertisements with a median word count of 35 we are able to compare our topic regression models to a model that uses individual words as its predictors In a much larger corpus this kind of prediction would be cumbersome to compute hence our reliance on topic models and other dimension reduction techniques Because we are predicting a categorical binary variable we use logistic regression Rather than using all words in the corpus as this would drastically overfit the model we use a stepup algorithm based on the Akaike information criterion AIC BIBREF12 to choose the most significant words for the model without overfitting Instead of applying the stepup process to the entire vocabulary of exactly 13000 words we apply it to the 214 most common words ie words that appear in at least 25 of the documents in the corpus The chosen model uses 97 predictors with coefficients appearing consistent with what you would expect from the problem for example the word kitten is indicative of nonrelinquished advertisements while cat is the opposite which is expected as younger cats are less likely to be relinquished To assess the predictive capability of this and other models we require some method by which we can compare the models For that purpose we use receiver operating characteristic ROC curves as a visual representation of predictive effectiveness ROC curves compare the true positive rate TPR and false positive rate FPR of a models predictions at different threshold levels The area under the curve AUC between 0 and 1 is a numerical measure where the higher the AUC is the better the model performs We crossvalidate our model by first randomly splitting the corpus into a training set 95 of the corpus and test set 5 of the corpus We then fit the model to the training set and use it to predict the response of the documents in the test set We repeat this process 100 times The thresholdaveraged ROC curve BIBREF13 is found from these predictions and shown in Figure 3 Table 1 shows the AUC for each model considered As with the Gumtree dataset we first construct a word 
count model against which we can measure the performance of our topic regression models Once again this can be done because we are working with a small corpus otherwise we would generally consider this approach to be computationally too heavy As we have a categorical nonbinary response variable storyline with 10 levels we use a multinomial logistic regression model We again use a stepup process with AIC as the measure to determine which words in our vocabulary to use as predictors in our model As our vocabulary consists of only 1607 unique words we consider all of them in our stepup process After applying this process the model with three predictors minister night and around is chosen We are no longer able to easily apply ROC curves as a measure of performance to this problem as we are dealing with a nonbinary response We instead use a Brier score BIBREF14 a measure for comparing the predictive performance of models with categorical responses The Brier score is
\textrm{BS} = \frac{1}{m} \sum\limits_{j=1}^{m} \sum\limits_{i=1}^{s} \left( \hat{y}_{ji} - o_{ji} \right)^2
where hatyji is the probability of document j belonging to storyline i and oji 1 if document j belongs to storyline i and 0 otherwise for document j 12m and storyline i 12s Each term in the sum goes to zero the closer the model gets to perfect prediction and as such our aim is to minimise the Brier score in choosing a model For each document in the corpus we find the probabilities of each outcome by using the remaining 78 documents or training dataset as the corpus in a multinomial logistic regression model with the same three predictors as found above Due to the fact that the training dataset here is smaller than the Gumtree dataset we perform leaveoneout cross validation on each document in the corpus rather than using a 955 split We then predict the outcome based on the words found in the leftout document or test dataset and repeat for all 79 scenes However due to the short length of some scenes and the fact that unique words must be thrown out we restrict the testing to 57 of the 79 scenes the remaining scenes do not generate a numerically stable approximation for theta j for the HMTM regression model The Brier score calculated using this method for the stepup word count model is 08255 Using the method outlined in Section Regression model and number of topics we choose the LDA regression model with 26 topics as the best for this problem Inspection of the top words included in these 26 topics shows individual topics associated with different breeds eg persian manx as well as urgency of selling eg urgent asap suggesting that the model is not overfit to the data We generate a thresholdaveraged ROC curve using the same cross validation method as earlier yielding an area under the curve AUC of 08913 The curve can be seen in Figure 3 While not as high as the AUC for the word count model the LDA regression model is significantly more efficient taking only 3 of the time to calculate We can compare this result to that of an sLDA regression model The model chosen for this problem has two topics giving a thresholdaveraged ROC curve under cross validation with an AUC of 08588 It is surprising that the LDA regression model should outperform sLDA as sLDA incorporates the response variable when finding the most appropriate topics However this can be attributed to the number of topics in the model the sLDA regression model with 26 topics outperforms the LDA model with an AUC of 09030 The word count model still outperforms the sLDA model however once again the topic regression model is significantly more efficient taking only 06 of the time to calculate Further details on the models and their calculation can be found in Appendix Topic model inference For the LDA regression model for this problem we determine the best number of topics k to be 16 As with the word count model we use the Brier score to evaluate the performance of this model compared to others in the chapter We again use the leaveoneout cross validation approach to predict the probabilities of a scene belonging to each storyline The Brier score found for the LDA regression model is 16351 While this is higher and therefore worse than the Brier score for the word count model above this is not unexpected and we are more interested in seeing how the LDA model fares against other topic models We compare these results to the HMTM regression model as outlined in Section HMTM regression model We choose the model with 12 topics according to the CVPE The Brier score calculated from 57 scenes for the HMTM regression model is 15749 While still not up 
to the standard of the word count model at 08255 this appears to be a slight improvement on the LDA model meaning that dropping the bag of words assumption may in fact improve the predictive performance of the model However it should be kept in mind that the LDA model is better at handling short documents It would be worth applying these models to corpora with longer documents in future to see how they compare Further details on the computation of these models can be found in Appendix Topic model inference One of the motivating ideas behind having topic dependencies between consecutive words as in the HMTM model is that some documents will have a predisposition to stay in the same topic for a long sequence such as a sentence or a paragraph This argument particularly applies to narrativedriven corpora such as the Love Actually corpus To that end we may adapt the HMTM described above so that the model favours long sequences of the same topic by adjusting the Dirichlet priors of the transition probabilities gamma lbrace gamma 1gamma k rbrace to favour ondiagonal elements By specifying these priors to be
\gamma_{ls} = \left\lbrace \begin{array}{ll}