$$\cdots + \sum_{(t, w) \in \mathcal{D}_t} \log p_\theta\!\left(t \mid w, \ell_t\right) \qquad \text{(20)}$$

Crucially, our cross-lingual objective forces both taggers to share part of the parameter vector $\theta$, which allows it to represent morphological regularities between the two languages in a common embedding space and thus enables transfer of knowledge. This is no different from monolingual multi-task settings, e.g., jointly training a chunker and a tagger for the transfer of syntactic information BIBREF9. We point out that, in contrast to our approach, almost all multi-task transfer learning, e.g., for dependency parsing BIBREF20, has shared word-level embeddings rather than character-level embeddings. See Section \ref{sec:relatedwork} for a more complete discussion.
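For concreteness, the shared-parameter objective in Eq. (20) amounts to summing tag log-likelihoods over the source and target treebanks with a single tagger. The following is a hypothetical PyTorch-style sketch (the original implementation used Torch 7); `tagger`, `source_batches`, and `target_batches` are assumed placeholders, not names from the paper.

```python
# Hypothetical sketch of the cross-lingual multi-task objective in Eq. (20):
# source- and target-language taggers share the parameters theta, so we simply
# sum the tag log-likelihoods over both treebanks.

import torch

def crosslingual_loss(tagger, source_batches, target_batches,
                      src_lang="hi_res", tgt_lang="lo_res"):
    """Negative joint log-likelihood over D_s and D_t with shared parameters."""
    loss = torch.zeros(())
    for words, tags in source_batches:                 # (t, w) in D_s
        log_probs = tagger(words, lang=src_lang)       # log p_theta(. | w, l_s)
        loss = loss - log_probs.gather(-1, tags.unsqueeze(-1)).sum()
    for words, tags in target_batches:                 # (t, w) in D_t
        log_probs = tagger(words, lang=tgt_lang)       # log p_theta(. | w, l_t)
        loss = loss - log_probs.gather(-1, tags.unsqueeze(-1)).sum()
    return loss
```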
We consider two parameterizations of this distribution $p_\theta(t_i \mid w, \ell)$. First, we modify the initial character-level LSTM embedding such that it also encodes the identity of the language. Second, we modify the softmax layer, creating a language-specific softmax.

Our first architecture has one softmax, as in Eq. \ref{eq:tagger}, over all morphological tags in $\mathcal{T}$, shared among all the languages. To allow the architecture to encode morphological features specific to one language (e.g., the third-person present plural ending in Spanish is -an, but -ão in Portuguese), we modify the creation of the character-level embeddings. Specifically, we augment the character alphabet $\Sigma$ with a distinguished symbol that indicates the language, $\texttt{id}_\ell$. We then pre- and postpend this symbol to the character stream for every word before feeding the characters into the bidirectional LSTM. Thus, we arrive at the new language-specific word embeddings:

$$v^\ell_i = \left[\text{LSTM}\!\left(\langle \texttt{id}_\ell, c_{i1}, \ldots, c_{iM_i}, \texttt{id}_\ell \rangle\right); \; \text{LSTM}\!\left(\langle \texttt{id}_\ell, c_{iM_i}, \ldots, c_{i1}, \texttt{id}_\ell \rangle\right)\right] \qquad \text{(22)}$$

This model creates a language-specific embedding vector $v_i$, but the individual embeddings for a given character are shared among the languages jointly trained on. The remainder of the architecture is held constant.
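The language-ID trick in Eq. (22) amounts to treating $\texttt{id}_\ell$ as one extra character that is pre- and postpended to every word. A minimal sketch is given below, assuming a PyTorch-style module; the class name `CharBiLSTMEmbedder` and the exact interface are illustrative, not taken from the paper's code.

```python
# Hypothetical sketch of the language-specific character embedder (architecture 1).

import torch
import torch.nn as nn

class CharBiLSTMEmbedder(nn.Module):
    def __init__(self, n_chars, n_langs, char_dim=128, hidden=256):
        super().__init__()
        # one embedding table over Sigma plus the language-id symbols
        self.emb = nn.Embedding(n_chars + n_langs, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        self.n_chars = n_chars

    def forward(self, char_ids, lang_id):
        # char_ids: (1, M) character indices of one word; lang_id: int language index
        lang_sym = torch.tensor([[self.n_chars + lang_id]])
        seq = torch.cat([lang_sym, char_ids, lang_sym], dim=1)   # <id_l> c_1 ... c_M <id_l>
        out, _ = self.lstm(self.emb(seq))
        half = out.size(-1) // 2
        # concatenate final forward state and final backward state -> v_i^l
        return torch.cat([out[:, -1, :half], out[:, 0, half:]], dim=-1)
```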
Next, inspired by the architecture of heigold2013multilingual, we consider a language-specific softmax layer, i.e., we define a new output layer for every language:

$$p_\theta\!\left(t_i \mid w, \ell\right) = \text{softmax}\!\left(W^\ell e_i + b^\ell\right) \qquad \text{(24)}$$

where $W^\ell \in \mathbb{R}^{|\mathcal{T}| \times n}$ and $b^\ell \in \mathbb{R}^{|\mathcal{T}|}$ are now language-specific. In this architecture the embeddings $e_i$ are the same for all languages; the model has to learn language-specific behavior exclusively through the output softmax of the tagging LSTM.

The third model we exhibit is a joint architecture for tagging and language identification. We consider the following loss function:

$$\mathcal{L}_{\textit{joint}}(\theta) = \sum_{(t,w)\in\mathcal{D}_s} \log p_\theta\!\left(\ell_s, t \mid w\right) + \sum_{(t,w)\in\mathcal{D}_t} \log p_\theta\!\left(\ell_t, t \mid w\right) \qquad \text{(26)}$$

where we factor the joint distribution as

$$p_\theta\!\left(\ell, t \mid w\right) = p_\theta\!\left(\ell \mid w\right) \cdot p_\theta\!\left(t \mid w, \ell\right) \qquad \text{(27)}$$

Just as before, we define $p_\theta\!\left(t \mid w, \ell\right)$ as in Eq. (24), and we define

$$p_\theta\!\left(\ell \mid w\right) = \text{softmax}\!\left(U \tanh(V e_i)\right) \qquad \text{(28)}$$

which is a multi-layer perceptron with a binary softmax over the two languages as an output layer; we have added the additional parameters $V \in \mathbb{R}^{2 \times n}$ and $U \in \mathbb{R}^{2 \times 2}$. In the case of multi-source transfer, this is a softmax over the set of languages.

The first two architectures represent two possibilities for a multi-task objective where we condition on the language of the sentence: the first integrates this knowledge at a lower level and the second at a higher level. The third architecture takes a different tack: rather than conditioning on the language, it predicts it. The joint model offers one interesting advantage over the two architectures proposed above; namely, it allows us to perform a morphological analysis on a sentence whose language is unknown. This effectively alleviates an early step in the NLP pipeline, where language identification is performed, and is useful in conditions where the language to be tagged may not be known a priori, e.g., when tagging social media data. While there are certainly more complex architectures one could engineer for the task, we believe we have found a relatively diverse sampling, enabling an interesting experimental comparison. Indeed, it is an important empirical question which architectures are most appropriate for transfer learning. Since transfer learning affords the opportunity to reduce the sample complexity of the data-hungry neural networks that currently dominate NLP research, finding a good solution for cross-lingual transfer in state-of-the-art neural models will likely be a boon for low-resource NLP in general.
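A compact sketch of the two output-side variants described above, the language-specific softmax of Eq. (24) and the language-identification head of Eqs. (26)-(28), is given below. It is an illustration under stated assumptions (PyTorch-style modules, and a crude mean-pooled sentence representation for $p_\theta(\ell \mid w)$), not the authors' implementation.

```python
# Hypothetical sketch of architectures 2 and 3: per-language tag softmax (Eq. 24)
# and a small language-id head (Eq. 28) used by the joint objective (Eqs. 26-27).

import torch
import torch.nn as nn

class LanguageSpecificHeads(nn.Module):
    def __init__(self, n, num_tags, num_langs=2):
        super().__init__()
        # W^l in R^{|T| x n}, b^l in R^{|T|}, one pair per language (Eq. 24)
        self.tag_heads = nn.ModuleList([nn.Linear(n, num_tags) for _ in range(num_langs)])
        # V in R^{2 x n}, U in R^{2 x 2} for p(l | w) (Eq. 28)
        self.V = nn.Linear(n, num_langs, bias=False)
        self.U = nn.Linear(num_langs, num_langs, bias=False)

    def tag_log_probs(self, e, lang):
        # log p_theta(t_i | w, l) with a language-specific output layer
        return torch.log_softmax(self.tag_heads[lang](e), dim=-1)

    def lang_log_probs(self, sent_rep):
        # log p_theta(l | w): MLP with a softmax over the languages
        return torch.log_softmax(self.U(torch.tanh(self.V(sent_rep))), dim=-1)

    def joint_log_prob(self, e, lang, tag_ids):
        # log p(l, t | w) = log p(l | w) + sum_i log p(t_i | w, l)  (Eq. 27)
        sent_rep = e.mean(dim=-2)          # crude sentence pooling (an assumption)
        lang_lp = self.lang_log_probs(sent_rep)[..., lang]
        tag_lp = self.tag_log_probs(e, lang).gather(-1, tag_ids.unsqueeze(-1)).squeeze(-1)
        return lang_lp + tag_lp.sum(-1)
```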
Empirically, we ask three questions of our architectures: (i) How well can we transfer morphological tagging models from high-resource languages to low-resource languages in each architecture? Does one of the three outperform the others? (ii) How much annotated data in the low-resource language do we need? (iii) How closely related do the languages need to be to get good transfer?

We experiment with the language families Romance (Indo-European), Northern Germanic (Indo-European), Slavic (Indo-European), and Uralic. In the Romance subgrouping of the wider Indo-European family, we experiment on Catalan (ca), French (fr), Italian (it), Portuguese (pt), Romanian (ro), and Spanish (es). In the Northern Germanic family, we experiment on Danish (da), Norwegian (no), and Swedish (sv). In the Slavic family, we experiment on Bulgarian (bg), Czech (cs), Polish (pl), Russian (ru), Slovak (sk), and Ukrainian (uk). Finally, in the Uralic family, we experiment on Estonian (et), Finnish (fi), and Hungarian (hu).

We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks, i.e., the concatenation of the 4th and 6th columns of the file format BIBREF13. We list the sizes of the training, development, and test splits of the UD treebanks we used in Table \ref{tab:langsize}. Also, we list the number of unique morphological tags in each language in Table \ref{tab:numtags}, which serves as an approximate measure of the morphological complexity each language exhibits. Crucially, the data are annotated in a cross-linguistically consistent manner, such that words in the different languages that have the same syntacto-semantic function have the same bundle of tags (see Section \ref{sec:morphotagging} for a discussion). Potentially, further gains would be possible by using a more universal scheme, e.g., the UniMorph scheme.

We consider two baselines in our work. First, we consider the MarMoT tagger BIBREF17, which is currently the best-performing non-neural model. The source code for MarMoT is freely available online, which allows us to perform fully controlled experiments with this model. Second, we consider the alignment-based projection approach of buysbotha2016P161. We discuss each of the two baselines in turn.

The MarMoT tagger is the leading non-neural approach to morphological tagging. This baseline is important since non-neural, feature-based approaches have been found empirically to be more efficient, in the sense that their learning curves tend to be steeper. Thus, in the low-resource setting, we would be remiss not to consider a feature-based approach. Note that this is not a transfer approach, but rather only uses the low-resource data.

The projection approach of buysbotha2016P161 provides an alternative method for transfer learning. The idea is to construct pseudo-annotations for bitext, given alignments BIBREF21. Then, one trains a standard tagger using the projected annotations. The specific tagger employed is the wsabie model of DBLPconfijcaiWestonBU11, which, like our approach, is a 0th-order discriminative neural model. In contrast to ours, however, their network is shallow. We compare the two methods in more detail in Section \ref{sec:relatedwork}.

Additionally, we perform a thorough study of the neural transfer learner, considering all three architectures. A primary goal of our experiments is to determine which of our three proposed neural transfer techniques is superior. Even though our experiments focus on morphological tagging, these architectures are more general in that they may be easily applied to other tasks, e.g., parsing or machine translation. We additionally explore the viability of multi-source transfer, i.e., the case where we have multiple source languages. All of our architectures generalize to the multi-source case without any complications. We train our models under the following conditions.

We evaluate using average per-token accuracy, as is standard for both POS tagging and morphological tagging, and per-feature F1, as employed in buysbotha2016P161. The per-feature F1 calculates an F1 score $F_1^k$ for each key $k$ in the target language's tags, by asking whether the key-attribute pair $(k_i, v_i)$ is in the predicted tag. Then the key-specific $F_1^k$ values are averaged equally. Note that F1 is a more flexible metric, as it gives partial credit for getting some of the attributes in the bundle correct, whereas accuracy does not.
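To make the two evaluation metrics concrete, the sketch below computes average per-token accuracy and the equally averaged per-feature F1, assuming each tag is represented as a dictionary of key-attribute pairs (e.g., {"Number": "Plur"}); this representation and the function names are assumptions, not the official evaluation script.

```python
# Illustrative sketch of the metrics described above: per-token accuracy and
# per-feature F1 averaged equally over keys.

from collections import defaultdict

def token_accuracy(gold_tags, pred_tags):
    correct = sum(g == p for g, p in zip(gold_tags, pred_tags))
    return correct / len(gold_tags)

def per_feature_f1(gold_tags, pred_tags):
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_tags, pred_tags):
        for key, value in pred.items():
            if gold.get(key) == value:
                tp[key] += 1            # key-attribute pair correctly predicted
            else:
                fp[key] += 1
        for key, value in gold.items():
            if pred.get(key) != value:
                fn[key] += 1
    scores = []
    for key in set(tp) | set(fp) | set(fn):
        prec = tp[key] / (tp[key] + fp[key]) if tp[key] + fp[key] else 0.0
        rec = tp[key] / (tp[key] + fn[key]) if tp[key] + fn[key] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores) if scores else 0.0   # equal average over keys
```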
Our networks are four layers deep: two LSTM layers for the character embedder (i.e., to compute $v_i$) and two LSTM layers for the tagger (i.e., to compute $e_i$). We use an embedding size of 128 for the character input vector and hidden layers of 256 nodes in all other cases. All networks are trained with the stochastic gradient method RMSProp BIBREF22, with a fixed initial learning rate and a learning rate decay that is adjusted for the other languages according to the amount of training data. The batch size is always 16. Furthermore, we use dropout BIBREF23, with the dropout probability set to 0.2. We used Torch 7 BIBREF24 to configure the computation graphs implementing the network architectures.

We report our results in two tables. First, we report a detailed cross-lingual evaluation in Table \ref{tab:results}. Second, we report a comparison against the two baselines in Table \ref{tab:baseline1} (accuracy) and Table \ref{tab:baseline2} (F1). We see two general trends in the data. First, we find that genetically closer languages yield better source languages. Second, we find that the multi-softmax architecture is the best in terms of transfer ability, as evinced by the results in Table \ref{tab:results}. We find a wider gap between our model and the baselines under accuracy than under F1; we attribute this to the fact that F1 is a softer metric, in that it assigns credit to partially correct guesses.

We divide the discussion of related work topically into three parts for ease of intellectual digestion. Most cross-lingual work in NLP, focusing on morphology or otherwise, has concentrated on indirect supervision rather than transfer learning. The goal in such a regime is to provide noisy labels for training the tagger in the low-resource language through annotations projected over aligned bitext with a high-resource language. This method of projection was first introduced by DBLPconfnaaclYarowskyN01 for the projection of POS annotation. While follow-up work (BIBREF26, BIBREF27, BIBREF28) has continually demonstrated the efficacy of projecting simple part-of-speech annotations, buysbotha2016P161 were the first to show the use of bitext-based projection for the training of a morphological tagger for low-resource languages. As we also discuss the training of a morphological tagger, our work is most closely related to buysbotha2016P161 in terms of the task itself. We contrast the approaches: the main difference is that our approach is not projection-based and thus does not require the construction of a bilingual lexicon for projection based on bitext. Rather, our method jointly learns multiple taggers and forces them to share features, a true transfer learning scenario. In contrast to projection-based methods, our procedure always requires a minimal amount of annotated data in the low-resource target language; in practice, however, this distinction is non-critical, as projection-based methods without a small amount of seed target-language data perform poorly BIBREF29. Our work also follows a recent trend in NLP whereby traditional word-level neural representations are being replaced by character-level representations for a myriad of tasks, e.g., POS tagging DBLPconficmlSantosZ14, parsing BIBREF30, language modeling BIBREF31, and sentiment analysis BIBREF32, as well as the tagger of heigold2017, whose work we build upon. Our work is also related to recent work on character-level morphological generation using neural architectures (BIBREF33, BIBREF34).
In terms of methodology, however, our proposal bears similarity to recent work in speech and machine translation; we discuss each in turn. In speech recognition, heigold2013multilingual train a cross-lingual neural acoustic model on five Romance languages; the architecture bears similarity to our multi-language softmax approach. Dependency parsing benefits from cross-lingual learning in a similar fashion (BIBREF35, BIBREF20). In neural machine translation (BIBREF36, BIBREF37), recent work (BIBREF38, BIBREF39, BIBREF40) has explored the possibility of jointly training translation models for a wide variety of languages. Our work addresses a different task, but the undergirding philosophical motivation is similar, i.e., to attack low-resource NLP through multi-task transfer learning. kanncotterellschutze2017ACL2017 offer a similar method for cross-lingual transfer in morphological inflection generation.

We have presented three character-level recurrent neural network architectures for multi-task cross-lingual transfer of morphological taggers. We provided an empirical evaluation of the technique on 18 languages from four different language families, showing the widespread applicability of the method. We found that the transfer of morphological taggers is an eminently viable endeavor among related languages and, in general, the closer the languages, the easier the transfer of morphology becomes. Our technique outperforms two strong baselines proposed in previous work. Moreover, we define standard low-resource training splits in UD for future research in low-resource morphological tagging. Future work should focus on extending the neural morphological tagger to a joint lemmatizer BIBREF41 and evaluating its functionality in the low-resource setting.

RC acknowledges the support of an NDSEG fellowship. We would like to thank Jan Buys and Jan Botha, who helped us compare to the numbers reported in their paper. We would also like to thank Hinrich Schütze for reading an early draft, and Tim Vieira and Jason Naradowsky for helpful initial discussions.
<s>[INST] Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping [/INST] Relation extraction RE is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons organizations geopolitical entities locations and events It provides useful information for many NLP applications such as knowledge base construction text mining and question answering For example the entity Washington DC and the entity United States have a CapitalOf relationship and extraction of such relationships can help answer questions like What is the capital city of the United States Traditional RE models eg BIBREF0 BIBREF1 BIBREF2 require careful feature engineering to derive and combine various lexical syntactic and semantic features Recently neural network RE models eg BIBREF3 BIBREF4 BIBREF5 BIBREF6 have become very successful These models employ a certain level of automatic feature learning by using word embeddings which significantly simplifies the feature engineering task while considerably improving the accuracy achieving the stateoftheart performance for relation extraction All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy However annotating RE data by human is expensive and timeconsuming and can be quite difficult for a new language Moreover most RE models require languagespecific resources such as dependency parsers and partofspeech POS taggers which also makes it very challenging to transfer an RE model of a resourcerich language to a resourcepoor language There are a few existing weakly supervised crosslingual RE approaches that require no human annotation in the target languages eg BIBREF7 BIBREF8 BIBREF9 BIBREF10 However the existing approaches require aligned parallel corpora or machine translation systems which may not be readily available in practice In this paper we make the following contributions to crosslingual RE We propose a new approach for direct crosslingual RE model transfer based on bilingual word embedding mapping It projects word embeddings from a target language to a source language eg English so that a welltrained sourcelanguage RE model can be directly applied to the target language with no manually annotated RE data needed for the target language We design a deep neural network architecture for the sourcelanguage English RE model that uses word embeddings and generic languageindependent features as the input The English RE model achieves thestateoftheart performance without using languagespecific resources We conduct extensive experiments which show that the proposed approach achieves very good performance up to 79 of the accuracy of the supervised targetlanguage RE model for a number of target languages on both inhouse and the ACE05 datasets BIBREF11 using a small bilingual dictionary with only 1K word pairs To the best of our knowledge this is the first work that includes empirical studies for crosslingual RE on several languages across a variety of language families without using aligned parallel corpora or machine translation systems We organize the paper as follows In Section 2 we provide an overview of our approach In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages In Section 4 we present a neural network architecture for the sourcelanguage English In Section 5 we evaluate the performance of the proposed approach for a 
number of target languages We discuss related work in Section 6 and conclude the paper in Section 7 We summarize the main steps of our neural crosslingual RE model transfer approach as follows Build word embeddings for the source language and the target language separately using monolingual data Learn a linear mapping that projects the targetlanguage word embeddings into the sourcelanguage embedding space using a small bilingual dictionary Build a neural network sourcelanguage RE model that uses word embeddings and generic languageindependent features as the input For a targetlanguage sentence and any two entities in it project the word embeddings of the words in the sentence to the sourcelanguage word embeddings using the linear mapping and then apply the sourcelanguage RE model on the projected word embeddings to classify the relationship between the two entities An example is shown in Figure FIGREF4 where the target language is Portuguese and the source language is English We will describe each component of our approach in the subsequent sections In recent years vector representations of words known as word embeddings become ubiquitous for many NLP applications BIBREF12 BIBREF13 BIBREF14 A monolingual word embedding model maps words in the vocabulary mathcal V of a language to realvalued vectors in mathbb Rdtimes 1 The dimension of the vector space d is normally much smaller than the size of the vocabulary Vmathcal V for efficient representation It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data Crosslingual word embedding models try to build word embeddings across multiple languages BIBREF15 BIBREF16 One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17 BIBREF18 Another approach builds multilingual word embeddings in a shared vector space simultaneously by generating mixed language corpora using aligned sentences BIBREF19 BIBREF20 In this paper we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs and does not require parallel corpora of aligned sentences which could be more difficult to obtain To build monolingual word embeddings for the source and target languages we use a variant of the Continuous BagofWords CBOW word2vec model BIBREF13 The standard CBOW model has two matrices the input word matrix tildemathbf X in mathbb Rdtimes V and the output word matrix mathbf X in mathbb Rdtimes V For the ith word wi in mathcal V let mathbf ewi in mathbb RV times 1 be a onehot vector with 1 at index i and 0s at other indexes so that tildemathbf xi tildemathbf Xmathbf ewi the ith column of tildemathbf X is the input vector representation of word wi and mathbf xi mathbf Xmathbf ewi the ith column of mathbf X is the output vector representation ie word embedding of word wi Given a sequence of training words w1 w2 wN the CBOW model seeks to predict a target word wt using a window of 2c context words surrounding wt by maximizing the following objective function The conditional probability is calculated using a softmax function where mathbf xtmathbf Xmathbf ewt is the output vector representation of word wt and is the sum of the input vector representations of the context words In our variant of the CBOW model we use a separate input word matrix tildemathbf Xj for a context word at position j c le j le c jne 0 In addition we employ weights that decay with the distances 
of the context words to the target word Under these modifications we have We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21 BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements and suggested to learn a linear mapping between the vector spaces Let mathcal D be a bilingual dictionary with aligned word pairs wi vii1D between a source language s and a target language t where wi is a sourcelanguage word and vi is the translation of wi in the target language Let mathbf xi in mathbb Rd times 1 be the word embedding of the sourcelanguage word wi mathbf yi in mathbb Rd times 1 be the word embedding of the targetlanguage word vi We find a linear mapping matrix mathbf Mtrightarrow s such that mathbf Mtrightarrow smathbf yi approximates mathbf xi by solving the following least squares problem using the dictionary as the training set Using mathbf Mtrightarrow s for any targetlanguage word v with word embedding mathbf y we can project it into the sourcelanguage embedding space as mathbf Mtrightarrow smathbf y To ensure that all the training instances in the dictionary mathcal D contribute equally to the optimization objective in DISPLAYFORM14 and to preserve vector norms after projection we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22 BIBREF23 BIBREF24 First we normalize the sourcelanguage and targetlanguage word embeddings to be unit vectors mathbf xprime fracmathbf xmathbf x for each sourcelanguage word embedding mathbf x and mathbf yprime fracmathbf ymathbf y for each targetlanguage word embedding mathbf y Next we add an orthogonality constraint to DISPLAYFORM14 such that mathbf M is an orthogonal matrix ie mathbf Mmathrm Tmathbf M mathbf I where mathbf I denotes the identity matrix mathbf MO trightarrow s can be computed using singularvalue decomposition SVD The mapping learned in DISPLAYFORM14 or DISPLAYFORM16 requires a seed dictionary To relax this requirement BIBREF25 proposed a selflearning procedure that can be combined with a dictionarybased mapping technique Starting with a small seed dictionary the procedure iteratively 1 learns a mapping using the current dictionary and 2 computes a new dictionary using the learned mapping BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages and then applies a robust selflearning procedure to iteratively improve the mapping Another unsupervised method based on adversarial training was proposed in BIBREF27 We compare the performance of different mappings for crosslingual RE model transfer in Section SECREF45 For any two entities in a sentence an RE model determines whether these two entities have a relationship and if yes classifies the relationship into one of the predefined relation types We focus on neural network RE models since these models achieve the stateoftheart performance for relation extraction Most importantly neural network RE models use word embeddings as the input which are amenable to crosslingual model transfer via crosslingual word embeddings In this paper we use English as the source language Our neural network architecture has four layers The first layer is the embedding layer which maps input words 
in a sentence to word embeddings The second layer is a context layer which transforms the word embeddings to contextaware vector representations using a recurrent or convolutional neural network layer The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling The final layer is the output layer which returns the classification label for the relation type For an English sentence with n words mathbf sw1w2wn the embedding layer maps each word wt to a realvalued vector word embedding mathbf xtin mathbb Rd times 1 using the English word embedding model Section SECREF9 In addition for each entity m in the sentence the embedding layer maps its entity type to a realvalued vector entity label embedding mathbf lm in mathbb Rdm times 1 initialized randomly In our experiments we use d300 and dm 50 Given the word embeddings mathbf xts of the words in the sentence the context layer tries to build a sentencecontextaware vector representation for each word We consider two types of neural network layers that aim to achieve this The first type of context layer is based on Long ShortTerm Memory LSTM type recurrent neural networks BIBREF28 BIBREF29 Recurrent neural networks RNNs are a class of neural networks that operate on sequential data such as sequences of words LSTM networks are a type of RNNs that have been invented to better capture longrange dependencies in sequential data We pass the word embeddings mathbf xts to a forward and a backward LSTM layer A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks The memory block at the tth word in the forward LSTM layer contains a memory cell overrightarrowmathbf ct and three gates an input gate overrightarrowmathbf it a forget gate overrightarrowmathbf ft and an output gate overrightarrowmathbf ot overrightarrowcdot indicates the forward direction which are updated as follows where sigma is the elementwise sigmoid function and odot is the elementwise multiplication The hidden state vector overrightarrowmathbf ht in the forward LSTM layer incorporates information from the left past tokens of wt in the sentence Similarly we can compute the hidden state vector overleftarrowmathbf ht in the backward LSTM layer which incorporates information from the right future tokens of wt in the sentence The concatenation of the two vectors mathbf ht overrightarrowmathbf ht overleftarrowmathbf ht is a good representation of the word wt with both left and right contextual information in the sentence The second type of context layer is based on Convolutional Neural Networks CNNs BIBREF3 BIBREF4 which applies convolutionlike operation on successive windows of size k around each word in the sentence Let mathbf zt mathbf xtk12mathbf xtk12 be the concatenation of k word embeddings around wt The convolutional layer computes a hidden state vector for each word wt where mathbf W is a weight matrix and mathbf b is a bias vector and tanh cdot is the elementwise hyperbolic tangent function After the context layer the sentence w1w2wn is represented by mathbf h1mathbf hn Suppose m1wb1we1 and m2wb2we2 are two entities in the sentence where m1 is on the left of m2 ie e1 b2 As different sentences and entities may have various lengths the summarization layer tries to build a fixedlength vector that best summarizes the representations of the sentence and the two entities for relation type classification We divide the hidden state vectors mathbf hts into 5 groups G1lbrace mathbf h1mathbf 
hb11rbrace includes vectors that are left to the first entity m1 G2lbrace mathbf hb1mathbf he1rbrace includes vectors that are in the first entity m1 G3lbrace mathbf he11mathbf hb21rbrace includes vectors that are between the two entities G4lbrace mathbf hb2mathbf he2rbrace includes vectors that are in the second entity m2 G5lbrace mathbf he21mathbf hnrbrace includes vectors that are right to the second entity m2 We perform elementwise max pooling among the vectors in each group where dh is the dimension of the hidden state vectors Concatenating the mathbf hGis we get a fixedlength vector mathbf hsmathbf hG1mathbf hG5 The output layer receives inputs from the previous layers the summarization vector mathbf hs the entity label embeddings mathbf lm1 and mathbf lm2 for the two entities under consideration and returns a probability distribution over the relation type labels Given the word embeddings of a sequence of words in a target language t mathbf y1mathbf yn we project them into the English embedding space by applying the linear mapping mathbf Mtrightarrow s learned in Section SECREF13 mathbf Mtrightarrow smathbf y1 mathbf Mtrightarrow smathbf y2mathbf Mtrightarrow smathbf yn The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings which are language independent to perform relationship classification Note that our models do not use languagespecific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language Also our models do not use precise word position features since word positions in sentences can vary a lot across languages In this section we evaluate the performance of the proposed crosslingual RE approach on both inhouse dataset and the ACE Automatic Content Extraction 2005 multilingual dataset BIBREF11 Our inhouse dataset includes manually annotated RE data for 6 languages English German Spanish Italian Japanese and Portuguese It defines 56 entity types eg Person Organization GeoPolitical Entity Location Facility Time EventViolence etc and 53 relation types between the entities eg AgentOf LocatedAt PartOf TimeOf AffectedBy etc The ACE05 dataset includes manually annotated RE data for 3 languages English Arabic and Chinese It defines 7 entity types Person Organization GeoPolitical Entity Location Facility Weapon Vehicle and 6 relation types between the entities AgentArtifact GeneralAffiliation ORGAffiliation PartWhole PersonalSocial Physical For both datasets we create a class label O to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest We build 3 neural network English RE models under the architecture described in Section SECREF4 The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer We call it PassThrough for short The second neural network RE model has a BiLSTM context layer We call it BiLSTM for short The third neural network model has a CNN context layer with a window size 3 We call it CNN for short First we compare our neural network English RE models with the stateoftheart RE models on the ACE05 English data The ACE05 English data can be divided to 6 different domains broadcast conversation bc broadcast news bn telephone conversation cts newswire nw usenet un and webblogs wl We apply the same data split in BIBREF31 BIBREF30 BIBREF6 which uses news the union of bn and nw as the training set a 
half of bc as the development set and the remaining data as the test set We learn the model parameters using Adam BIBREF32 We apply dropout BIBREF33 to the hidden layers to reduce overfitting The development set is used for tuning the model hyperparameters and for early stopping In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6 Our BiLSTM model outperforms the best model single or ensemble in BIBREF30 and the best single model in BIBREF6 without using any languagespecific resources such as dependency parsers While the data split in the previous works was motivated by domain adaptation the focus of this paper is on crosslingual model transfer and hence we apply a random data split as follows For the source language English and each target language we randomly select 80 of the data as the training set 10 as the development set and keep the remaining 10 as the test set The sizes of the sets are summarized in Table TABREF41 We report the Precision Recall and F1 score of the 3 neural network English RE models in Table TABREF42 Note that adding an additional context layer with either BiLSTM or CNN significantly improves the performance of our English RE model compared with the simple PassThrough model Therefore we will focus on the BiLSTM model and the CNN model in the subsequent experiments We apply the English RE models to the 7 target languages across a variety of language families The bilingual dictionary includes the most frequent targetlanguage words and their translations in English To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for crosslingual RE we first evaluate the performance F1 score of our crosslingual RE approach on the targetlanguage development sets with an increasing dictionary size as plotted in Figure FIGREF35 We found that for most target languages once the dictionary size reaches 1K further increasing the dictionary size may not improve the transfer performance Therefore we select the dictionary size to be 1K We compare the performance of crosslingual RE model transfer under the following bilingual word embedding mappings Regular1K the regular mapping learned in DISPLAYFORM14 using 1K word pairs Orthogonal1K the orthogonal mapping with length normalization learned in DISPLAYFORM16 using 1K word pairs in this case we train the English RE models with the normalized English word embeddings SemiSupervised1K the mapping learned with 1K word pairs and improved by the selflearning method in BIBREF25 Unsupervised the mapping learned by the unsupervised method in BIBREF26 The results are summarized in Table TABREF46 The regular mapping outperforms the orthogonal mapping consistently across the target languages While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22 BIBREF23 BIBREF24 our crosslingual RE approach directly maps targetlanguage word embeddings to the English embedding space without conducting word translations Moreover the orthogonal mapping requires length normalization but we observed that length normalization adversely affects the performance of the English RE models about 20 F1 points drop We apply the vecmap toolkit to obtain the semisupervised and unsupervised mappings The unsupervised mapping has the lowest average accuracy over the target languages but it does not require a seed dictionary Among all the mappings the regular mapping achieves the best average accuracy over the target languages using a dictionary with 
only 1K word pairs and hence we adopt it for the crosslingual RE task The crosslingual RE model transfer results for the inhouse test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53 using the regular mapping learned with a bilingual dictionary of size 1K In the tables we also provide the performance of the supervised RE model BiLSTM for each target language which is trained with a few hundred thousand tokens of manually annotated RE data in the targetlanguage and may serve as an upper bound for the crosslingual model transfer performance Among the 2 neural network models the BiLSTM model achieves a better crosslingual RE performance than the CNN model for 6 out of the 7 target languages In terms of absolute performance the BiLSTM model achieves over 400 F1 scores for German Spanish Portuguese and Chinese In terms of relative performance it reaches over 75 of the accuracy of the supervised targetlanguage RE model for German Spanish Italian and Portuguese While Japanese and Arabic appear to be more difficult to transfer it still achieves 55 and 52 of the accuracy of the supervised Japanese and Arabic RE model respectively without using any manually annotated RE data in JapaneseArabic We apply model ensemble to further improve the accuracy of the BiLSTM model We train 5 BiLSTM English RE models initiated with different random seeds apply the 5 models on the target languages and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models This Ensemble approach improves the single model by 0619 F1 points except for Arabic Since our approach projects the targetlanguage word embeddings to the sourcelanguage embedding space preserving the word order it is expected to work better for a target language that has more similar word order as the source language This has been verified by our experiments The source language English belongs to the SVO Subject Verb Object language family where in a sentence the subject comes first the verb second and the object third Spanish Italian Portuguese German in conventional typology and Chinese also belong to the SVO language family and our approach achieves over 70 relative accuracy for these languages On the other hand Japanese belongs to the SOV Subject Object Verb language family and Arabic belongs to the VSO Verb Subject Object language family and our approach achieves lower relative accuracy for these two languages There are a few weakly supervised crosslingual RE approaches BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora BIBREF9 translates a targetlanguage sentence into English performs RE in English and then projects the relation phrases back to the targetlanguage sentence BIBREF10 proposes an adversarial feature adaptation approach for crosslingual relation classification which uses a machine translation system to translate sourcelanguage sentences into targetlanguage sentences Unlike the existing approaches our approach does not require aligned parallel corpora or machine translation systems There are also several multilingual RE approaches eg BIBREF34 BIBREF35 BIBREF36 where the focus is to improve monolingual RE by jointly modeling texts in multiple languages Many crosslingual word embedding models have been developed recently BIBREF15 BIBREF16 An important application of crosslingual word embeddings is to enable crosslingual model transfer In this paper we 
apply the bilingual word embedding mapping technique of BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38, and named entity recognition (BIBREF21, BIBREF39).

In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families, on both in-house and open datasets, show that the proposed approach achieves very good performance (up to 79% of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.

We thank Mo Yu for sharing their ACE05 English data split, and the anonymous reviewers for their valuable comments.
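As a concrete illustration of the bilingual embedding mapping referred to above (Section SECREF13), the sketch below shows both the regular least-squares mapping and the orthogonal (SVD-based) variant with length normalization that the experiments compare. It assumes NumPy arrays `X` and `Y` whose rows are the dictionary-aligned source- and target-language embeddings; it is not the authors' code (the semi-supervised and unsupervised mappings were obtained with the vecmap toolkit).

```python
# Minimal sketch of the two dictionary-based mappings compared in this paper,
# assuming X (source) and Y (target) have shape (D, d) with aligned rows.

import numpy as np

def regular_mapping(X, Y):
    # min_M sum_i || M y_i - x_i ||^2  -> ordinary least squares
    B, *_ = np.linalg.lstsq(Y, X, rcond=None)   # Y @ B ~ X
    return B.T                                  # M such that M @ y projects into source space

def orthogonal_mapping(X, Y):
    # length-normalize, then solve the orthogonal Procrustes problem with an SVD
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(Xn.T @ Yn)
    return U @ Vt                               # orthogonal M with M^T M = I

# usage: project a target-language embedding y into the source space as M @ y
```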
<s>[INST] Visual Natural Language Query Auto-Completion for Estimating Instance Probabilities [/INST] This work focuses on the problem of finding objects in an image based on natural language descriptions Existing solutions take into account both the image and the query BIBREF0 BIBREF1 BIBREF2 In our problem formulation rather than having the entire text we are given only a prefix of the text which requires completing the text based on a language model and the image and finding a relevant object in the image We decompose the problem into three components i completing the query from text prefix and an image ii estimating probabilities of objects based on the completed text and iii segmenting and classifying all instances in the image We combine extend and modify state of the art components i we extend a FactorCell LSTM BIBREF3 BIBREF4 which conditionally completes text to complete a query from both a text prefix and an image ii we fine tune a BERT embedding to compute instance probabilities from a complete sentence and iii we use MaskRCNN BIBREF5 for instance segmentation Recent natural language embeddings BIBREF6 have been trained with the objectives of predicting masked words and determining whether sentences follow each other and are efficiently used across a dozen of natural language processing tasks Sequence models have been conditioned to complete text from a prefix and index BIBREF3 however have not been extended to take into account an image Deep neural networks have been trained to segment all instances in an image at very high quality BIBREF5 BIBREF7 We propose a novel method of natural language query autocompletion for estimating instance probabilities conditioned on the image and a user query prefix Our system combines and modifies state of the art components used in query completion language embedding and masked instance segmentation Estimating a broad set of instance probabilities enables selection which is agnostic to the segmentation procedure Figure FIGREF2 shows the architecture of our approach First we extract image features with a pretrained CNN We incorporate the image features into a modified FactorCell LSTM language model along with the user query prefix to complete the query The completed query is then fed into a finetuned BERT embedding to estimate instance probabilities which in turn are used for instance selection We denote a set of objects ok in O where O is the entire set of recognizable object classes The user inputs a prefix p an incomplete query on an image I Given p we autocomplete the intended query q We define the autocompletion query problem in equation DISPLAYFORM3 as the maximization of the probability of a query conditioned on an image where wi in A is the word in position i We pose our instance probability estimation problem given an autocompleted query mathbf q as a multilabel problem where each class can independently exist Let Oq be the set of instances referred to in mathbf q Given hatpk is our estimate of Pok in Oq and yk mathbb 1ok in Oq the instance selection model minimizes the sigmoid crossentropy loss function We utilize the FactorCell FC adaptation of an LSTM with coupled input and forget gates BIBREF4 to autocomplete queries The FactorCell is an LSTM with a contextdependent weight matrix mathbf Wprime mathbf W mathbf A in place of mathbf W Given a character embedding wt in mathbb Re a previous hidden state ht1 in mathbb Rh the adaptation matrix mathbf A is formed by taking the product of the context c with two basis tensors mathbf ZL in 
mathbb Rmtimes ehtimes r and mathbf ZR in mathbb Rrtimes h times m To adapt the FactorCell BIBREF4 for our purposes we replace user embeddings with a lowdimensional image representation Thus we are able to modify each query completion to be personalized to a specific image representation We extract features from an input image using a CNN pretrained on ImageNet retraining only the last two fully connected layers The image feature vector is fed into the FactorCell through the adaptation matrix We perform beam search over the sequence of predicted characters to chose the optimal completion for the given prefix We fine tune a pretrained BERT embedding to perform transfer learning for our instance selection task We use a 12layer implementation which has been shown to generalize and perform well when finetuned for new tasks such as question answering text classification and named entity recognition To apply the model to our task we add an additional dense layer to the BERT architecture with 10 dropout mapping the last pooled layer to the object classes in our data We use the Visual Genome VG BIBREF8 and ReferIt BIBREF9 datasets which are suitable for our purposes The VG data contains images region descriptions relationships questionanswers attributes and object instances The region descriptions provide a replacement for queries since they mention various objects in different regions of each image However while some region descriptions are referring phrases some are more similar to descriptions see examples in Table TABREF10 The large number of examples makes the Visual Genome dataset particularly useful for our task The smaller ReferIt dataset consists of referring expressions attached to images which more closely resemble potential user queries of images We train separate models using both datasets For training we aggregated query image pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset Our VG training set consists of 85 of the data 16k images and 740k corresponding region descriptions The Referit training data consists of 9k images and 54k referring expressions The query completion models are trained using a 128 dimensional image representation a rank r64 personalized matrix 24 dimensional character embeddings 512 dimensional LSTM hidden units and a max length of 50 characters per query with Adam at a 5e4 learning rate and a batch size of 32 for 80K iterations The instance selection model is trained using region description object set pairs from the VG dataset resulting in a training set of approximately 173M samples The remaining 300K samples are split into validation and testing Our training procedure for the instance selection model fine tunes all 12 layers of BERT with 32 sample batch sizes for 250K iterations using Adam and performing learning rate warmup for the first 10 of iterations with a target 5e5 learning rate The entire training processes takes around a day on an NVIDIA Tesla P100 GPU Figure 3 shows example results We evaluate query completion by language perplexity and mean reciprocal rank MRR and evaluate instance selection by F1score We compare the perplexity on both sets of test queries using corresponding images vs random noise as context Table TABREF11 shows perplexity on the VG and ReferIt test queries with both corresponding images and random noise The VG and ReferIt datasets have character vocabulary sizes of 89 and 77 respectively Given the matching index t of the true query in the top 10 completions we compute the 
MRR as $\sum_n \frac{1}{t}$, where we replace the reciprocal rank with 0 if the true query does not appear in the top ten completions. We evaluate the VG and ReferIt test queries with varying prefix sizes and compare performance with the corresponding image versus random noise as context. MRR is influenced by the length of the query, as longer queries are more difficult to match. Therefore, as expected, we observe better performance on the ReferIt dataset for all prefix lengths. Finally, our instance selection achieves an F1-score of 0.7618 over all 2,909 instance classes.

Our results demonstrate that auto-completion based on both language and vision performs better than using only language, and that fine-tuning a BERT embedding allows us to efficiently rank instances in the image. In future work, we would like to extract referring expressions using simple grammatical rules to differentiate between referring and non-referring region descriptions. We would also like to combine the VG and ReferIt datasets to train a single model, and to scale up our datasets to improve query completions.
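To make the MRR evaluation described above concrete, here is a small sketch that assigns each query the reciprocal rank of the true query among its top-ten completions (0 if absent) and averages over queries; the explicit averaging and the function names are assumptions rather than the paper's evaluation code.

```python
# Small illustrative sketch of the MRR metric discussed above.

def reciprocal_rank(true_query, top10_completions):
    try:
        t = top10_completions.index(true_query) + 1   # 1-based rank
        return 1.0 / t
    except ValueError:
        return 0.0                                    # true query not in the top ten

def mean_reciprocal_rank(true_queries, completion_lists):
    rr = [reciprocal_rank(q, comps) for q, comps in zip(true_queries, completion_lists)]
    return sum(rr) / len(rr)

# usage:
# mean_reciprocal_rank(["man on the left"], [["man on the left", "man", "mat"]])  -> 1.0
```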
<s>[INST] Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation [/INST] Enabling robots to follow navigation instructions in natural language can facilitate humanrobot interaction across a variety of applications For instance within the service robotics domain robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 For example there are a variety of ways to describe the route in Fig FIGREF4 a Each fragment of a sentence within these instructions can be mapped to one or more than one navigation behaviors For instance assume that a robot counts with a number of primitive navigation behaviors such as enter the room on the left or on right follow the corridor cross the intersection etc Then the fragment advance forward in a navigation instruction could be interpreted as a follow the corridor behavior or as a sequence of follow the corridor interspersed with cross the intersection behaviors depending on the topology of the environment Resolving such ambiguities often requires reasoning about commonsense concepts as well as interpreting spatial information and landmarks eg in sentences such as the room on the left right before the end of the corridor and the room which is in the middle of two vases In this work we pose the problem of interpreting navigation instructions as finding a mapping or grounding of the commands into an executable navigation plan While the plan is typically modeled as a formal specification of lowlevel motions BIBREF2 or a grammar BIBREF3 BIBREF4 we focus specifically on translating instructions to a highlevel navigation plan based on a topological representation of the environment This representation is a behavioral navigation graph as recently proposed by BIBREF5 designed to take advantage of the semantic structure typical of human environments The nodes of the graph correspond to semantically meaningful locations for the navigation task such as kitchens or entrances to rooms in corridors The edges are parameterized visuomotor behaviors that allow a robot to navigate between neighboring nodes as illustrated in Fig FIGREF4 b Under this framework complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route given a known starting location The edges behaviors along this path serve to reach the sometimes implicit destination requested by the user As in BIBREF6 our focus is on the problem of interpreting navigation directions We assume that a robot can realize valid navigation plans according to the graph We contribute a new endtoend model for following directions in natural language under the behavioral navigation framework Inspired by the information retrieval and question answering literature BIBREF7 BIBREF8 BIBREF9 BIBREF10 we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands More specifically the proposed model takes as input user directions in text form the behavioral graph of the environment encoded as INLINEFORM0 node edge node INLINEFORM1 triplets and the initial location of the robot in the graph The model then predicts a set of 
behaviors to reach the desired destination according to the instructions and the map Fig FIGREF4 c Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans This work also contributes a new dataset of INLINEFORM0 pairs of freeform natural language instructions and highlevel navigation plans This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and to the best of our knowledge it is the first of its kind for behavioral navigation The dataset opens up opportunities to explore datadriven methods for grounding navigation commands into highlevel motion plans We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions We investigate both generalization to new instructions in known and in new environments We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings This section reviews relevant prior work on following navigation instructions Readers interested in an indepth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands constraining language descriptions or using statistical machine translation methods While manually parsing commands is often impractical the first type of approaches are foundational they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 BIBREF13 Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands For example BIBREF14 explored using structured symbolic language phrases for navigation As in this earlier work we are also interested in navigation with a topological map of the environment However we do not process symbolic phrases Our aim is to translate freeform natural language instructions to a navigation plan using information from a highlevel representation of the environment This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions such as at the end of the corridor turn right BIBREF15 Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions These methods aim to automatically discover translation rules from a corpus of data and often leverage the fact that navigation directions are composed of sequential commands For instance BIBREF17 BIBREF4 BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar Likewise BIBREF18 BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem Our approach to machine translation builds on insights from these prior efforts In particular we focus on endtoend learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 BIBREF21 BIBREF22 Similar to BIBREF21 and BIBREF22 we use a sequencetosequence model to enable a mobile agent to follow routes But 
instead leveraging visual information to output lowlevel navigation commands we focus on using a topological map of the environment to output a highlevel navigation plan This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 BIBREF6 We explore machine translation from the perspective of automatic question answering Following BIBREF8 BIBREF9 our approach uses attention mechanisms to learn alignments between different input modalities In our case the inputs to our model are navigation instructions a topological environment map and the start location of the robot Fig FIGREF4 c Our results show that the map can serve as an effective source of contextual information for the translation task Additionally it is possible to leverage this kind of information in an endtoend fashion Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task This prior knowledge is a topological map in the form of a behavioral navigation graph Fig FIGREF4 b The nodes of the graph correspond to semanticallymeaningful locations for the navigation task and its directed edges are visuomotor behaviors that a robot can use to move between nodes This formulation takes advantage of the rich semantic structure behind manmade environments resulting in a compact route representation for robot navigation Fig FIGREF4 c provides a schematic view of the problem setting The inputs are 1 a navigation graph INLINEFORM0 2 the starting node INLINEFORM1 of the robot in INLINEFORM2 and 3 a set of freeform navigation instructions INLINEFORM3 in natural language The instructions describe a path in the graph to reach from INLINEFORM4 to a potentially implicit destination node INLINEFORM5 Using this information the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 From a supervised learning perspective the goal is then to estimate DISPLAYFORM0 based on a dataset of inputtarget pairs INLINEFORM0 where INLINEFORM1 and INLINEFORM2 respectively The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 We assume no prior linguistic knowledge Thus translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 In general each behaviors includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes We consider 7 types of semantic locations 11 types of behaviors and 20 different types of landmarks A location in the navigation graph can be a room a lab an office a kitchen a hall a corridor or a bathroom These places are labeled with unique tags such as room1 or lab2 except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes Table TABREF7 lists the navigation behaviors that we consider 
in this work These behaviors can be described in reference to visual landmarks or objects such as paintings book shelfs tables etc As in Fig FIGREF4 maps might contain multiple landmarks of the same type Please see the supplementary material Appendix A for more details We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an endtoend fashion Our proposed model builds on the sequencetosequence translation model of BIBREF23 which computes a softalignment between a source sequence natural language instructions in our case and the corresponding target sequence navigation behaviors As one of our main contributions we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place Specifically at each step the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity facilitating the grounding of navigation commands Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions The model consists of six layers Embed layer The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixedlength representations The instructions INLINEFORM2 are embedded into a 100dimensional pretrained GloVe vector BIBREF24 Each of the triplet components INLINEFORM3 INLINEFORM4 and INLINEFORM5 of the graph INLINEFORM6 are onehot encoded into vectors of dimensionality INLINEFORM7 where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 respectively Encoder layer The model then uses two bidirectional Gated Recurrent Units GRUs BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 and incorporate contextual cues from the surrounding embeddings in each sequence The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph where INLINEFORM4 is the hidden size of each GRU INLINEFORM5 is the number of words in the instruction INLINEFORM6 and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 Attention layer Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism We use oneway attention because the graph contains information about the whole environment while the instruction has potentially incomplete local information about the route of interest The use of attention provides our model with a twostep strategy to interpret commands This resembles the way people find paths on a map first relevant parts on the map are selected according to their affinity to each of the words in the input instruction attention layer second the selected parts are connected to assemble a valid path decoder layer More formally let INLINEFORM2 INLINEFORM3 be the INLINEFORM4 th row of INLINEFORM5 and INLINEFORM6 INLINEFORM7 the INLINEFORM8 th row of INLINEFORM9 We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 DISPLAYFORM0 where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 th triplet 
in INLINEFORM6 and the INLINEFORM7 th word in the corresponding input INLINEFORM8 The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 according to their relevance to the corresponding triplet INLINEFORM2 This results in L attention vectors INLINEFORM3 INLINEFORM4 The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 INLINEFORM3 Following BIBREF8 we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information FC layer The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fullyconnected FC layer The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 Decoder layer After the FC layer the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network Without loss of generality consider the INLINEFORM0 th recurrent cell in the GRU network This cell takes two inputs a hidden state vector INLINEFORM1 from the prior cell and a onehot embedding of the previous behavior INLINEFORM2 that was predicted by the model Based on these inputs the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 DISPLAYFORM0 where INLINEFORM0 INLINEFORM1 and INLINEFORM2 are trainable parameters The attention vector INLINEFORM3 in Eq quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 where INLINEFORM7 The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 th GRU cell uses to compute logits for the next behavior DISPLAYFORM0 with INLINEFORM0 trainable parameters Note that INLINEFORM1 includes a value for each of the predefined behaviors in the graph INLINEFORM2 as well as for a special stop symbol to identify the end of the output sequence Output layer The final layer of the model searches for a valid sequence of robot behaviors based on the robots initial node the connectivity of the graph INLINEFORM0 and the output logits from the previous decoder layer Again without loss of generality consider the INLINEFORM1 th behavior INLINEFORM2 that is finally predicted by the model The search for this behavior is implemented as DISPLAYFORM0 with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments each with 6 to 65 rooms To the best of our knowledge this is the first benchmark for comparing translation models in the context of behavioral robot navigation As shown in Table TABREF16 the dataset consists of 
8066 pairs of freeform natural language instructions and navigation plans for training This training data was collected from 88 unique simulated environments totaling 6064 distinct navigation plans 2002 plans have two different navigation instructions each the rest have one The dataset contains two test set variants Although the dataset was collected with simulated environments no structure was imposed on the navigation instructions while crowdsourcing data Thus many instructions in our dataset are ambiguous Moreover the order of the behaviors in the instructions is not always the same For instance a person said turn right and advance to describe part of a route while another person said go straight after turning right in a similar situation The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors nontrivial See Appendix A of the supplementary material for additional details on our data collection effort This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language We provide both quantitative and qualitative results While computing evaluation metrics we only consider the behaviors present in the route because they are sufficient to recover the highlevel navigation plan from the graph Our metrics treat each behavior as a single token For example the sample plan R1 oor C1 cf C1 lt C0 cf C0 iol O3 is considered to have 5 tokens each corresponding to one of its behaviors oor cf lt cf iol In this plan R1 C1 C0 and O3 are symbols for locations nodes in the graph We compare the performance of translation approaches based on four metrics Exact Match EM as in BIBREF20 EM is 1 if a predicted plan matches exactly the ground truth otherwise it is 0 F1 the harmonic average of the precision and recall over the test set BIBREF26 Edit Distance ED the minimum number of insertions deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 Goal Match GM GM is 1 if a predicted plan reaches the ground truth destination even if the full sequence of behaviors does not match exactly the ground truth Otherwise GM is 0 We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deeplearning models The baseline approach is based on BIBREF20 It divides the task of interpreting commands for behavioral navigation into two steps path generation and path verification For path generation this baseline uses a standard sequencetosequence model augmented with an attention mechanism similar to BIBREF23 BIBREF6 For path verification the baseline uses depthfirst search to find a route in the graph that matches the sequence of predicted behaviors If no route matches perfectly the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path To test the impact of using the behavioral graphs as an extra input to our translation model we implemented a version of our approach that only takes natural language instructions as input In this ablation model the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer This model does not have the attention and FC layers described in Sec SECREF4 nor uses the masking function in the output layer This model is the same as the previous Ablation model but with the masking function in the output layer We
preprocessed the inputs to the various models that are considered in our experiment In particular we lowercased tokenized spellchecked and lemmatized the input instructions in textform using WordNet BIBREF28 We also truncated the graphs to a maximum of 300 triplets and the navigational instructions to a maximum of 150 words Only 64 54 of the unique graphs in the training validation set had more than 300 triplets and less than 015 of the natural language instructions in these sets had more than 150 tokens The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments In general we used 125 of the training set as validation for choosing models hyperparameters In particular we used dropout after the encoder and the fullyconnected layers of the proposed model to reduce overfitting Best performance was achieved with a dropout rate of 05 and batch size equal to 256 We also used scheduled sampling BIBREF29 at training time for all models except the baseline We input the triplets from the graph to our proposed model in alphabetical order and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence We hypothesized that such rearrangement would help identify the starting location node of the robot in the graph In turn this could facilitate the prediction of correct output sequences In the remaining of the paper we refer to models that were provided a rearranged graph beginning with the starting location of the robot as models with Ordered Triplets Table TABREF28 shows the performance of the models considered in our evaluation on both test sets The next two sections discuss the results in detail First we can observe that the final model Ours with Mask and Ordered Triplets outperforms the Baseline and Ablation models on all metrics in previously seen environments The difference in performance is particularly evident for the Exact Match and Goal Match metrics with our model increasing accuracy by 35 and 25 in comparison to the Baseline and Ablation models respectively These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an endtoend fashion is beneficial We can also observe from Table TABREF28 that the masking function of Eq EQREF12 tends to increase performance in the TestRepeated Set by constraining the output sequence to a valid set of navigation behaviors For the Ablation model using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy For the proposed model with or without reordering the graph triplets the increase in accuracy is around INLINEFORM1 Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan irrespective of its specific position in the output sequence The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance The rearrangement of the graph triplets also helps to reduce ED and increase F1 Lastly it is worth noting that our proposed model last row of Table TABREF28 outperforms all other models in previously seen environments In particular we obtain over INLINEFORM0 increase 
in EM and GM between our model and the next best two models The previous section evaluated model performance on new instructions and corresponding navigation plans for environments that were previously seen at training time Here we examine whether the trained models succeed on environments that are completely new The evaluation on the TestNew Set helps understand the generalization capabilities of the models under consideration This experiment is more challenging than the one in the previous section as can be seen in performance drops in Table TABREF28 for the new environments Nonetheless the insights from the previous section still hold masking in the output layer and reordering the graph triplets tend to increase performance Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions our model still outperforms the baselines by a clear margin in new environments For instance the difference between our model and the second best model in the TestNew set is about INLINEFORM0 EM and GM Note that the average number of actions in the ground truth output sequences is 707 for the TestNew set Our models predictions are just INLINEFORM1 edits off on average from the correct navigation plans This section discusses qualitative results to better understand how the proposed model uses the navigation graph We analyze the evolution of the attention weights INLINEFORM0 in Eq to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions Fig FIGREF33 b shows an example of the resulting attention map for the case of a correct prediction In the Figure the attention map is depicted as a scaled and normalized 2D array of color codes Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 Consequently each row in the array represents a triplet in the corresponding behavioral graph This graph consists of 72 triplets for Fig FIGREF33 b We observe a locality effect associated to the attention coefficients corresponding to high values bright areas in each column of Fig FIGREF33 b This suggests that the decoder is paying attention to graph triplets associated to particular neighborhoods of the environment in each prediction step We include additional attention visualizations in the supplementary Appendix including cases where the dynamics of the attention distribution are harder to interpret All the routes in our dataset are the shortest paths from a start location to a given destination Thus we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing suboptimal paths One such example is shown in Fig FIGREF37 where the blue route shortest path and the red route alternative path are described by leftmargin labelsep02em itemsep0em Go out the office and make a left Turn right at the corner and go down the hall Make a right at the next corner and enter the kitchen in front of table Exit the room 0 and turn right go to the end of the corridor and turn left go straight to the end of the corridor and turn left again After passing bookshelf on your left and table on your right Enter the kitchen on your right For both routes the proposed model was able to predict the correct sequence of navigation behaviors This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral 
graph Other examples of the prediction of suboptimal paths are described in the Appendix This work introduced behavioral navigation through freeform natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics This problem has a range of interesting crossdomain applications including information retrieval We proposed an endtoend system to translate user instructions to a highlevel navigation plan Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands As part of this effort we contributed a new dataset of 11051 pairs of user instructions and navigation plans from 100 different environments Our model achieved the best performance on this dataset in comparison to a twostep baseline approach for interpreting navigation instructions and a sequencetosequence model that does not consider the behavioral graph Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of freeform navigation instructions Overall our approach demonstrated a practical form of learning for a complex and useful task In future work we are interested in investigating mechanisms to improve generalization to new environments For example pointer and graph networks BIBREF30 BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors The Toyota Research Institute TRI provided funds to assist with this research but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity This work is also partially funded by Fondecyt grant 1181739 Conicyt Chile The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project </s>
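The following is a minimal sketch of the graph-constrained output layer described in the navigation model above: decoder logits over behaviors are combined with a mask built from the behavioral graph so that only outgoing edges of the current node (plus a stop symbol) can be predicted. The node names and behavior symbols are borrowed from the sample plan in the evaluation section; the behavior inventory, the greedy argmax decoding, and the use of minus infinity for invalid entries are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical behavioral graph as (from_node, behavior, to_node) triplets,
# mirroring the knowledge-base view used by the model.
TRIPLETS = [
    ("R1", "oor", "C1"),   # out of room
    ("C1", "cf", "C1"),    # continue forward along the corridor
    ("C1", "lt", "C0"),    # left turn
    ("C0", "cf", "C0"),
    ("C0", "iol", "O3"),   # into office on the left
]
BEHAVIORS = ["oor", "cf", "lt", "rt", "iol", "ior", "<stop>"]  # assumed inventory

def mask(graph, current_node):
    """0 for behaviors that are valid outgoing edges of current_node (and for
    the stop symbol), -inf for everything else."""
    m = np.full(len(BEHAVIORS), -np.inf)
    m[BEHAVIORS.index("<stop>")] = 0.0
    for src, behavior, _ in graph:
        if src == current_node:
            m[BEHAVIORS.index(behavior)] = 0.0
    return m

def constrained_decode(logit_steps, graph, start_node):
    """Greedy decoding of a behavior sequence, constrained by graph connectivity."""
    nxt = {(s, b): t for s, b, t in graph}  # (node, behavior) -> next node
    node, plan = start_node, []
    for logits in logit_steps:              # logits would come from the decoder GRU
        b = BEHAVIORS[int(np.argmax(logits + mask(graph, node)))]
        if b == "<stop>":
            break
        plan.append(b)
        node = nxt[(node, b)]
    return plan

# Toy usage with random stand-in logits instead of a trained decoder:
rng = np.random.default_rng(0)
print(constrained_decode(rng.normal(size=(6, len(BEHAVIORS))), TRIPLETS, "R1"))
```

Even with untrained, random logits the predicted plan is guaranteed to be executable on the graph, which is the point of the masking step.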
<s>[INST] Analysis of Risk Factor Domains in Psychosis Patient Health Records [/INST] Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 BIBREF1 and affect approximately 254 of the population BIBREF2 BIBREF3 making them one of the leading causes of disability worldwide BIBREF4 A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 Readmissions are disruptive for patients and families and are a key driver of rising healthcare costs BIBREF6 BIBREF7 Reducing readmission risk is therefore a major unmet need of psychiatric care Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures In psychiatry traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 BIBREF9 This approach although benefitting from clinical expertise does not scale well for large data sets is effortintensive and lacks automation An efficient more robust and cheaper NLPbased alternative approach has been developed and met with some success in other medical fields BIBREF10 However this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs First the vocabulary used is highly varied and contextsensitive A patient may report feeling really great and excited symptoms of mania without any explicit mention of keywords that differ from everyday vocabulary Also many technical terms in clinical narratives are multiword expressions MWEs such as obsessive body image linear thinking short attention span or panic attack These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression Second the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described Hallucinations for example could be described as the patient reports auditory hallucinations or the patient has been hearing voices for several months amongst many other possibilities Third phenomena can be directly mentioned without necessarily being relevant to the patient specifically Psychosis patient discharge summaries for instance can include future treatment plans eg Prevent relapse of a manic or major depressive episode Prevent recurrence of psychosis containing vocabulary that at the wordlevel seem strongly correlated with readmission risk Yet at the paragraphlevel these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present Lastly given the complexity of phenotypic assessment in psychiatric illnesses patients with psychosis exhibit considerable differences in terms of illness and symptom presentation The constellation of symptoms leads to various diagnoses and comorbidities that can change over time including schizophrenia schizoaffective disorder bipolar disorder with psychosis and substance use induced psychosis Thus the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time Taken together these factors make topic extraction a difficult task that cannot be 
accomplished by keyword search or other simple textmining techniques To identify specific risk factors to focus on we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 BIBREF12 but also considered research related to functional remission BIBREF13 forensic risk factors BIBREF14 and consulted clinicians involved with this project Seven risk factor domains Appearance Mood Interpersonal Occupation Thought Content Thought Process and Substance were chosen because they are clinically relevant consistent with literature replicable across data sets explainable and implementable in NLP algorithms In our present study we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs We perform this study in support of our longterm goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm eg suicide risk homicidal risk Unlike other contemporary approaches in machine learning we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance To incorporate clinical expertise in the identification of risk factor domains we undertake an annotation project detailed in section 31 We identify a test set of over 1600 EHR paragraphs which a team of three domainexpert clinicians annotate paragraphbyparagraph for relevant risk factor domains Section 32 describes the results of this annotation task We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency Inverse Document Frequency TFIDF vectorized EHR data described in section 4 To further improve the performance of our model we incorporate domainrelevant MWEs identified using all inhouse data McCoy et al mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria RDoC BIBREF15 and used this corpus to create a vector space document similarity model for topic extraction They found that the negative valence and social RDoC domains were associated with readmission Using web data in this case data retrieved from the Bing API to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content Based on reconstruction of the procedure we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data Additionally RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms Important indicators within EHRs of patient health like appearance or occupation are not included in the RDoC constructs Rumshisky et al rumshisky2016predicting used a corpus of EHRs from patients with a primary diagnosis of major depressive disorder to create a 75topic LDA topic model that they then used in a readmission prediction classifier pipeline Like with McCoy et al mccoy2015clinical the data used to train the LDA model was not ideal as the generalizability of the data was narrow focusing on only one disorder Their model achieved readmission prediction performance with an area under the curve of 784 compared to a baseline of 618 To perform clinical validation of the topics derived from the LDA model they manually evaluated and 
annotated the topics identifying the most informative vocabulary for the top ten topics With their training data they found the strongest coherence occurred in topics involving substance use suicidality and anxiety disorders But given the unsupervised nature of the LDA clustering algorithm the topic coherence they observed is not guaranteed across data sets 2The vast majority of patients in our target cohort are dependents on a parental private health insurance plan Our target data set consists of a corpus of discharge summaries admission notes individual encounter notes and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital OnTrackTM is an outpatient program focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis The length of time in the program varies depending on patient improvement and insurance coverage with an average of two to three years The program focuses primarily on early intervention via individual therapy group therapy medication evaluation and medication management See Table TABREF2 for a demographic breakdown of the 220 patients for which we have so far extracted approximately 240000 total EHR paragraphs spanning from 2011 to 2014 using Meditech the software employed by McLean for storing and organizing EHR data These patients are part of a larger research cohort of approximately 1800 psychosis patients which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic cognitive neurobiological and functional outcome data from this cohort We also use an additional data set for training our vector space model comprised of EHR texts queried from the Research Patient Data Registry RPDR a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network These records are highly comparable in style and vocabulary to our target data set The corpus consists of discharge summaries encounter notes and visit notes from approximately 30000 patients admitted to the systems hospitals with psychiatric diagnoses and symptoms This breadth of data captures a wide range of clinical narratives creating a comprehensive foundation for topic extraction After using the RPDR query tool to extract EHR paragraphs from the RPDR database we created a training corpus by categorizing the extracted paragraphs according to their risk factor domain using a lexicon of 120 keywords that were identified by the clinicians involved in this project Certain domains particularly those involving thoughts and other abstract concepts are often identifiable by MWEs rather than single words The same clinicians who identified the keywords manually examined the bigrams and trigrams with the highest TFIDF scores for each domain in the categorized paragraphs identifying those which are conceptually related to the given domain We then used this lexicon of 775 keyphrases to identify more relevant training paragraphs in RPDR and treat them as nonstemmed unigrams when generating the matrix By converting MWEs such as shortened attention span unusual motor activity wideranging affect or linear thinking to nonstemmed unigrams the TFIDF score and therefore the predictive value of these terms is magnified In total we constructed a corpus of roughly 100000 paragraphs consisting of 7000000 tokens for training our model In order to evaluate our models we annotated 1654 paragraphs selected from the 240000 paragraphs extracted from Meditech with the clinically relevant 
domains described in Table TABREF3 The annotation task was completed by three licensed clinicians All paragraphs were removed from the surrounding EHR context to ensure annotators were not influenced by the additional contextual information Our domain classification models consider each paragraph independently and thus we designed the annotation task to mirror the information available to the models The annotators were instructed to label each paragraph with one or more of the seven risk factor domains In instances where more than one domain was applicable annotators assigned the domains in order of prevalence within the paragraph An eighth label Other was included if a paragraph was ambiguous uninterpretable or about a domain not included in the seven risk factor domains eg nonpsychiatric medical concerns and lab results The annotations were then reviewed by a team of two clinicians who adjudicated collaboratively to create a gold standard The gold standard and the clinicianidentified keywords and MWEs have received IRB approval for release to the community They are available as supplementary data to this paper Interannotator agreement IAA was assessed using a combination of Fleisss Kappa a variant of Scotts Pi that measures pairwise agreement for annotation tasks involving more than two annotators BIBREF16 and Cohens MultiKappa as proposed by Davies and Fleiss davies1982measuring Table TABREF6 shows IAA calculations for both overall agreement and agreement on the first most important domain only Following adjudication accuracy scores were calculated for each annotator by evaluating their annotations against the gold standard Overall agreement was generally good and aligned almost exactly with the IAA on the first domain only Out of the 1654 annotated paragraphs 671 41 had total agreement across all three annotators We defined total agreement for the task as a settheoretic complete intersection of domains for a paragraph identified by all annotators 98 of paragraphs in total agreement involved one domain Only 35 paragraphs had total disagreement which we defined as a settheoretic null intersection between the three annotators An analysis of the 35 paragraphs with total disagreement showed that nearly 30 included the term bluntedrestricted In clinical terminology these terms can be used to refer to appearance affect mood or emotion Because the paragraphs being annotated were extracted from larger clinical narratives and examined independently of any surrounding context it was difficult for the annotators to determine the most appropriate domain This lack of contextual information resulted in each annotator using a different default label Appearance Mood and Other During adjudication Other was decided as the most appropriate label unless the paragraph contained additional content that encompassed other domains as it avoids making unnecessary assumptions 3Suicidal ideation 4Homicidal ideation 5Ethyl alcohol and ethanol A Fleisss Kappa of 0575 lies on the boundary between Moderate and Substantial agreement as proposed by Landis and Koch landis1977measurement This is a promising indication that our risk factor domains are adequately defined by our present guidelines and can be employed by clinicians involved in similar work at other institutions The fourth column in Table TABREF6 Mean Accuracy was calculated by averaging the three annotator accuracies as evaluated against the gold standard This provides us with an informative baseline of human parity on the domain classification task 
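A minimal sketch of the feature-construction step described above: clinician-identified multiword expressions are collapsed into single non-stemmed unigrams before TF-IDF vectorization, so that their scores are not diluted by stemming or n-gram splitting. The tiny lexicon, regex tokenizer, and toy corpus are illustrative assumptions; scikit-learn's TfidfVectorizer and TruncatedSVD stand in for the vectorization and dimensionality-reduction stages, with 2 SVD components used here only because the toy corpus is small.

```python
import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# A few of the clinician-identified keyphrases mentioned in the text stand in
# for the full lexicon of 775 keyphrases.
MWE_LEXICON = ["shortened attention span", "unusual motor activity",
               "wide-ranging affect", "linear thinking"]

stemmer = PorterStemmer()

def collapse_mwes(text):
    """Rewrite each known MWE as one underscore-joined token so it survives
    tokenization as a single non-stemmed unigram."""
    text = text.lower()
    for phrase in MWE_LEXICON:
        joined = phrase.lower().replace(" ", "_").replace("-", "_")
        text = text.replace(phrase.lower(), joined)
    return text

def tokenize_and_stem(text):
    tokens = re.findall(r"[a-z0-9_]+", text)
    # keep collapsed MWEs intact, Porter-stem ordinary tokens
    return [t if "_" in t else stemmer.stem(t) for t in tokens]

paragraphs = [
    "Patient presents with a shortened attention span and linear thinking.",
    "No unusual motor activity was observed during the session.",
    "Patient reports improved mood and attends group therapy regularly.",
]

vectorizer = TfidfVectorizer(preprocessor=collapse_mwes,
                             tokenizer=tokenize_and_stem,
                             token_pattern=None,
                             ngram_range=(1, 3))
tfidf = vectorizer.fit_transform(paragraphs)

svd = TruncatedSVD(n_components=2)   # the paper reduces to 100 dimensions
reduced = svd.fit_transform(tfidf)
print(tfidf.shape, reduced.shape)
```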
6Rectified Linear Units INLINEFORM0 BIBREF17 7Adaptive Moment Estimation BIBREF18 Figure FIGREF8 illustrates the data pipeline for generating our training and testing corpora and applying them to our classification models We use the TfidfVectorizer tool included in the scikitlearn machine learning toolkit BIBREF19 to generate our TFIDF vector space models stemming tokens with the Porter Stemmer tool provided by the NLTK library BIBREF20 and calculating TFIDF scores for unigrams bigrams and trigrams Applying Singular Value Decomposition SVD to the TFIDF matrix we reduce the vector space to 100 dimensions which Zhang et al zhang2011comparative found to improve classifier performance Starting with the approach taken by McCoy et al mccoy2015clinical who used aggregate cosine similarity scores to compute domain similarity directly from their TFIDF vector space model we extend this method by training a suite of threelayer multilayer perceptron MLP and radial basis function RBF neural networks using a variety of parameters to compare performance We employ the Keras deep learning library BIBREF21 using a TensorFlow backend BIBREF22 for this task The architectures of our highest performing MLP and RBF models are summarized in Table TABREF7 Prototype vectors for the nodes in the hidden layer of our RBF model are selected via kmeans clustering BIBREF23 on each domain paragraph megadocument individually The RBF transfer function for each hidden layer node is assigned the same width which is based off the maximum Euclidean distance between the centroids that were computed using kmeans To prevent overfitting to the training data we utilize a dropout rate BIBREF24 of 02 on the input layer of all models and 05 on the MLP hidden layer Since our classification problem is multiclass multilabel and openworld we employ seven nodes with sigmoid activations in the output layer one for each risk factor domain This allows us to identify paragraphs that fall into more than one of the seven domains as well as determine paragraphs that should be classified as Other Unlike the traditionally used softmax activation function which is ideal for singlelabel closedworld classification tasks sigmoid nodes output class likelihoods for each node independently without the normalization across all classes that occurs in softmax We find that the risk factor domains vary in the degree of homogeneity of language used and as such certain domains produce higher similarity scores on average than others To account for this we calculate threshold similarity scores for each domain using the formula minavgsim INLINEFORM0 INLINEFORM1 sim where INLINEFORM2 is standard deviation and INLINEFORM3 is a constant which we set to 078 for our MLP model and 12 for our RBF model through trialanderror Employing a generalized formula as opposed to manually identifying threshold similarity scores for each domain has the advantage of flexibility in regards to the target data which may vary in average similarity scores depending on its similarity to the training data If a paragraph does not meet threshold on any domain it is classified as Other Table TABREF9 shows the performance of our models on classifying the paragraphs in our gold standard To assess relative performance of feature representations we also include performance metrics of our models without MWEs Because this is a multilabel classification task we use macroaveraging to compute precision recall and F1 scores for each paragraph in the testing set In identifying domains individually our 
models achieved the highest perdomain scores on Substance F1 INLINEFORM0 08 and the lowest scores on Interpersonal and Mood F1 INLINEFORM1 05 We observe a consistency in perdomain performance rankings between our MLP and RBF models The wide variance in perdomain performance is due to a number of factors Most notably the training examples we extracted from RPDR while very comparable to our target OnTrackTM data may not have an adequate variety of content and range of vocabulary Although using keyword and MWE matching to create our training corpus has the advantage of being significantly less labor intensive than manually labeling every paragraph in the corpus it is likely that the homogeneity of language used in the training paragraphs is higher than it would be otherwise Additionally all of the paragraphs in the training data are assigned exactly one risk factor domain even if they actually involve multiple risk factor domains making the clustering behavior of the paragraphs more difficult to define Figure FIGREF10 illustrates the distribution of paragraphs in vector space using 2component Linear Discriminant Analysis LDA BIBREF26 Despite prior research indicating that similar classification tasks to ours are more effectively performed by RBF networks BIBREF27 BIBREF28 BIBREF29 we find that a MLP network performs marginally better with significantly less preprocessing ie kmeans and width calculations involved We can see in Figure FIGREF10 that Thought Process Appearance Substance and to a certain extent Occupation clearly occupy specific regions whereas Interpersonal Mood and Thought Content occupy the same noisy region where multiple domains overlap Given that similarity is computed using Euclidean distance in an RBF network it is difficult to accurately classify paragraphs that fall in regions occupied by multiple risk factor domain clusters since prototype centroids from the risk factor domains will overlap and be less differentiable This is confirmed by the results in Table TABREF9 where the differences in performance between the RBF and MLP models are more pronounced in the three overlapping domains 0496 vs 0448 for Interpersonal 0530 vs 0496 for Mood and 0721 vs 0678 for Thought Content compared to the nonoverlapping domains 0564 vs 0566 for Appearance 0592 vs 0598 for Occupation 0797 vs 0792 for Substance and 0635 vs 0624 for Thought Process We also observe a similarity in the words and phrases with the highest TFIDF scores across the overlapping domains many of the Thought Content words and phrases with the highest TFIDF scores involve interpersonal relations eg fear surrounding daughter father family history familial conflict and there is a high degree of similarity between highscoring words for Mood eg meets anxiety criteria cope with mania ocd8 and Thought Content eg mania feels anxious feels exhausted 8Obsessivecompulsive disorder MWEs play a large role in correctly identifying risk factor domains Factoring them into our models increased classification performance by 15 a marked improvement over our baseline model This aligns with our expectations that MWEs comprised of a quotidian vocabulary hold much more clinical significance than when the words in the expressions are treated independently Threshold similarity scores also play a large role in determining the precision and recall of our models higher thresholds lead to a smaller number of false positives and a greater number of false negatives for each risk factor domain Conversely more paragraphs are incorrectly classified 
as Other when thresholds are set higher Since our classifier will be used in future work as an early step in a data analysis pipeline for determining readmission risk misclassifying a paragraph with an incorrect risk factor domain at this stage can lead to greater inaccuracies at later stages Paragraphs misclassified as Other however will be discarded from the data pipeline Therefore we intentionally set a conservative threshold where only the most confidently labeled paragraphs are assigned membership in a particular domain To achieve our goal of creating a framework for a readmission risk classifier the present study performed necessary evaluation steps by updating and adding to our model iteratively In the first stage of the project we focused on collecting the data necessary for training and testing and on the domain classification annotation task At the same time we began creating the tools necessary for automatically extracting domain relevance scores at the paragraph and document level from patient EHRs using several forms of vectorization and topic modeling In future versions of our risk factor domain classification model we will explore increasing robustness through sequence modeling that considers more contextual information Our current feature set for training a machine learning classifier is relatively small consisting of paragraph domain scores bagofwords length of stay and number of previous admissions but we intend to factor in many additional features that extend beyond the scope of the present study These include a deeper analysis of clinical narratives in EHRs our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain This will involve a series of annotation tasks that will allow us to generate lexiconbased and corpusbased sentiment analysis tools We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time We will also take into account structured data that have been collected on the target cohort throughout the course of this study such as brain based electrophysiological EEG biomarkers structural brain anatomy from MRI scans gray matter volume cortical thickness cortical surfacearea social and role functioning assessments personality assessment NEOFFI9 and various symptom scales PANSS10 MADRS11 YMRS12 For each feature we consider adding we will evaluate the performance of the classifier with and without the feature to determine its contribution as a predictor of readmission This work was supported by a grant from the National Institute of Mental Health grant no 5R01MH109687 to MeiHua Hall We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments 9NEO FiveFactor Inventory BIBREF30 10Positive and Negative Syndrome Scale BIBREF31 11MontgomeryAsperg Depression Rating Scale BIBREF32 12Young Mania Rating Scale BIBREF33 </s>
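Below is a minimal sketch of the multi-label MLP classifier described in the preceding sections: SVD-reduced TF-IDF vectors feed a three-layer network with dropout of 0.2 on the input and 0.5 on the hidden layer, and seven independent sigmoid outputs, one per risk factor domain. The hidden-layer width is an assumption (it is not reported above), and the uniform 0.5 threshold in the usage example is a simplification of the per-domain thresholds that the paper derives from similarity-score statistics.

```python
import numpy as np
from tensorflow.keras import layers, models

NUM_DOMAINS = 7      # Appearance, Mood, Interpersonal, Occupation,
                     # Thought Content, Thought Process, Substance
INPUT_DIM = 100      # SVD-reduced TF-IDF vectors, as described above
HIDDEN_UNITS = 64    # assumption: hidden-layer width is not reported

model = models.Sequential([
    layers.Dropout(0.2, input_shape=(INPUT_DIM,)),   # dropout on the input layer
    layers.Dense(HIDDEN_UNITS, activation="relu"),
    layers.Dropout(0.5),                             # dropout on the hidden layer
    # one sigmoid per risk factor domain: multi-class, multi-label, open-world
    layers.Dense(NUM_DOMAINS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def classify(paragraph_vectors, thresholds):
    """Assign every domain whose score clears its per-domain threshold;
    paragraphs that clear none are labelled 'Other'."""
    scores = model.predict(paragraph_vectors, verbose=0)
    labels = []
    for row in scores:
        hits = [d for d, (s, t) in enumerate(zip(row, thresholds)) if s >= t]
        labels.append(hits if hits else ["Other"])
    return labels

# Toy usage with random feature vectors and a uniform threshold per domain:
X = np.random.rand(3, INPUT_DIM).astype("float32")
print(classify(X, thresholds=[0.5] * NUM_DOMAINS))
```

The sigmoid output layer is what allows a single paragraph to receive several domain labels, or none at all, which the softmax alternative discussed above would not permit.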
<s>[INST] Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation [/INST] Neural machine translation NMT has achieved impressive performance on machine translation task in recent years for many language pairs BIBREF0 BIBREF1 BIBREF2 However in consideration of time cost and space capacity the NMT model generally employs a limitedsize vocabulary that only contains the topN highest frequency words commonly in the range of 30K to 80K BIBREF3 which leads to the OutofVocabulary OOV problem following with inaccurate and terrible translation results Research indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words For the lowresource and sourceside morphologicallyrich machine translation tasks such as TurkishEnglish and UyghurChinese all the above issues are more serious due to the fact that the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information with too many rare and unknown words in the training corpus Both the Turkish and Uyghur are agglutinative and highlyinflected languages in which the word is formed by suffixes attaching to a stem BIBREF4 The word consists of smaller morpheme units without any splitter between them and its structure can be denoted as stem suffix1 suffix2 suffixN A stem is attached in the rear by zero to many suffixes that have many inflected and morphological variants depending on case number gender and so on The complex morpheme structure and relatively free constituent order can produce very large vocabulary because of the derivational morphology so when translating from the agglutinative languages many words are unseen at training time Moreover due to the semantic context the same word generally has different segmentation forms in the training corpus For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT we propose a morphological word segmentation method on the sourceside of TurkishEnglish and UyghurChinese machine translation tasks which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training In this paper we investigate and compare the following segmentation strategies Stem with combined suffix Stem with singular suffix Byte Pair Encoding BPE BPE on stem with combined suffix BPE on stem with singular suffix The latter two segmentation strategies are our newly proposed methods Experimental results show that our morphologically motivated word segmentation method can achieve significant improvement of up to 12 and 25 BLEU points on TurkishEnglish and UyghurChinese machine translation tasks over the strong baseline of pure BPE method respectively indicating that it can provide better translation performance for the NMT model We will elaborate two popular word segmentation methods and our newly proposed segmentation strategies in this section The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding BPE BIBREF5 After word segmentation we additionally add an specific symbol behind each separated subword unit which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively The sentence examples with different segmentation strategies for TurkishEnglish machine translation task are shown in Table 1 The words of Turkish and Uyghur are formed by a stem followed with unlimited number of 
suffixes Both the stem and the suffix are called morphemes and they are the smallest functional units in agglutinative languages Studies indicate that modeling language based on morpheme units can provide better performance BIBREF6 Morpheme segmentation can segment the complex word into morpheme units of stem and suffix This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by inflection and allomorphy phenomena in highlyinflected languages In this segmentation strategy each word is segmented into a stem unit and a combined suffix unit We add behind the stem unit and add behind the combined suffix unit We denote this method as SCS The segmented word can be denoted as two parts of stem and suffix1suffix2suffixN If the original word has no suffix unit the word is treated as its stem unit All the following segmentation strategies will follow this rule In this segmentation strategy each word is segmented into a stem unit and a sequence of suffix units We add behind the stem unit and add behind each singular suffix unit We denote this method as SSS The segmented word can be denoted as a sequence of stem suffix1 suffix2 until suffixN BPE BIBREF7 is originally a data compression technique and it is adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding the rare and unknown words as a sequence of subword units in which the most frequent character sequences are merged iteratively Frequent character ngrams are eventually merged into a single symbol This is based on the intuition that various word classes are translatable via smaller units than words This method makes the NMT model capable of openvocabulary translation as it can generalize to translate and produce new words on the basis of these subword units The BPE algorithm can be run on the dictionary extracted from a training text with each word being weighted by its frequency In this segmentation strategy we add behind each nofinal subword unit of the segmented word The problem with morpheme segmentation is that the vocabulary of stem units is still very large which leads to many rare and unknown words at training time The problem with BPE is that it does not consider the morpheme boundaries inside words which might cause a loss of morphological properties and semantic information Hence based on the analyses of the above popular word segmentation methods we propose a morphologically motivated segmentation strategy that combines morpheme segmentation and BPE to further improve the translation performance of NMT Compared with the sentence of word surface forms the corresponding sentence of stem units only contains the structure information without considering morphological information which enables better generalization over inflectional variants of the same word and reduces data sparseness BIBREF8 Therefore we learn a BPE model on the stem units in the training corpus rather than the words and then apply it to the stem unit of each word after morpheme segmentation In this segmentation strategy firstly we segment each word into a stem unit and a combined suffix unit as in SCS Secondly we apply BPE on the stem unit Thirdly we add behind the combined suffix unit If the stem unit is not segmented we add behind itself Otherwise we add behind each nofinal subword of the segmented stem unit We denote this method as BPESCS In this segmentation strategy firstly we segment each word into a stem unit and a sequence of suffix units as in SSS Secondly
we apply BPE on the stem unit Thirdly we add behind each singular suffix unit If the stem unit is not segmented we add behind itself Otherwise we add behind each nofinal subword of the segmented stem unit We denote this method as BPESSS Following BIBREF9 we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training and use the newsdev2016 from Workshop on Machine Translation in 2016 WMT2016 for validation The test data are newstest2016 and newstest2017 We use the news data from China Workshop on Machine Translation in 2017 CWMT2017 for model training validation and test We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units We employ the python toolkits of jieba for Chinese word segmentation We apply BPE on the targetside words and we set the number of merge operations to 35K for Chinese and 30K for English and we set the maximum sentence length to 150 tokens The training corpus statistics of TurkishEnglish and UyghurChinese machine translation tasks are shown in Table 2 and Table 3 respectively We set the number of merge operations on the stem units in the consideration of keeping the vocabulary size of BPE BPESCS and BPESSS segmentation strategies on the same scale We will elaborate the number settings for our proposed word segmentation strategies in this section In the TurkishEnglish machine translation task for the pure BPE strategy we set the number of merge operations on the words to 35K set the number of merge operations on the stem units for BPESCS strategy to 15K and set the number of merge operations on the stem units for BPESSS strategy to 25K In the UyghurChinese machine translation task for the pure BPE strategy we set the number of merge operations on the words to 38K set the number of merge operations on the stem units for BPESCS strategy to 10K and set the number of merge operations on the stem units for BPESSS strategy to 35K The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively According to Table 4 and Table 5 we can find that both the Turkish and Uyghur have a very large vocabulary even in the lowresource training corpus So we propose the morphological word segmentation strategies of BPESCS and BPESSS that additionally applying BPE on the stem units after morpheme segmentation which not only consider the morphological properties but also eliminate the rare and unknown words We employ the Transformer model BIBREF13 with selfattention mechanism architecture implemented in Sockeye toolkit BIBREF14 Both the encoder and decoder have 6 layers We set the number of hidden units to 512 the number of heads for selfattention to 8 the source and target word embedding size to 512 and the number of hidden units in feedforward layers to 2048 We train the NMT model by using the Adam optimizer BIBREF15 with a batch size of 128 sentences and we shuffle all the training data at each epoch The label smoothing is set to 01 We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity Decoding is performed by beam search with beam size of 5 To effectively evaluate the machine translation quality we report casesensitive BLEU score with standard tokenization and character ngram ChrF3 score In this paper we investigate and compare morpheme segmentation BPE and our proposed morphological 
segmentation strategies on the lowresource and morphologicallyrich agglutinative languages Experimental results of the TurkishEnglish and UyghurChinese machine translation tasks are shown in Table 6 and Table 7 respectively According to Table 6 and Table 7 we can find that both the BPESCS and BPESSS strategies outperform morpheme segmentation and the strong baseline of the pure BPE method In particular the BPESSS strategy is better and achieves a significant improvement of up to 12 BLEU points on the TurkishEnglish machine translation task and 25 BLEU points on the UyghurChinese machine translation task Furthermore we also find that the improvement from our proposed segmentation strategy on the TurkishEnglish machine translation task is less pronounced than on the UyghurChinese machine translation task The probable reason is that the training corpus of TurkishEnglish consists of talk and news data and most of the talk data are short informal sentences which compared with the news data cannot provide as much language information for the NMT model Moreover the test corpus consists of news data so because the data domains differ the improvement of machine translation quality is limited In addition we estimate how the number of merge operations on the stem units for the BPESSS strategy affects the machine translation quality Experimental results are shown in Table 8 and Table 9 We find that 25K merge operations for Turkish and 30K and 35K for Uyghur maximize the translation performance The probable reason is that these numbers of merge operations are able to generate a more appropriate vocabulary that contains effective morpheme units and moderate subword units which generalizes better over the morphologicallyrich words The NMT system is typically trained with a limited vocabulary which creates a bottleneck for translation accuracy and generalization capability Many word segmentation methods have been proposed to cope with the above problems which consider the morphological properties of different languages Bradbury and Socher BIBREF16 employed the modified Morfessor to incorporate morphology knowledge into word segmentation but they neglected the morphological varieties between subword units which might result in ambiguous translation results SanchezCartagena and Toral BIBREF17 proposed a rulebased morphological word segmentation method for Finnish which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles Huck BIBREF18 explored a targetside segmentation method for German which shows that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results Ataman et al BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish which optimizes the segmentation complexity with a constraint on the vocabulary based on a categorybased hidden Markov model HMM Our work is closely related to their idea but ours is simpler and easier to realize Tawfik et al BIBREF20 confirmed that there is some advantage from using a high accuracy dialectal segmenter jointly with a language independent word segmentation method like BPE The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model while ours does not need any external resources which is very convenient for word segmentation on the lowresource and morphologicallyrich agglutinative languages In this paper we investigate morphological segmentation strategies on the lowresource and morphologicallyrich languages of Turkish and
Uyghur Experimental results show that our proposed morphologically motivated word segmentation method is better suitable for NMT And the BPESSS strategy achieves the best machine translation performance as it can better preserve the syntactic and semantic information of the words with complex morphology as well as reduce the vocabulary size for model training Moreover we also estimate how the number of merge operations on the stem units for BPESSS strategy effects the translation quality and we find that an appropriate vocabulary size is more useful for the NMT model In future work we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the lowresource and morphologicallyrich languages This work is supported by the National Natural Science Foundation of China the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region the Youth Innovation Promotion Association of the Chinese Academy of Sciences and the Highlevel Talents Introduction Project of Xinjiang Uyghur Autonomous Region </s>
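To make the BPE-SSS strategy described above concrete, the sketch below learns BPE merges on stem units only (with a configurable merge count, standing in for the 25K/30K settings discussed above) and leaves suffix units intact. This is a minimal pure-Python illustration under assumptions, not the authors' implementation: `segment_word` is a hypothetical stand-in for a real morphological analyzer, the toy corpus is invented, and keeping suffixes unsplit is one reasonable reading of the strategy.

```python
from collections import Counter

def learn_bpe(tokens, num_merges):
    """Learn BPE merge operations from a list of units (here: stems only).
    Each unit is a character sequence; the most frequent adjacent symbol
    pair is merged at every step, as in standard BPE."""
    vocab = Counter(" ".join(tok) for tok in tokens)   # "stem" -> "s t e m"
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for word, freq in vocab.items():
            symbols, out, i = word.split(), [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1]); i += 2
                else:
                    out.append(symbols[i]); i += 1
            merged[" ".join(out)] = freq
        vocab = merged
    return merges

def apply_bpe(token, merges):
    symbols = list(token)
    for a, b in merges:                       # apply merges in learned order
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b); i += 2
            else:
                out.append(symbols[i]); i += 1
        symbols = out
    return symbols

# Hypothetical stand-in for a morphological analyzer (e.g. a Turkish stemmer):
# it must return a (stem, suffix) split for each surface word.
def segment_word(word):
    return (word[:-2], word[-2:]) if len(word) > 4 else (word, "")

corpus = ["evlerinde", "evler", "kitaplar", "kitaplarda"]   # toy Turkish-like words
stems = [segment_word(w)[0] for w in corpus]

# Merges are learned on stem units only; suffix units are kept as-is.
merges = learn_bpe(stems, num_merges=10)
for word in corpus:
    stem, suffix = segment_word(word)
    print(word, "->", apply_bpe(stem, merges) + ([suffix] if suffix else []))
```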
<s>[INST] Deja-vu: Double Feature Presentation and Iterated Loss in Deep Transformer Networks [/INST] In this paper we propose the processing of features not only in the input layer of a deep network but in the intermediate layers as well We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0 BIBREF1 BIBREF2 Other methods have involved characterizing the input via factor analysis or ivectors BIBREF3 BIBREF4 Here we suggest an alternative approach in which adaptation can be achieved by representing the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system We present this work in the context of Transformer networks BIBREF5 Transformers have become a popular deep learning architecture for modeling sequential datasets showing improvements in many tasks such as machine translation BIBREF5 language modeling BIBREF6 and autoregressive image generation BIBREF7 In the speech recognition field Transformers have been proposed to replace recurrent neural network RNN architectures such as LSTMs and GRUs BIBREF8 A recent survey of Transformers in many speech related applications may be found in BIBREF9 Compared to RNNs Transformers have several advantages specifically an ability to aggregate information across all the timesteps by using a selfattention mechanism Unlike RNNs the hidden representations do not need to be computed sequentially across time thus enabling significant efficiency improvements via parallelization In the context of Transformer module secondary feature analysis is enabled through an additional midnetwork transformer module that has access both to previouslayer activations and the raw features To implement this model we apply the objective function several times at the intermediate layers to encourage the development of phonetically relevant hypotheses Interestingly we find that the iterated use of an auxiliary loss in the intermediate layers significantly improves performance by itself as well as enabling the secondary feature analysis This paper makes two main contributions We present improvements in the basic training process of deep transformer networks specifically the iterated use of CTC or CE in intermediate layers and We show that an intermediatelayer attention model with access to both previouslayer activations and raw feature inputs can significantly improve performance We evaluate our proposed model on Librispeech and a largescale video dataset From our experimental results we observe 1020 relative improvement on Librispeech and 3211 on the video dataset A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data A transformer network is itself constructed with a series of transformer modules that each perform some processing Each module has a selfattention mechanism and several feedforward layers enabling easy parallelization over timesteps compared to recurrent models such as RNNs or LSTMs BIBREF10 We use the architecture defined in BIBREF5 and provide only a brief summary below Assume we have an input sequence that is of length S X x1xS Each xi is itself a vector of activations A transformer layer encodes X 
into a corresponding output representation Z z1zS as described below Transformers are built around the notion of a selfattention mechanism that is used to extract the relevant information for each timestep s from all timesteps 1S in the preceding layer Self attention is defined in terms of a Query Key Value triplet lbrace Q K Vrbrace in mathbb RS times dk In selfattention the queries keys and values are the columns of the input itself x1xS The output activations are computed as Transformer modules deploy a multiheaded version of selfattention As described in BIBREF5 this is done by linearly projecting the queries keys and values P times with different learned linear projections Selfattention is then applied to each of these projected versions of Queries Keys and Values These are concatenated and once again projected resulting in the final values We refer to the input projection matrices as WpQ WpK WpV and to the output projection as WO Multihead attention is implemented as Here WpQ WpK WpV in mathbb Rdk times dm dm dk P and WO in mathbb RPdm times dk After selfattention a transformer module applies a series of linear layer RELU layernorm and dropout operations as well as the application of residual connections The full sequence of processing is illustrated in Figure FIGREF3 In this section we present our proposal for allowing the network to reconsider the input features in the light of intermediate processing We do this by again deploying a selfattention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer As described earlier we calculate the output posteriors and auxiliary loss at the intermediate layer as well The overall architecture is illustrated in Figure FIGREF6 Here we have used a 24 layer network with feature representation after the 12th layer In the following subsections we provide detail on the feature representation mechanism and iterated loss calculation We process the features in the intermediate later by concatenating a projection of the original features with a projection of previous hidden layer activations and then applying selfattention First we project both the the input and intermediate layer features Z0 in mathbb RS times d0 Zk in mathbb RS times dk apply layer normalization and concatenate with position encoding where d0 is the input feature dimension dk is the Transformer output dimension W1 in mathbb Rd0 times dc W2 in mathbb Rdk times dc and E in mathbb RS times de is a sinusoidal position encoding BIBREF5 After we project both information sources to the same dimensionality we merge the information by using timeaxis concatenation Then we extract relevant features with extra Transformer layer and followed by linear projection and ReLU where W3 in mathbb Rdk1prime times dk1 is a linear projection All biases in the formula above are omitted for simplicity Note that in doing timeaxis concatenation our Key and Value sequences are twice as long as the original input In the standard selfattention where the Query is the same as the Key and Value the output preserves the sequence length Therefore in order to maintain the necessary sequence length S we select either the first half split A or the second half split B to represent the combined information The difference between these two is that the use of split A uses the projected input features as the Query set while split B uses the projected higher level activations as the Query In initial experiments we found that the use of 
highlevel features split B as queries is preferable We illustrates this operation on Figure FIGREF11 Another way of combining information from the features with an intermediate layer is to concatenate the two along with the feature rather than the time axis However in initial experiments we found that time axis concatenation produces better results and focus on that in the experimental results We have found it beneficial to apply the loss function at several intermediate layers of the network Suppose there are M total layers and define a subset of these layers at which to apply the loss function K lbrace k1 k2 kLrbrace subseteq lbrace 1M1rbrace The total objective function is then defined as where Zkl is the klth Transformer layer activations Y is the groundtruth transcription for CTC and context dependent states for hybrid ASR and LossP Y can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR The coefficient lambda scales the auxiliary loss and we set lambda 03 based on our preliminary experiments We illustrate the auxiliary prediction and loss in Figure FIGREF6 We evaluate our proposed module on both the Librispeech BIBREF12 dataset and a largescale English video dataset In the Librispeech training set there are three splits containing 100 and 360 hours sets of clean speech and 500 hours of other speech We combined everything resulting in 960 hours of training data For the development set there are also two splits devclean and devother For the test set there is an analogous split The video dataset is a collection of public and anonymized English videos It consists of a 1000 hour training set a 9 hour dev set and a 461 hour test set The test set comprises an 85 hour curated set of carefully selected very clean videos a 19 hour clean set and a 186 hour noisy set BIBREF13 For the hybrid ASR experiments on video dataset alignments were generated with a production system trained with 14k hours All speech features are extracted by using log Melfilterbanks with 80 dimensions a 25 ms window size and a 10 ms time step between two windows Then we apply mean and variance normalization For CTC training we use wordpieces as our target During training the reference is tokenized to 5000 subword units using sentencepiece with a unigram language model BIBREF14 Neural networks are thus used to produce a posterior distribution for 5001 symbols 5000 subword units plus blank symbol every frame For decoding each subword is modeled by a HMM with two states where the last states share the same blank symbol probability the best subword segmentation of each word is used to form a lexicon these HMMs lexicon are then combined with the standard ngram via FST BIBREF15 to form a static decoding graph Kaldi decoderBIBREF16 is used to produce the best hypothesis We further present results with hybrid ASR systems In this we use the same HMM topology GMM bootstrapping and decision tree building procedure as BIBREF13 Specifically we use contextdependent CD graphemes as modeling units On top of alignments from a GMM model we build a decision tree to cluster CD graphemes This results in 7248 context dependent units for Librispeech and 6560 units for the video dataset Training then proceeds with the CE loss function We also apply SpecAugment BIBREF17 online during training using the LD policy without time warping For decoding a standard Kaldis WFST decoder BIBREF16 is used All neural networks are implemented with the inhouse extension of the fairseq BIBREF18 toolkit Our speech features are produced by 
processing the log Melspectrogram with two VGG BIBREF19 layers that have the following configurations 1 two 2D convolutions with 32 output filters kernel3 stride1 ReLU activation and maxpooling kernel2 2 two 2D convolutions with 64 output filters kernel3 stride1 and maxpooling kernel2 for CTC or maxpooling kernel1 for hybrid After the VGG layers the total number of frames are subsampled by i 4x for CTC or ii 2x for hybrid thus enabling us to reduce the runtime and memory usage significantly After VGG processing we use 24 Transformer layers with dk512 head dimensions 8 heads each head has 64 dimensions 2048 feedforward hidden dimensions total parameters pm 80 millions and dropout 015 For the proposed models we utilized an auxiliary MLP with two linear layers with 256 hidden units LeakyReLU activation and softmax see Sec SECREF3 We set our position encoding dimensions de256 and preconcatenation projection dc768 for the feature representation layer The loss function is either CTC loss or hybrid CE loss Table TABREF19 presents CTC based results for the Librispeech dataset without data augmentation Our baseline is a 24 layer Transformer network trained with CTC For the proposed method we varied the number and placement of iterated loss and the feature representation The next three results show the effect of using CTC multiple times We see 12 and 8 relative improvements for testclean and testother Adding feature representation gives a further boost with net 20 and 18 relative improvements over the baseline Table TABREF20 shows results for Librispeech with SpecAugment We test both CTC and CEhybrid systems There are consistent gains first from iterated loss and then from multiple feature presentation We also run additional CTC experiments with 36 layers Transformer total parameters pm 120 millions The baseline with 36 layers has the same performance with 24 layers but by adding the proposed methods the 36 layer performance improved to give the best results This shows that our proposed methods can improve even very deep models As shown in Table TABREF21 the proposed methods also provide large performance improvements on the curated video set up to 13 with CTC and up to 9 with the hybrid model We also observe moderate gains of between 32 and 8 relative on the clean and noisy video sets In recent years Transformer models have become an active research topic in speech processing The key features of Transformer networks is selfattention which produces comparable or better performance to LSTMs when used for encoderdecoder based ASR BIBREF23 as well as when trained with CTC BIBREF9 SpeechTransformers BIBREF24 also produce comparable performance to the LSTMbased attention model but with higher training speed in a single GPU Abdelrahman et alBIBREF8 integrates a convolution layer to capture audio context and reduces WER in Librispeech The use of an objective function in intermediate layers has been found useful in several previous works such as image classification BIBREF25 and language modeling BIBREF26 In BIBREF27 the authors did pretraining with an RNNT based model by using a hierarchical CTC criterion with different target units In this paper we dont need additional types of target unit instead we just use same tokenization and targets for both intermediate and final losses The application of the objective function to intermediate layers is also similar in spirit to the use of KLdivergence in BIBREF28 which estimates output posteriors at an intermediate layer and regularizes them towards the 
distributions at the final layer. In contrast to that approach, the direct application of the objective function does not require the network to have a good output distribution before the new gradient contribution becomes meaningful.

In this paper we have proposed a method for reprocessing the input features in light of the information available at an intermediate network layer. We do this in the context of deep Transformer networks, via a self-attention mechanism over both the features and the hidden-state representations. To encourage meaningful partial results, we calculate the objective function at intermediate layers of the network as well as at the output layer. This improves performance in and of itself, and when combined with feature representation we observe consistent relative improvements of 10-20% for Librispeech and 3.2-13% for videos. </s>
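As a concrete illustration of the mid-network feature re-presentation described above, the sketch below projects the raw features and the intermediate-layer activations to a common dimension, concatenates them along the time axis, and applies one self-attention step using the projected high-level activations as the queries (the "split B" choice). This is a hedged PyTorch sketch under assumed dimensions, not the authors' code: the sinusoidal position encoding and the full extra Transformer layer from the description are collapsed into a single multi-head attention plus projection for brevity.

```python
import torch
import torch.nn as nn

class FeatureRePresentation(nn.Module):
    """Re-attend to the raw features at an intermediate Transformer layer.

    Queries come from the projected intermediate activations ("split B");
    keys/values are the time-axis concatenation of projected raw features
    and projected intermediate activations, so they are twice as long.
    """
    def __init__(self, d_feat=80, d_model=512, d_proj=768, n_heads=8):
        super().__init__()
        self.proj_feat = nn.Linear(d_feat, d_proj)    # projection of raw features
        self.proj_hid = nn.Linear(d_model, d_proj)    # projection of layer-k output
        self.norm_feat = nn.LayerNorm(d_proj)
        self.norm_hid = nn.LayerNorm(d_proj)
        self.attn = nn.MultiheadAttention(d_proj, n_heads, batch_first=True)
        self.out = nn.Sequential(nn.Linear(d_proj, d_model), nn.ReLU())

    def forward(self, z0, zk):
        # z0: (B, S, d_feat) raw features; zk: (B, S, d_model) layer-k activations
        f = self.norm_feat(self.proj_feat(z0))
        h = self.norm_hid(self.proj_hid(zk))
        kv = torch.cat([f, h], dim=1)          # time-axis concatenation: (B, 2S, d_proj)
        # "Split B": the projected high-level activations serve as queries,
        # which keeps the output at the original sequence length S.
        y, _ = self.attn(query=h, key=kv, value=kv)
        return self.out(y)                     # (B, S, d_model), fed to layer k+1

# Usage on dummy shapes:
block = FeatureRePresentation()
z0, zk = torch.randn(2, 100, 80), torch.randn(2, 100, 512)
print(block(z0, zk).shape)                     # torch.Size([2, 100, 512])
```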
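The iterated-loss idea can likewise be written down compactly: the same criterion (CTC here) is applied at a chosen subset of intermediate layers and added to the final-layer loss with a small weight, matching the lambda of 0.3 quoted above. The sketch below is an assumed PyTorch formulation with hypothetical prediction heads (a single linear projection per tapped layer, where the paper describes a small two-layer MLP), not the authors' training code.

```python
import torch
import torch.nn as nn

class IteratedCTCLoss(nn.Module):
    """Final-layer CTC loss plus lambda-weighted CTC losses at chosen layers."""
    def __init__(self, d_model, n_symbols, aux_layers=(6, 12, 18), lam=0.3):
        super().__init__()
        self.aux_layers = set(aux_layers)
        self.lam = lam
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)
        # One small prediction head per tapped intermediate layer.
        self.heads = nn.ModuleDict({str(k): nn.Linear(d_model, n_symbols)
                                    for k in aux_layers})
        self.final_head = nn.Linear(d_model, n_symbols)

    def forward(self, layer_outputs, targets, in_lens, tgt_lens):
        # layer_outputs: dict {layer_index: (T, B, d_model)}, last layer included
        last = max(layer_outputs)

        def ctc_of(head, h):
            log_probs = head(h).log_softmax(dim=-1)      # (T, B, n_symbols)
            return self.ctc(log_probs, targets, in_lens, tgt_lens)

        loss = ctc_of(self.final_head, layer_outputs[last])
        for k in self.aux_layers:
            loss = loss + self.lam * ctc_of(self.heads[str(k)], layer_outputs[k])
        return loss
```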
<s>[INST] Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge [/INST] A widely agreedon fact in language acquisition research is that learning of a second language L2 is influenced by a learners native language L1 BIBREF0 BIBREF1 A languages morphosyntax seems to be no exception to this rule BIBREF2 but the exact nature of this influence remains unknown For instance it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2s morphosyntax Within the area of natural language processing NLP research experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3 BIBREF4 BIBREF5 Often socalled probing tasks are used which require a specific subset of linguistic knowledge and can thus be leveraged for qualitative evaluation The goal is to answer the question What do neural networks learn that helps them to succeed in a given task Neural network models and specifically sequencetosequence models have pushed the state of the art for morphological inflection the task of learning a mapping from lemmata to their inflected forms in the last years BIBREF6 Thus in this work we experiment on such models asking not what they learn but motivated by the respective research on human subjects the related question of how what they learn depends on their prior knowledge We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages We aim at finding answers to two main questions i Do errors systematically differ between source languages ii Do these differences seem explainable given the properties of the source and target languages In other words we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1 ie the native language in neural network models To this goal we select a diverse set of eight source languages from different language families Basque French German Hungarian Italian Navajo Turkish and Quechua and three target languages English Spanish and Zulu We pretrain a neural sequencetosequence architecture on each of the source languages and then finetune the resulting models on small datasets in each of the target languages Analyzing the errors made by the systems we find that i source and target language being closely related simplifies the successful learning of inflection in the target language ii the task is harder to learn in a prefixing language if the source language is suffixing as well as the other way around and iii a source language which exhibits an agglutinative morphology simplifies learning of a second languages inflectional morphology Many of the worlds languages exhibit rich inflectional morphology the surface form of an individual lexical entry changes in order to express properties such as person grammatical gender or case The citation form of a lexical entry is referred to as the lemma The set of all possible surface forms or inflections of a lemma is called its paradigm Each inflection within a paradigm can be associated with a tag ie 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance We display the paradigms of dance and eat in Table TABREF1 The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity For instance while English 
verbs can have up to 5 inflected forms Archi verbs have thousands BIBREF7 even by a conservative count Thus an important task in the area of morphology is morphological inflection BIBREF8 BIBREF9 which consists of mapping a lemma to an indicated inflected form An irregular English example would be with PAST being the target tag denoting the past tense form Additionally a rich inflectional morphology is also challenging for L2 language learners since both rules and their exceptions need to be memorized In NLP morphological inflection has recently frequently been cast as a sequencetosequence problem where the sequence of target subtags together with the sequence of input characters constitute the input sequence and the characters of the inflected word form the output Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available Here we focus on learning of inflection from limited data if information about another languages morphology is already known We thus loosely simulate an L2 learning setting Let cal M be the paradigm slots which are being expressed in a language and w a lemma in that language We then define the paradigm pi of w as fkw denotes an inflected form corresponding to tag tk and w and fkw are strings consisting of letters from an alphabet Sigma The task of morphological inflection consists of predicting a missing form fiw from a paradigm given the lemma w together with the tag ti The models we experiment with are based on a pointergenerator network architecture BIBREF10 BIBREF11 ie a recurrent neural network RNNbased sequencetosequence network with attention and a copy mechanism A standard sequencetosequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has thus been subject to cognitively motivated experiments BIBREF14 before Here however we choose the pointergenerator variant of sharmakatrapatisharma2018K1830 since it performs better in lowresource settings which we will assume for our target languages We will explain the model shortly in the following and refer the reader to the original paper for more details Our architecture employs two separate encoders which are both bidirectional long shortterm memory LSTM networks BIBREF15 The first processes the morphological tags which describe the desired target form one by one The second encodes the sequence of characters of the input word Two separate attention mechanisms are used one per encoder LSTM Taking all respective encoder hidden states as well as the current decoder hidden state as input each of them outputs a socalled context vector which is a weighted sum of all encoder hidden states The concatenation of the two individual context vectors results in the final context vector ct which is the input to the decoder at time step t Our decoder consists of a unidirectional LSTM Unlike a standard sequencetosequence model a pointergenerator network is not limited to generating characters from the vocabulary to produce the output Instead the model gives certain probability to copying elements from the input over to the output The probability of a character yt at time step t is computed as a sum of the probability of yt given by the decoder and the probability of copying yt weighted by the probabilities of generating and copying ptextrm decyt is calculated as an LSTM update and a projection of the decoder state to the vocabulary followed by a softmax function ptextrm copyyt corresponds to the attention weights for each input 
character The model computes the probability alpha with which it generates a new output character as for context vector ct decoder state st embedding of the last output yt1 weights wc ws wy and bias vector b It has been shown empirically that the copy mechanism of the pointergenerator network architecture is beneficial for morphological generation in the lowresource setting BIBREF16 Pretraining and successive finetuning of neural network models is a common approach for handling of lowresource settings in NLP The idea is that certain properties of language can be learned either from raw text related tasks or related languages Technically pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task Finetuning refers to continuing training of such a model on a target task whose data is often limited While the sizes of the pretrained model parameters usually remain the same between the two phases the learning rate or other details of the training regime eg dropout might differ Pretraining can be seen as finding a suitable initialization of model parameters before training on limited amounts of task or languagespecific examples In the context of morphological generation pretraining in combination with finetuning has been used by kannschutze2018neural which proposes to pretrain a model on general inflection data and finetune on examples from a specific paradigm whose remaining forms should be automatically generated Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT2 BIBREF18 there general properties of language are learned using large unlabeled corpora Here we are interested in pretraining as a simulation of familiarity with a native language By investigating a finetuned model we ask the question How does extensive knowledge of one language influence the acquisition of another We choose three target languages English ENG is a morphologically impoverished language as far as inflectional morphology is concerned Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2 However it is one of the most frequently spoken and taught languages in the world making its acquisition a crucial research topic Spanish SPA in contrast is morphologically rich and disposes of much larger verbal paradigms than English Like English it is a suffixing language and it additionally makes use of internal stem changes eg o rightarrow ue Since English and Spanish are both IndoEuropean languages and thus relatively similar we further add a third unrelated target language We choose Zulu ZUL a Bantoid language In contrast to the first two it is strongly prefixing For pretraining we choose languages with different degrees of relatedness and varying morphological similarity to English Spanish and Zulu We limit our experiments to languages which are written in Latin script As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures WALS An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13 We decide on Basque EUS French FRA German DEU Hungarian HUN Italian ITA Navajo NAV Turkish TUR and Quechua QVH as source languages Basque is a language isolate Its inflectional morphology makes similarly frequent use of prefixes and suffixes with suffixes mostly being attached to nouns while prefixes and suffixes can both be employed for 
verbal inflection French and Italian are Romance languages and thus belong to the same family as the target language Spanish Both are suffixing and fusional languages German like English belongs to the Germanic language family It is a fusional predominantly suffixing language and similarly to Spanish makes use of stem changes Hungarian a FinnoUgric language and Turkish a Turkic language both exhibit an agglutinative morphology and are predominantly suffixing They further have vowel harmony systems Navajo is an Athabaskan language and the only source language which is strongly prefixing It further exhibits consonant harmony among its sibilants BIBREF19 BIBREF20 Finally Quechua a Quechuan language spoken in South America is again predominantly suffixing and unrelated to all of our target languages We mostly use the default hyperparameters by sharmakatrapatisharma2018K1830 In particular all RNNs have one hidden layer of size 100 and all input and output embeddings are 300dimensional For optimization we use ADAM BIBREF21 Pretraining on the source language is done for exactly 50 epochs To obtain our final models we then finetune different copies of each pretrained model for 300 additional epochs for each target language We employ dropout BIBREF22 with a coefficient of 03 for pretraining and since that dataset is smaller with a coefficient of 05 for finetuning We make use of the datasets from the CoNLLSIGMORPHON 2018 shared task BIBREF9 The organizers provided a low medium and high setting for each language with 100 1000 and 10000 examples respectively For all L1 languages we train our models on the highresource datasets with 10000 examples For finetuning we use the lowresource datasets In Table TABREF18 we show the final test accuracy for all models and languages Pretraining on EUS and NAV results in the weakest target language inflection models for ENG which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing while ENG is a suffixing language cf Table TABREF13 In contrast HUN and ITA yield the best final models for ENG This is surprising since DEU is the language in our experiments which is closest related to ENG For SPA again HUN performs best followed closely by ITA While the good performance of HUN as a source language is still unexpected ITA is closely related to SPA which could explain the high accuracy of the final model As for ENG pretraining on EUS and NAV yields the worst final models importantly accuracy is over 15 lower than for QVH which is also an unrelated language This again suggests that the prefixing morphology of EUS and NAV might play a role Lastly for ZUL all models perform rather poorly with a minimum accuracy of 107 and 108 for the source languages QVH and EUS respectively and a maximum accuracy of 249 for a model pretrained on Turkish The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language something which could also account for the performance of models pretrained on HUN For our qualitative analysis we make use of the validation set Therefore we show validation set accuracies in Table TABREF19 for comparison As we can see the results are similar to the test set results for all language combinations We manually annotate the outputs for the first 75 development examples for each sourcetarget language combination All found errors are categorized as belonging to one of the following categories SUBX This error consists of a wrong substitution of one 
character with another SUBV and SUBC denote this happening with a vowel or a consonant respectively Letters that differ from each other by an accent count as different vowels Example decultared instead of decultured DELX This happens when the system ommits a letter from the output DELV and DELC refer to a missing vowel or consonant respectively Example firte instead of firtle NOCHGX This error occurs when inflecting the lemma to the gold form requires a change of either a vowel NOCHGV or a consonant NOCHGC but this is missing in the predicted form Example verto instead of vierto MULT This describes cases where two or more errors occur in the stem Errors concerning the affix are counted for separately Example aconcoonaste instead of acondicionaste ADDX This error occurs when a letter is mistakenly added to the inflected form ADDV refers to an unnecessary vowel ADDC refers to an unnecessary consonant Example compillan instead of compilan CHG2EX This error occurs when inflecting the lemma to the gold form requires a change of either a vowel CHG2EV or a consonant CHG2EC and this is done but the resulting vowel or consonant is incorrect Example propace instead of propague AFF This error refers to a wrong affix This can be either a prefix or a suffix depending on the correct target form Example ezoJulayi instead of esikaJulayi CUT This consists of cutting too much of the lemmas prefix or suffix before attaching the inflected forms prefix or suffix respectively Example irradiseis instead of irradiaseis REFL This happens when a reflective pronoun is missing in the generated form Example doliramos instead of nos doliramos REFLLOC This error occurs if the reflective pronouns appears at an unexpected position within the generated form Example taparsebais instead of os tapabais OVERREG Overregularization errors occur when the model predicts a form which would be correct if the lemmas inflections were regular but they are not Example underteach instead of undertaught Table TABREF35 displays the errors found in the 75 first ENG development examples for each source language From Table TABREF19 we know that HUN ITA TUR DEU FRA QVH NAV EUS and we get a similar picture when analyzing the first examples Thus especially keeping HUN and TUR in mind we cautiously propose a first conclusion familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new languages morphology Looking at the types of errors we find that EUS and NAV make the most stem errors For QVH we find less but still over 10 more than for the remaining languages This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time to learn ENG inflectional morphology and in particular to copy the stem correctly Thus our second hypotheses is that familiarity with a prefixing language might lead to suspicion of needed changes to the part of the stem which should remain unaltered in a suffixing language DELX and ADDX errors are particularly frequent for EUS and NAV which further suggests this conclusion Next the relatively large amount of stem errors for QVH leads to our second hypothesis language relatedness does play a role when trying to produce a correct stem of an inflected form This is also implied by the number of MULT errors for EUS NAV and QVH as compared to the other languages Considering errors related to the affixes which have to be generated we find that DEU HUN and ITA make the fewest This further suggests the conclusion that especially since DEU is the 
language which is closest related to ENG language relatedness plays a role for producing suffixes of inflected forms as well Our last observation is that many errors are not found at all in our data sample eg CHG2EX or NOCHGC This can be explained by ENG having a relatively poor inflectional morphology which does not leave much room for mistakes The errors committed for SPA are shown in Table TABREF37 again listed by source language Together with Table TABREF19 it gets clear that SPA inflectional morphology is more complex than that of ENG systems for all source languages perform worse Similarly to ENG however we find that most stem errors happen for the source languages EUS and NAV which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one Especially MULT errors are much more frequent for EUS and NAV than for all other languages ADDX happens a lot for EUS while ADDC is also frequent for NAV Models pretrained on either language have difficulties with vowel changes which reflects in NOCHGV Thus we conclude that this phenomenon is generally hard to learn Analyzing next the errors concerning affixes we find that models pretrained on HUN ITA DEU and FRA in that order commit the fewest errors This supports two of our previous hypotheses First given that ITA and FRA are both from the same language family as SPA relatedness seems to be benficial for learning of the second language Second the system pretrained on HUN performing well suggests again that a source language with an agglutinative as opposed to a fusional morphology seems to be beneficial as well In Table TABREF39 the errors for Zulu are shown and Table TABREF19 reveals the relative performance for different source languages TUR HUN DEU ITA FRA NAV EUS QVH Again TUR and HUN obtain high accuracy which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language Besides that results differ from those for ENG and SPA First of all more mistakes are made for all source languages However there are also several finer differences For ZUL the model pretrained on QVH makes the most stem errors in particular 4 more than the EUS model which comes second Given that ZUL is a prefixing language and QVH is suffixing this relative order seems important QVH also committs the highest number of MULT errors The next big difference between the results for ZUL and those for ENG and SPA is that DELX and ADDX errors which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV are now most present in the outputs of suffixing languages Namely DELC occurs most for FRA and ITA DELV for FRA and QVH and ADDC and ADDV for HUN While some deletion and insertion errors are subsumed in MULT this does not fully explain this difference For instance QVH has both the second most DELV and the most MULT errors The overall number of errors related to the affix seems comparable between models with different source languages This weakly supports the hypothesis that relatedness reduces affixrelated errors since none of the pretraining languages in our experiments is particularly close to ZUL However we do find more CUT errors for HUN and TUR again these are suffixing while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV A limitation of our work is that we only include languages that are written in Latin script An interesting 
question for future work might thus regard the effect of disjoint L1 and L2 alphabets Furthermore none of the languages included in our study exhibits a templatic morphology We make this choice because data for templatic languages is currently mostly available in nonLatin alphabets Future work could investigate languages with templatic morphology as source or target languages if needed by mapping the languages alphabet to Latin characters Finally while we intend to choose a diverse set of languages for this study our overall number of languages is still rather small This affects the generalizability of the results and future work might want to look at larger samples of languages Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLLSIGMORPHON shared tasks on morphological inflection which have been organized yearly since 2016 BIBREF6 Traditionally being focused on individual languages the 2019 edition BIBREF23 contained a task which asked for transfer learning from a highresource to a lowresource language However sourcetarget pairs were predefined and the question of how the source language influences learning besides the final accuracy score was not considered Similarly to us kyle performed a manual error analysis of morphological inflection systems for multiple languages However they did not investigate transfer learning but focused on monolingual models Outside the scope of the shared tasks kannetal2017one investigated crosslingual transfer for morphological inflection but was limited to a quantitative analysis Furthermore that work experimented with a standard sequencetosequence model BIBREF12 in a multitask training fashion BIBREF24 while we pretrain and finetune pointergenerator networks jinkann2017exploring also investigated crosslingual transfer in neural sequencetosequence models for morphological inflection However their experimental setup mimicked kannetal2017one and the main research questions were different While jinkann2017exploring asked how crosslingual knowledge transfer works during multitask training of neural sequencetosequence models on two languages we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language Besides that we differ in the artificial neural network architecture and language pairs we investigate Crosslingual transfer learning has been used for a large variety NLP of tasks eg automatic speech recognition BIBREF25 entity recognition BIBREF26 language modeling BIBREF27 or parsing BIBREF28 BIBREF29 BIBREF30 Machine translation has been no exception BIBREF31 BIBREF32 BIBREF33 Recent research asked how to automatically select a suitable source language for a given target language BIBREF34 This is similar to our work in that our findings could potentially be leveraged to find good source languages Finally a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35 BIBREF36 BIBREF37 BIBREF38 BIBREF39 BIBREF40 To name some specific examples marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition including learning of its verbal morphology in English speakers jia2003acquisition studied how Mandarin Chinesespeaking children learned the English plural morpheme nicoladis2012young studied the English past tense acquisition in ChineseEnglish and FrenchEnglish bilingual children They found that while both groups showed similar production accuracy they differed slightly 
in the type of errors they made Also considering the effect of the native language explicitly yang2004impact investigated the acquisition of the tenseaspect system in an L2 for speakers of a native language which does not mark tense explicitly Finally our work has been weakly motivated by bliss2006l2 There the author asked a question for human subjects which is similar to the one we ask for neural models How does the native language influence L2 acquisition of inflectional morphology Motivated by the fact that in humans learning of a second language is influenced by a learners native language we investigated a similar question in artificial neural network models for morphological inflection How does pretraining on different languages influence a models learning of inflection in a target language We performed experiments on eight different source languages and three different target languages An extensive error analysis of all final models showed that i for closely related source and target languages acquisition of target language inflection gets easier ii knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging as well as the other way around and iii languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language Future work might leverage those findings to improve neural network models for morphological inflection in lowresource languages by choosing suitable source languages for pretraining Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages If the exhibited patterns resemble each other computational models could be used to predict errors a person will make which in turn could be leveraged for further research or the development of educational material I would like to thank Samuel R Bowman and Kyle Gorman for helpful discussions and suggestions This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation </s>
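To make the copy mechanism of the pointer-generator decoder described above concrete, the sketch below mixes the decoder's generation distribution with a copy distribution derived from the character-attention weights, gated by a scalar alpha computed from the context vector, the decoder state, and the previous output embedding. This is a schematic PyTorch rendering under assumed dimensions, not the exact architecture of the cited model; how p_dec is produced from the decoder state is simplified to a single projection.

```python
import torch
import torch.nn as nn

class CopyGate(nn.Module):
    """p(y_t) = alpha * p_dec(y_t) + (1 - alpha) * p_copy(y_t)."""
    def __init__(self, d_ctx, d_state, d_emb, vocab_size):
        super().__init__()
        self.w_c = nn.Linear(d_ctx, 1, bias=False)
        self.w_s = nn.Linear(d_state, 1, bias=False)
        self.w_y = nn.Linear(d_emb, 1, bias=True)      # bias term b folded in here
        self.gen = nn.Linear(d_state, vocab_size)

    def forward(self, c_t, s_t, y_prev_emb, attn_weights, src_char_ids):
        # c_t: (B, d_ctx) context vector, s_t: (B, d_state) decoder state,
        # y_prev_emb: (B, d_emb) embedding of the previous output character,
        # attn_weights: (B, S) attention over input characters,
        # src_char_ids: (B, S) vocabulary ids of those input characters.
        alpha = torch.sigmoid(self.w_c(c_t) + self.w_s(s_t) + self.w_y(y_prev_emb))
        p_dec = torch.softmax(self.gen(s_t), dim=-1)            # (B, V)
        p_copy = torch.zeros_like(p_dec)
        p_copy.scatter_add_(1, src_char_ids, attn_weights)      # copy mass per character
        return alpha * p_dec + (1.0 - alpha) * p_copy

gate = CopyGate(d_ctx=200, d_state=100, d_emb=300, vocab_size=60)
B, S = 4, 12
out = gate(torch.randn(B, 200), torch.randn(B, 100), torch.randn(B, 300),
           torch.softmax(torch.randn(B, S), dim=-1), torch.randint(0, 60, (B, S)))
print(out.shape, out.sum(dim=-1))   # (4, 60); each row sums to ~1
```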
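The pretraining-then-fine-tuning regime described above (50 epochs on the 10,000-example high-resource source dataset, then 300 epochs per target language on its 100-example low-resource dataset, with dropout raised from 0.3 to 0.5) can be sketched as follows. `build_model`, the data loaders, and `train_one_epoch` are hypothetical placeholders; only the phase structure is taken from the description.

```python
import copy
import torch

def set_dropout(model, p):
    """Switch dropout probability between the pretraining and fine-tuning phases."""
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.p = p

def pretrain_then_finetune(build_model, src_loader, tgt_loaders, train_one_epoch):
    # Phase 1: pretrain on the high-resource source language (the simulated L1).
    model = build_model()
    set_dropout(model, 0.3)
    optim = torch.optim.Adam(model.parameters())
    for _ in range(50):
        train_one_epoch(model, optim, src_loader)

    # Phase 2: fine-tune one copy per target language (L2) on its small dataset.
    finetuned = {}
    for lang, loader in tgt_loaders.items():
        m = copy.deepcopy(model)          # keep the pretrained weights intact
        set_dropout(m, 0.5)               # stronger regularization for less data
        opt = torch.optim.Adam(m.parameters())
        for _ in range(300):
            train_one_epoch(m, opt, loader)
        finetuned[lang] = m
    return finetuned
```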
<s>[INST] How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages [/INST] The Cambridge Handbook of Endangered Languages BIBREF3 estimates that at least half of the 7000 languages currently spoken worldwide will no longer exist by the end of this century For these endangered languages data collection campaigns have to accommodate the challenge that many of them are from oral tradition and producing transcriptions is costly This transcription bottleneck problem can be handled by translating into a widely spoken language to ensure subsequent interpretability of the collected recordings and such parallel corpora have been recently created by aligning the collected audio with translations in a wellresourced language BIBREF1 BIBREF2 BIBREF4 Moreover some linguists suggested that more than one translation should be collected to capture deeper layers of meaning BIBREF5 This work is a contribution to the Computational Language Documentation CLD research field that aims to replace part of the manual steps performed by linguists during language documentation initiatives by automatic approaches Here we investigate the unsupervised word discovery and segmentation task using the bilingualrooted approach from BIBREF6 There words in the wellresourced language are aligned to unsegmented phonemes in the endangered language in order to identify group of phonemes and to cluster them into wordlike units We experiment with the MboshiFrench parallel corpus translating the French text into four other wellresourced languages in order to investigate language impact in this CLD approach Our results hint that this language impact exists and that models based on different languages will output different wordlike units In this work we extend the bilingual MboshiFrench parallel corpus BIBREF2 fruit of the documentation process of Mboshi Bantu C25 an endangered language spoken in CongoBrazzaville The corpus contains 5130 utterances for which it provides audio transcriptions and translations in French We translate the French into four other wellresourced languages through the use of the DeepL translator The languages added to the dataset are English German Portuguese and Spanish Table shows some statistics for the produced Multilingual Mboshi parallel corpus We use the bilingual neuralbased Unsupervised Word Segmentation UWS approach from BIBREF6 to discover words in Mboshi In this approach Neural Machine Translation NMT models are trained between language pairs using as source language the translation wordlevel and as target the language to document unsegmented phonemic sequence Due to the attention mechanism present in these networks BIBREF7 posterior to training it is possible to retrieve softalignment probability matrices between source and target sequences These matrices give us sentencelevel sourcetotarget alignment information and by using it for clustering neighbor phonemes aligned to the same translation word we are able to create segmentation in the target side The product of this approach is a set of discoveredunits translation words pairs In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6 The first one Multilingual Voting consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries The voting is performed by applying an agreement threshold T over the output boundaries This threshold 
balances between accepting all boundaries from all the bilingual models zero agreement and accepting only input boundaries discovered by all these models total agreement The second method is ANE Selection For every language pair and aligned sentence in the dataset a softalignment probability matrix is generated We use Average Normalized Entropy ANE BIBREF8 computed over these matrices for selecting the most confident one for segmenting each phoneme sequence This exploits the idea that models trained on different language pairs will have languagerelated behavior thus differing on the resulting alignment and segmentation over the same phoneme sequence The experiment settings from this paper and evaluation protocol for the Mboshi corpus Boundary Fscores using the ZRC speech reference are the same from BIBREF8 Table presents the results for bilingual UWS and multilingual leveraging For the former we reach our best result by using as aligned information the French the original aligned language for this dataset Languages closely related to French Spanish and Portuguese ranked better while our worst result used German English also performs notably well in our experiments We believe this is due to the statistics features of the resulting text We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages Since we train our systems in very lowresource settings vocabularyrelated features can impact greatly the systems capacity to languagemodel and consequently the final quality of the produced alignments Even in highresource settings it was already attested that some languages are more difficult to model than others BIBREF9 For the multilingual selection experiments we experimented combining the languages from top to bottom as they appear Table ranked by performance eg 13 means the combination of FR1 EN2 and PT3 We observe that the performance improvement is smaller than the one observed in previous work BIBREF10 which we attribute to the fact that our dataset was artificially augmented This could result in the available multilingual form of supervision not being as rich as in a manually generated dataset Finally the best boundary segmentation result is obtained by performing multilingual voting with all the languages and an agreement of 50 which indicates that the information learned by different languages will provide additional complementary evidence Lastly following the methodology from BIBREF8 we extract the most confident alignments in terms of ANE discovered by the bilingual models Table presents the top 10 most confident discovered type translation pairs Looking at the pairs the bilingual models are most confident about we observe there are some types discovered by all the bilingual models eg Mboshi word itua and the concatenation obong However the models still differ for most of their alignments in the table This hints that while a portion of the lexicon might be captured independently of the language used other structures might be more dependent of the chosen language On this note BIBREF11 suggests the notion of word cannot always be meaningfully defined crosslinguistically In this work we train bilingual UWS models using the endangered language Mboshi as target and different wellresourced languages as aligned information Results show that similar languages rank better in terms of segmentation performance and that by combining the information learned by different models segmentation is further improved This might be due to the different 
language-dependent structures that are captured by using more than one language. Lastly, we extend the bilingual Mboshi-French parallel corpus, creating a multilingual corpus for the endangered language Mboshi that we make available to the community. </s>
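A minimal sketch of the two combination methods described above: boundary voting with an agreement threshold T over the segmentations proposed by the per-language bilingual models, and ANE-based selection of the most confident soft-alignment matrix for each sentence. The entropy computation only approximates the cited "average normalized entropy", and all function names and the toy inputs are illustrative assumptions.

```python
import numpy as np

def vote_boundaries(boundary_sets, threshold=0.5):
    """Merge boundary predictions from several bilingual UWS models.

    boundary_sets: list of sets of boundary positions (one set per language pair).
    A boundary is kept if at least `threshold` of the models proposed it
    (threshold=0 keeps every proposed boundary; threshold=1 keeps only unanimous ones).
    """
    counts = {}
    for bset in boundary_sets:
        for b in bset:
            counts[b] = counts.get(b, 0) + 1
    needed = threshold * len(boundary_sets)
    return sorted(b for b, c in counts.items() if c >= needed)

def average_normalized_entropy(align):
    """Confidence score for one soft-alignment matrix (rows: target phonemes,
    columns: source words); lower average entropy = more confident alignment."""
    align = align / align.sum(axis=1, keepdims=True)
    ent = -(align * np.log(align + 1e-12)).sum(axis=1) / np.log(align.shape[1])
    return ent.mean()

def select_by_ane(alignments_per_language):
    """Pick, for one sentence, the language pair whose alignment is most confident."""
    return min(alignments_per_language,
               key=lambda lang: average_normalized_entropy(alignments_per_language[lang]))

# Toy usage: three models propose boundaries for the same phoneme sequence.
print(vote_boundaries([{3, 7, 10}, {3, 10}, {3, 7, 12}], threshold=0.5))  # [3, 7, 10]
```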
<s>[INST] Dense Information Flow for Neural Machine Translation [/INST] Neural machine translation NMT is a challenging task that attracts lots of attention in recent years Starting from the encoderdecoder framework BIBREF0 NMT starts to show promising results in many language pairs The evolving structures of NMT models in recent years have made them achieve higher scores and become more favorable The attention mechanism BIBREF1 added on top of encoderdecoder framework is shown to be very useful to automatically find alignment structure and singlelayer RNNbased structure has evolved into deeper models with more efficient transformation functions BIBREF2 BIBREF3 BIBREF4 One major challenge of NMT is that its models are hard to train in general due to the complexity of both the deep models and languages From the optimization perspective deeper models are hard to efficiently backpropagate the gradients and this phenomenon as well as its solution is better explored in the computer vision society Residual networks ResNet BIBREF5 achieve great performance in a wide range of tasks including image classification and image segmentation Residual connections allow features from previous layers to be accumulated to the next layer easily and make the optimization of the model efficiently focus on refining upper layer features NMT is considered as a challenging problem due to its sequencetosequence generation framework and the goal of comprehension and reorganizing from one language to the other Apart from the encoder block that works as a feature generator the decoder network combining with the attention mechanism bring new challenges to the optimization of the models While nowadays bestperforming NMT systems use residual connections we question whether this is the most efficient way to propagate information through deep models In this paper inspired by the idea of using dense connections for training computer vision tasks BIBREF6 we propose a densely connected NMT framework DenseNMT that efficiently propagates information from the encoder to the decoder through the attention component Taking the CNNbased deep architecture as an example we verify the efficiency of DenseNMT Our contributions in this work include i by comparing the loss curve we show that DenseNMT allows the model to pass information more efficiently and speeds up training ii we show through ablation study that dense connections in all three blocks altogether help improve the performance while not increasing the number of parameters iii DenseNMT allows the models to achieve similar performance with much smaller embedding size iv DenseNMT on IWSLT14 GermanEnglish and TurkishEnglish translation tasks achieves new benchmark BLEU scores and the result on WMT14 EnglishGerman task is more competitive than the residual connections based baseline model In this section we introduce our DenseNMT architecture In general compared with residual connected NMT models DenseNMT allows each layer to provide its information to all subsequent layers directly Figure FIGREF9 FIGREF15 show the design of our model structure by parts We start with the formulation of a regular NMT model Given a set of sentence pairs INLINEFORM0 an NMT model learns parameter INLINEFORM1 by maximizing the loglikelihood function DISPLAYFORM0 For every sentence pair INLINEFORM0 INLINEFORM1 is calculated based on the decomposition DISPLAYFORM0 where INLINEFORM0 is the length of sentence INLINEFORM1 Typically NMT models use the encoderattentiondecoder framework BIBREF1 and potentially 
use multilayer structure for both encoder and decoder Given a source sentence INLINEFORM2 with length INLINEFORM3 the encoder calculates hidden representations by layer We denote the representation in the INLINEFORM4 th layer as INLINEFORM5 with dimension INLINEFORM6 where INLINEFORM7 is the dimension of features in layer INLINEFORM8 The hidden representation at each position INLINEFORM9 is either calculated by DISPLAYFORM0 for recurrent transformation INLINEFORM0 such as LSTM and GRU or by DISPLAYFORM0 for parallel transformation INLINEFORM0 On the other hand the decoder layers INLINEFORM1 follow similar structure while getting extra representations from the encoder side These extra representations are also called attention and are especially useful for capturing alignment information In our experiments we use convolution based transformation for INLINEFORM0 due to both its efficiency and high performance more formally DISPLAYFORM0 INLINEFORM0 is the gated linear unit proposed in BIBREF11 and the kernel size is INLINEFORM1 DenseNMT is agnostic to the transformation function and we expect it to also work well combining with other transformations such as LSTM selfattention and depthwise separable convolution Different from residual connections later layers in the dense encoder are able to use features from all previous layers by concatenating them DISPLAYFORM0 Here INLINEFORM0 is defined in Eq EQREF10 INLINEFORM1 represents concatenation operation Although this brings extra connections to the network with smaller number of features per layer the architecture encourages feature reuse and can be more compact and expressive As shown in Figure FIGREF9 when designing the model the hidden size in each layer is much smaller than the hidden size of the corresponding layer in the residualconnected model While each encoder layer perceives information from its previous layers each decoder layer INLINEFORM0 has two information sources previous layers INLINEFORM1 and attention values INLINEFORM2 Therefore in order to allow dense information flow we redefine the generation of INLINEFORM3 th layer as a nonlinear function over all its previous decoder layers and previous attentions This can be written as DISPLAYFORM0 where INLINEFORM0 is the attention value using INLINEFORM1 th decoder layer and information from encoder side which will be specified later Figure FIGREF13 shows the comparison of a dense decoder with a regular residual decoder The dimensions of both attention values and hidden layers are chosen with smaller values yet the perceived information for each layer consists of a higher dimension vector with more representation power The output of the decoder is a linear transformation of the concatenation of all layers by default To compromise to the increment of dimensions we use summary layers which will be introduced in Section 33 With summary layers the output of the decoder is only a linear transformation of the concatenation of the upper few layers Prior works show a trend of designing more expressive attention mechanisms as discussed in Section 2 However most of them only use the last encoder layer In order to pass more abundant information from the encoder side to the decoder side the attention block needs to be more expressive Following the recent development of designing attention architectures we propose DenseAtt as the dense attention block which serves for the dense connection between the encoder and the decoder side More specifically two options are proposed accordingly For each 
decoding step in the corresponding decoder layer the two options both calculate attention using multiple encoder layers The first option is more compressed while the second option is more expressive and flexible We name them as DenseAtt1 and DenseAtt2 respectively Figure FIGREF15 shows the architecture of a multistep attention BIBREF2 b DenseAtt1 and c DenseAtt2 in order In general a popular multiplicative attention module can be written as DISPLAYFORM0 where INLINEFORM0 represent query key value respectively We will use this function INLINEFORM1 in the following descriptions In the decoding phase we use a layerwise attention mechanism such that each decoder layer absorbs different attention information to adjust its output Instead of treating the last hidden layer as the encoders output we treat the concatenation of all hidden layers from encoder side as the output The decoder layer multiplies with the encoder output to obtain the attention weights which is then multiplied by a linear combination of the encoder output and the sentence embedding The attention output of each layer INLINEFORM0 can be formally written as DISPLAYFORM0 where INLINEFORM0 is the multiplicative attention function INLINEFORM1 is a concatenation operation that combines all features and INLINEFORM2 is a linear transformation function that maps each variable to a fixed dimension in order to calculate the attention value Notice that we explicitly write the INLINEFORM3 term in EQREF19 to keep consistent with the multistep attention mechanism as pictorially shown in Figure FIGREF15 a Notice that the transformation INLINEFORM0 in DenseAtt1 forces the encoder layers to be mixed before doing attention Since we use multiple hidden layers from the encoder side to get an attention value we can alternatively calculate multiple attention values before concatenating them In another word the decoder layer can get different attention values from different encoder layers This can be formally expressed as DISPLAYFORM0 where the only difference from Eq EQREF19 is that the concatenation operation is substituted by a summation operation and is put after the attention function INLINEFORM0 This method further increases the representation power in the attention block while maintaining the same number of parameters in the model Since the number of features fed into nonlinear operation is accumulated along the path the parameter size increases accordingly For example for the INLINEFORM0 th encoder layer the input dimension of features is INLINEFORM1 where INLINEFORM2 is the feature dimension in previous layers INLINEFORM3 is the embedding size In order to avoid the calculation bottleneck for later layers due to large INLINEFORM4 we introduce the summary layer for deeper models It summarizes the features for all previous layers and projects back to the embedding size so that later layers of both the encoder and the decoder side do not need to look back further The summary layers can be considered as contextualized word vectors in a given sentence BIBREF12 We add one summary layer after every INLINEFORM5 layers where INLINEFORM6 is the hyperparameter we introduce Accordingly the input dimension of features is at most INLINEFORM7 for the last layer of the encoder Moreover combined with the summary layer setting our DenseAtt mechanism allows each decoder layer to calculate the attention value focusing on the last few encoder layers which consists of the last contextual embedding layer and several dense connected layers with low dimension In 
practice we set INLINEFORM8 as 5 or 6 Figure FIGREF9 and Figure FIGREF13 show the difference of information flow compared with a residualbased encoderdecoder For residualbased models each layer can absorb a single highdimensional vector from its previous layer as the only information while for DenseNMT each layer can utilize several lowdimensional vectors from its previous layers and a highdimensional vector from the first layer embedding layer as its information In DenseNMT each layer directly provides information to its later layers Therefore the structure allows feature reuse and encourages upper layers to focus on creating new features Furthermore the attention block allows the embedding vectors as well as other hidden layers to guide the decoders generation more directly therefore during backpropagation the gradient information can be passed directly to all encoder layers simultaneously We use three datasets for our experiments IWSLT14 GermanEnglish TurkishEnglish and WMT14 EnglishGerman We preprocess the IWSLT14 GermanEnglish dataset following bytepairencoding BPE method BIBREF13 We learn 25k BPE codes using the joint corpus of source and target languages We randomly select 7k from IWSLT14 GermanEnglish as the development set and the test set is a concatenation of dev2010 tst2010 tst2011 and tst2012 which is widely used in prior works BIBREF14 BIBREF15 BIBREF16 For the TurkishEnglish translation task we use the data provided by IWSLT14 BIBREF17 and the SETimes corpus BIBREF17 following BIBREF18 After removing sentence pairs with length ratio over 9 we obtain 360k sentence pairs Since there is little commonality between the two languages we learn 30k size BPE codes separately for Turkish and English In addition to this we give another preprocessing for Turkish sentences and use wordlevel English corpus For Turkish sentences following BIBREF19 BIBREF18 we use the morphology tool Zemberek with disambiguation by the morphological analysis BIBREF20 and removal of nonsurface tokens Following BIBREF18 we concatenate tst2011 tst2012 tst2013 tst2014 as our test set We concatenate dev2010 and tst2010 as the development set We preprocess the WMT14 EnglishGerman dataset using a BPE code size of 40k We use the concatenation of newstest2013 and newstest2012 as the development set As the baseline model BASE4L for IWSLT14 GermanEnglish and TurkishEnglish we use a 4layer encoder 4layer decoder residualconnected model with embedding and hidden size set as 256 by default As a comparison we design a densely connected model with same number of layers but the hidden size is set as 128 in order to keep the model size consistent The models adopting DenseAtt1 DenseAtt2 are named as DenseNMT4L1 and DenseNMT4L2 respectively In order to check the effect of dense connections on deeper models we also construct a series of 8layer models We set the hidden number to be 192 such that both 4layer models and 8layer models have similar number of parameters For dense structured models we set the dimension of hidden states to be 96 Since NMT model usually allocates a large proportion of its parameters to the sourcetarget sentence embedding and softmax matrix we explore in our experiments to what extent decreasing the dimensions of the three parts would harm the BLEU score We change the dimensions of the source embedding the target embedding as well as the softmax matrix simultaneously to smaller values and then project each word back to the original embedding dimension through a linear transformation This significantly 
reduces the number of total parameters while not influencing the upper layer structure of the model We also introduce three additional models we use for ablation study all using 4layer structure Based on the residual connected BASE4L model 1 DenseENC4L only makes encoder side dense 2 DenseDEC4L only makes decoder side dense and 3 DenseAtt4L only makes the attention dense using DenseAtt2 There is no summary layer in the models and both DenseENC4L and DenseDEC4L use hidden size 128 Again by reducing the hidden size we ensure that different 4layer models have similar model sizes Our design for the WMT14 EnglishGerman model follows the best performance model provided in BIBREF2 The construction of our model is straightforward our 15layer model DenseNMTEnDe15 uses dense connection with DenseAtt2 INLINEFORM0 The hidden number in each layer is INLINEFORM1 that of the original model while the kernel size maintains the same We use Nesterov Accelerated Gradient NAG BIBREF21 as our optimizer and the initial learning rate is set to INLINEFORM0 For GermanEnglish and TurkishEnglish experiments the learning rate will shrink by 10 every time the validation loss increases For the EnglishGerman dataset in consistent with BIBREF2 the learning rate will shrink by 10 every epoch since the first increment of validation loss The system stops training until the learning rate is less than INLINEFORM1 All models are trained endtoend without any warmstart techniques We set our batch size for the WMT14 EnglishGerman dataset to be 48 and additionally tune the length penalty parameter in consistent with BIBREF2 For other datasets we set batch size to be 32 During inference we use a beam size of 5 We first show that DenseNMT helps information flow more efficiently by presenting the training loss curve All hyperparameters are fixed in each plot only the models are different In Figure FIGREF30 the loss curves for both training and dev sets before entering the finetuning period are provided for DeEn TrEn and TrEnmorph For clarity we compare DenseNMT4L2 with BASE4L We observe that DenseNMT models are consistently better than residualconnected models since their loss curves are always below those of the baseline models The effect is more obvious on the WMT14 EnglishGerman dataset We rerun the best model provided by BIBREF2 and compare with our model In Figure FIGREF33 where traintest loss curve are provided DenseNMTEnDe15 reaches the same level of loss and starts finetuning validation loss starts to increase at epoch 13 which is 35 faster than the baseline Adding dense connections changes the architecture and would slightly influence training speed For the WMT14 EnDe experiments the computing time for both DenseNMT and the baseline with similar number of parameters and same batch size tested on single M40 GPU card are 1571 and 1710 words respectively While adding dense connections influences the periteration training slightly 81 reduction of speed it uses many fewer epochs and achieves a better BLEU score In terms of training time DenseNMT uses 293before finetuning229total less time than the baseline Table TABREF32 shows the results for DeEn TrEn TrEnmorph datasets where the best accuracy for models with the same depth and of similar sizes are marked in boldface In almost all genres DenseNMT models are significantly better than the baselines With embedding size 256 where all models achieve their best scores DenseNMT outperforms baselines by 0710 BLEU on DeEn 0513 BLEU on TrEn 0815 BLEU on TrEnmorph We observe significant 
gain using other embedding sizes as well Furthermore in Table TABREF36 we investigate DenseNMT models through ablation study In order to make the comparison fair six models listed have roughly the same number of parameters On DeEn TrEn and TrEnmorph we see improvement by making the encoder dense making the decoder dense and making the attention dense Fully denseconnected model DenseNMT4L1 further improves the translation accuracy By allowing more flexibility in dense attention DenseNMT4L2 provides the highest BLEU scores for all three experiments From the experiments we have seen that enlarging the information flow in the attention block benefits the models The dense attention block provides multilayer information transmission from the encoder to the decoder and to the output as well Meanwhile as shown by the ablation study the denseconnected encoder and decoder both give more powerful representations than the residualconnected counterparts As a result the integration of the three parts improve the accuracy significantly From Table TABREF32 we also observe that DenseNMT performs better with small embedding sizes compared to residualconnected models with regular embedding size For example on TrEn model the 8layer DenseNMT8L2 model with embedding size 64 matches the BLEU score of the 8layer BASE model with embedding size 256 while the number of parameter of the former one is only INLINEFORM0 of the later one In all genres DenseNMT model with embedding size 128 is comparable or even better than the baseline model with embedding size 256 While overlarge embedding sizes hurt accuracy because of overfitting issues smaller sizes are not preferable because of insufficient representation power However our dense models show that with better model design the embedding information can be well concentrated on fewer dimensions eg 64 This is extremely helpful when building models on mobile and small devices where the model size is critical While there are other works that stress the efficiency issue by using techniques such as separable convolution BIBREF3 and shared embedding BIBREF4 our DenseNMT framework is orthogonal to those approaches We believe that other techniques would produce more efficient models through combining with our DenseNMT framework For the IWSLT14 GermanEnglish dataset we compare with the best results reported from literatures To be consistent with prior works we also provide results using our model directly on the dataset without BPE preprocessing As shown in Table TABREF39 DenseNMT outperforms the phrasestructure based network NPMT BIBREF16 with beam size 10 by 12 BLEU using a smaller beam size and outperforms the actorcritic method based algorithm BIBREF15 by 28 BLEU For reference our model trained on the BPE preprocessed dataset achieves 3226 BLEU which is 193 BLEU higher than our wordbased model For TurkishEnglish task we compare with BIBREF19 which uses the same morphology preprocessing as our TrEnmorph As shown in Table TABREF37 our baseline is higher than the previous result and we further achieve new benchmark result with 2436 BLEU average score For WMT14 EnglishGerman from Table TABREF41 we can see that DenseNMT outperforms ConvS2S model by 036 BLEU score using 35 fewer training iterations and 20 fewer parameters We also compare with another convolution based NMT model SliceNet BIBREF3 which explores depthwise separable convolution architectures SliceNetFull matches our result and SliceNetSuper outperforms by 058 BLEU score However both models have 22x more parameters than 
our model We expect the DenseNMT structure to help improve their performance as well In this work we have proposed DenseNMT a densely connected framework for translation tasks which uses the information from embeddings more efficiently and passes abundant information from the encoder side to the decoder side Our experiments have shown that DenseNMT speeds up the information flow and improves translation accuracy For future work we will combine dense connections with other deep architectures such as RNNs BIBREF7 and selfattention networks BIBREF4 </s>
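To make the dense connectivity described above concrete, the following is a minimal PyTorch-style sketch of a densely connected convolutional encoder in the spirit of DenseNMT. It is an illustrative reconstruction, not the authors' implementation: the class names, the Conv1d-plus-GLU transformation, and the sizes (embedding 256, per-layer growth 128, 4 layers, kernel size 3, loosely matching the 4-layer configuration reported in the paper) are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseConvEncoderLayer(nn.Module):
    """One encoder layer: reads the concatenation of all features produced so far
    and emits a small number of new feature channels ("growth")."""
    def __init__(self, in_channels, growth, kernel_size=3):
        super().__init__()
        # GLU halves the channel dimension, so the convolution emits 2 * growth channels.
        self.conv = nn.Conv1d(in_channels, 2 * growth, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                    # x: (batch, in_channels, seq_len)
        return F.glu(self.conv(x), dim=1)    # -> (batch, growth, seq_len)

class DenseConvEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, growth=128, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.layers = nn.ModuleList()
        channels = emb_dim
        for _ in range(num_layers):
            self.layers.append(DenseConvEncoderLayer(channels, growth))
            channels += growth               # later layers see all earlier features
        self.out_channels = channels

    def forward(self, tokens):               # tokens: (batch, seq_len)
        feats = [self.embed(tokens).transpose(1, 2)]      # embedding acts as layer 0
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # dense connection
        # The decoder-side attention can attend over this concatenation of all layers.
        return torch.cat(feats, dim=1)       # (batch, out_channels, seq_len)

encoder = DenseConvEncoder(vocab_size=1000)
src = torch.randint(0, 1000, (2, 7))         # a toy batch of token ids
print(encoder(src).shape)                    # torch.Size([2, 768, 7]) = 256 + 4 * 128
```

The point of the pattern is visible in the final concatenation: each layer contributes a low-dimensional block of features, and the decoder layers, the DenseAtt attention block, and the output projection can read the embedding plus every intermediate layer rather than only the top one; summary layers would periodically project this growing concatenation back down to the embedding size, as described in the paper.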
<s>[INST] Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text [/INST] Lists are extremely common in text and speech and the ordering of items in a list can often reveal information For instance orderings can denote relative importance such as on a todo list or signal status as is the case for author lists of scholarly publications In other cases orderings might come from cultural or historical conventions For example red white and blue is a specific ordering of colors that is recognizable to those familiar with American culture The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century By far the most frequently studied aspect of list ordering is the binomial a list of two words usually separated by a conjunction such as and or or which is the focus of our paper The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0 who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words In the case of a binomial consisting of a monosyllable and a disyllable the prediction was that the monosyllable would appear first followed by the conjunction and The idea was that this would give a much more standard and familiar syllable stress to the overall phrase eg the binomial bread and butter would have the preferable rhythm compared to butter and bread This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering Binomials like this that appear in strictly one order perhaps within the confines of some text corpus are commonly termed frozen binomials BIBREF1 BIBREF2 Examples of frozen binomials include salt and pepper and pros and cons and explanations for their ordering in English and other languages have become increasingly complex Early work focused almost exclusively on common frozen binomials often drawn from everyday speech More recent work has expanded this view to include nearly frozen binomials binomials from large data sets such as books and binomials of particular types such as food names and descriptors BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 Additionally explanations have increasingly focused on meaning rather than just sound implying value systems inherent to the speaker or the culture of the languages speakers one such example is that men are usually listed before women in English BIBREF9 The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3 Thus it is possible that understanding these orderings could reveal biases or values held by the speaker Overall this prior research has largely been confined to pristine examples often relying on small samples of lists to form conclusions Many early studies simply drew a small sample of what the authors considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10 BIBREF1 BIBREF11 BIBREF0 BIBREF12 BIBREF13 BIBREF14 BIBREF15 BIBREF3 Other researchers have used books or news articles BIBREF2 BIBREF4 or small samples from the Web web search results and Google books BIBREF5 Many of these have lacked a largescale text corpus and have relied on a focused set of statistics about word orderings Thus despite the long history of this line of inquiry there is an 
opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people Such an analysis could produce at least two types of benefits First such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time Second such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context The present work Binomials in largescale online text In this work we use data from largescale Internet text corpora to study binomials at a massive scale drawing on text created by millions of users Our approach is more wholesale than prior work we focus on all binomials of sufficient frequency without first restricting to small samples of binomials that might be frozen We draw our data from news publications wine reviews and Reddit which in addition to large volume also let us characterize binomials in new ways and analyze differences in binomial orderings across communities and over time Furthermore the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names a key setting for word ordering which has been difficult to study by other means We begin our analysis by introducing several new key measures for the study of binomials including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering By looking at the distribution of asymmetries across a wide range of binomials we find that most binomials are not frozen barring a few strong exceptions At the same time there may still be an ordering preference For example 10 and 20 is not a frozen binomial instead the binomial ordering 10 and 20 appears 60 of the time and 20 and 10 appears 40 of time We also address temporal and community structure in collections of binomials While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5 BIBREF10 BIBREF1 BIBREF13 BIBREF14 BIBREF15 there has been little analysis of this change We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time Using subreddits as communities these metrics reveal variations in orderings some of which suggest cultural change influencing language For example in one community we find that over a period of 10 years the binomial son and daughter went from nearly frozen to appearing in that order only 64 of the time While these changes do happen they are generally quite rare Most binomials frozen or not are ordered in one way about the same percentage of the time regardless of community or the year We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time if binomial orderings were randomly ordered according to global asymmetry values We find that there is less variation across time and communities in the data compared to this model implying that binomial orderings are indeed remarkably stable Given this stability one might expect that the dominant ordinality of a given binomial is still predictable even if the binomial is not frozen For example one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases However we find that these simple predictors are quite poor at determining binomial ordering 
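As a concrete reference point for the measures and baselines just described, here is a small self-contained Python sketch. The counts are toy values (including the 10-and-20 example above), not real Reddit data; ordinality and asymmetry are computed as formalized later in the paper (ordinality relative to alphabetical order, asymmetry as twice the distance of ordinality from 0.5), and the shorter-word-first rule stands in for the simple length-, syllable-, and frequency-style predictors discussed here.

```python
from collections import Counter

# toy ordered-pair counts: counts[(x, y)] = times the list "x and y" was observed
counts = Counter({("cat", "dog"): 40, ("dog", "cat"): 10,
                  ("10", "20"): 60, ("20", "10"): 40})

def ordinality(x, y):
    """Fraction of occurrences of {x, y} that appear in alphabetical order."""
    a, b = sorted((x, y))
    return counts[(a, b)] / (counts[(a, b)] + counts[(b, a)])

def asymmetry(x, y):
    """0 for an even 50/50 split, 1 for a frozen binomial."""
    return 2 * abs(ordinality(x, y) - 0.5)

def shorter_first_prediction(x, y):
    """Naive baseline: predict that the shorter word comes first (alphabetical tie-break)."""
    return (x, y) if (len(x), x) <= (len(y), y) else (y, x)

def dominant_order(x, y):
    """The more frequent of the two observed orientations."""
    return (x, y) if counts[(x, y)] >= counts[(y, x)] else (y, x)

for x, y in sorted({tuple(sorted(p)) for p in counts}):
    correct = shorter_first_prediction(x, y) == dominant_order(x, y)
    print(x, y, round(ordinality(x, y), 2), round(asymmetry(x, y), 2), correct)
```

In the paper these per-binomial judgments are aggregated into unweighted and weighted token-level and type-level accuracies, and the corresponding scores for word length, syllable count, frequency, and alphabetical order all land close to random guessing.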
On the other hand we find that a notion of proximity is robust at predicting ordering in some cases Here the idea is that the person producing the text will list the word that is conceptually closer to them first a phenomenon related to a Me First principle of binomial orderings suggested by Cooper and Ross BIBREF3 One way in which we study this notion of proximity is through sports team subreddits For example we find that when two NBA team names form a binomial on a specific teams subreddit the team that is the subject of the subreddit tends to appear first The other source of improved predictions comes from using word embeddings BIBREF16 we find that a model based on the positions of words in a standard pretrained word embedding can be a remarkably reliable predictor of binomial orderings While not applicable to all words such as names this type of model is strongly predictive in most cases Since binomial orderings are in general difficult to predict individually we explore a new way of representing the global binomial ordering structure we form a directed graph where an edge from i to j means that i tends to come before j in binomials These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities For instance in a graph formed from the binomials in a sports community the names of sports teams and cities are closely clustered showing that they are often used together in binomials Similarly we identify clusters of names numbers and years The presence of cycles in these graphs are also informative For example cycles are rare in graphs formed from proper names in politics suggesting a possible hierarchy of names and at the same time very common for other binomials This suggests that no such hierarchy exists for most of the English language further complicating attempts to predict binomial order Finally we expand our work to include multinomials which are lists of more than two words There already appears to be more structure in trinomials lists of three compared to binomials Trinomials are likely to appear in exactly one order and when they appear in more than one order the last word is almost always the same across all instances For instance in one section of our Reddit data Fraud Waste and Abuse appears 34 times and Waste Fraud and Abuse appears 20 times This could point to for example recency principles being more important in lists of three than in lists of two While multinomials were in principle part of the scope of past research in this area they were difficult to study in smaller corpora suggesting another benefit of working at our current scale Interest in list orderings spans the last century BIBREF10 BIBREF1 with a focus almost exclusively on binomials This research has primarily investigated frozen binomials also called irreversible binomials fixed coordinates and fixed conjuncts BIBREF11 although some work has also looked at noncoordinate freezes where the individual words are nonsensical by themselves eg dribs and drabs BIBREF11 One study has directly addressed mostly frozen binomials BIBREF5 and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order Early research investigated languages other than English BIBREF1 BIBREF10 but most recent research has worked almost exclusively with English Overall this prior research can be separated into three basic categories phonological rules semantic rules and metadata rules Phonology The earliest 
research on binomial orderings proposed mostly phonological explanations particularly rhythm BIBREF0 BIBREF12 Another highly supported proposal is Paninis Law which claims that words with fewer syllables come first BIBREF17 we find only very mild preference for this type of ordering Cooper and Rosss work expands these to a large list of rules many overlapping and suggests that they can compound BIBREF3 a number of subsequent papers have expanded on their work BIBREF11 BIBREF15 BIBREF9 BIBREF17 Semantics There have also been a number of semantic explanations mostly in the form of categorical tendencies such as desirable before undesirable that may have cultural differences BIBREF10 BIBREF1 The most influential of these may be the Me First principle codified by Cooper and Ross This suggests that the first word of a binomial tends to follow a hierarchy that favors here now present generation adult male and positive Additional hierarchies also include a hierarchy of food plants vs animals etc BIBREF3 Frequency More recently it has been proposed that the more cognitively accessible word might come first which often means the word the author sees or uses most frequently BIBREF18 There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13 BIBREF4 We find that frequency is in general a poor predictor of word ordering Combinations Given the number of theories there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4 BIBREF5 This research has complemented the proposals of Cooper and Ross BIBREF3 These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5 Names Work on the orderings of names has been dominated by a single phenomenon mens names usually come before womens names Explanations range from a power differential to men being more agentic within Me First to mens names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8 BIBREF5 BIBREF18 BIBREF3 BIBREF13 BIBREF9 BIBREF19 BIBREF6 However it has also been demonstrated that this preference may be affected by the authors own gender and relationship with the people named BIBREF6 BIBREF19 as well as context more generally BIBREF20 Orderings on the Web List orderings have also been explored in other Web data specifically on the ordering of tags applied to images BIBREF21 There is evidence that these tags are ordered intentionally by users and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21 Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22 We take our data mostly from Reddit a large social media website divided into subcommunities called subreddits or subs Each subreddit has a theme usually clearly expressed in its name and we have focused our study on subreddits primarily in sports and politics in part because of the richness of proper names in these domains rnba rnfl rpolitics rConservative rLibertarian rTheDonald rfood along with a variety of NBA team subreddits eg rrockets for the Houston Rockets Apart from the teamspecific and food subreddits these are among the largest and most heavily used subreddits BIBREF23 We gather text data from comments made by users in discussion threads In all cases we have data from when the subreddit 
started until mid2018 Data was contributed by Cristian DanescuNiculescuMizil Reddit in general and the subreddits we examined in particular are rapidly growing both in terms of number of users and number of comments Some of the subreddits we looked at particularly sports subreddits exhibited very distinctive seasons where commenting spikes Fig FIGREF2 These align with eg the season of the given sport When studying data across time our convention is to bin the data by year but we adjust the starting point of a year based on these seasons Specifically a year starts in May for rnfl August for rnba and February for all politics subreddits We use two methods to identify lists from user comments All Words and Names Only with the latter focusing on proper names In both cases we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined see Table TABREF3 for summary statistics The All Words method simply searches for two words A and B separated by and or or where a word is merely a series of characters separated by a space or punctuation This process only captures lists of length two or binomials We then filter out lists containing words from a collection of stopwords that by their grammatical role or formatting structure are almost exclusively involved in false positive lists No metadata is captured for these lists beyond the month and year of posting The Names Only method uses a curated list of full names relevant to the subreddit focusing on sports and politics For sports we collected names of all NBA and NFL player active during 19802019 from basketballreferencecom and profootballreferencecom For politics we collected the names of congresspeople from the unitedstates project BIBREF24 To form lists we search for any combination of any part of these names such that at least two partial names are separated by and or vs vs or and the rest are separated by While we included a variety of separators about 83 of lists include only and about 17 include or and the rest of the separators are negligible Most lists that we retrieve in this way are of length 2 but we also found lists up to length 40 Fig FIGREF5 Finally we also captured full metadata for these lists including a timestamp the user any flairs attributed to the user short custom text that appears next to the username and other information We additionally used wine reviews and a variety of news paper articles for additional analysis The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25 While not specifically dated the reviews were scraped between June and November of 2017 There are 20 different reviewers included but the amount of reviews each has ranges from tens to thousands The news data consists of news articles pulled from a variety of sources including in random order the New York Times Breitbart CNN the Atlantic Buzzfeed News National Review New York Post NPR Reuters and the Washington Post The articles are primarily from 2016 and early 2017 with a few from 2015 The articles are scraped from homepage headline and RSS feeds BIBREF26 Metadata was limited for both of these data sets In this paper we introduce a new framework to interpret binomials based on three properties asymmetry how frozen a binomial is movement how binomial orderings change over time and agreement how consistent binomial orderings are between communities which we will visualize as a cube with three dimensions Again prior work has focused essentially entirely 
on asymmetry and we argue that this can only really be understood in the context of the other two dimensions For this paper we will use the convention AB to refer to an unordered pair of words and AB to refer to an ordered pair where A comes before B We say that AB and BA are the two possible orientations of AB Previous work has one main measure of binomials their frozenness A binomial is frozen if it always appears with a particular order For example if the pair arrow bow always occurs as bow arrow and never as arrow bow then it is frozen This leaves open the question of how describe the large number of binomials that are not frozen To address this point we instead consider the ordinality of a list or how often the list is in order according to some arbitrary underlying reference order Unless otherwise specified the underlying order is assumed to be alphabetical If the list cat dog appears 40 times and the list dog cat 10 times then the list cat dog would have an ordinality of 08 Let nxy be the number of times the ordered list xy appears and let fxy nxy nxy nyx be the fraction of times that the unordered version of the list appears in that order We formalize ordinality as follows Ordinality Given an ordering on words by default we assume alphabetical ordering the ordinality oxy of the pair lbrace xyrbrace is equal to fxy if x y and fyx otherwise Similarly we introduce the concept of asymmetry in the context of binomials which is how often the word appears in its dominant order In our framework a frozen list is one with ordinality 0 or 1 and would be considered a high asymmetry list with asymmetry of 1 A list that appears as A B half of the time and B A half of the time or with ordinality 05 would be considered a low asymmetry list with asymmetry of 0 Asymmetry The asymmetry of an unordered list lbrace xyrbrace is Axy 2 cdot vert oxy 05 vert The Reddit data described above gives us access to new dimensions of binomials not previously addressed We define movement as how the ordinality of a list changes over time Movement Let oxyt be the ordinality of an unordered list lbrace xyrbrace for data in year t in T The movement of lbrace xyrbrace is Mxy max t in T oxyt min t in T oxyt And agreement describes how the ordinality of a list differs between different communities Agreement Let oxyc be the ordinality of an unordered list xy for data in community subreddit c in C The agreement of lbrace xyrbrace is Axy 1 max c in C oxyc min c in C oxyc Let the point AMGxy be a vector of the asymmetry movement and agreement for some unordered list lbrace xyrbrace These vectors then define a 3dimensional space in which each list occupies a point Since our measures for asymmetry agreement and movement are all defined from 0 to 1 their domains form a unit cube Fig FIGREF8 The corners of this cube correspond to points with coordinates are entirely made up of 0s or 1s By examining points near the corners of this cube we can get a better understanding of the range of binomials Some corners are natural it is easy to imagine a high asymmetry low movement high agreement binomial such as arrow bow from earlier On the other hand we have found no good examples of a high asymmetry low movement low agreement binomial There are a few unusual examples such as 10 20 which has 04 asymmetry 02 movement and 01 agreement and is clearly visible as an isolated point in Fig FIGREF8 Asymmetry While a majority of binomials have low asymmetry almost all previous work has focused exclusively on highasymmetry binomials In fact 
asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials Fig FIGREF9 This implies that previous work has overlooked the vast majority of binomials and an investigation into whether rules proposed for highly asymmetric binomials also functions for other binomials is a core piece of our analysis Movement The vast majority of binomials have low movement However the exceptions to this can be very informative Within rnba a few of these pairs show clear change in linguistics andor culture The binomial rpm vorp a pair of basketball statistics started at 074 ordinality and within three years dropped to 032 ordinality showing a potential change in users representation of how these statistics relate to each other In rpolitics daughter son moved from 007 ordinality to 036 ordinality over ten years This may represent a cultural shift in how users refer to children or a shift in topics discussed relating to children And in rpolitics dems obama went from 075 ordinality to 043 ordinality from 20092018 potentially reflecting changes in Obamas role as a defining feature of the Democratic Party Meanwhile the ratio of unigram frequency of dems to obama actually increased from 10 to 20 from 2010 to 2017 Similarly fdr lincoln moved from 049 ordinality to 017 ordinality from 20152018 This is particularly interesting since in 2016 fdr had a unigram frequency 20 higher than lincoln but in 2017 they are almost the same This suggests that movement could be unrelated to unigram frequency changes Note also that the covariance for movement across subreddits is quite low TABREF10 and movement in one subreddit is not necessarily reflected by movement in another Agreement Most binomials have high agreement Table TABREF11 but again the counterexamples are informative For instance score kick has ordinality of 0921 in rnba and 0204 in rnfl This likely points to the fact that American football includes field goals A less obvious example is the list ceiling floor In rnba and rnfl it has ordinality 044 and in rpolitics it has ordinality 027 There are also differences among proper nouns One example is france israel which has ordinality 06 in rpolitics 016 in rLibertarian and 051 in rTheDonald and the list does not appear in rConservative And the list romney trump has ordinality 048 in rpoltics 055 in rTheDonald and 073 in rConservative In this section we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way With this we would expect to see variation in binomial asymmetry We find that our data shows smaller variation than this null model predicts suggesting that binomial orderings are extremely stable across communities and time From this we might also expect that orderings are predictable but we find that standard predictors in fact have limited success Recall that the asymmetry of binomials with respect to alphabetic order excluding frozen binomials is roughly normal centered around 05 Fig FIGREF9 One way of seeing this type of distribution would be if binomials are ordered randomly with p05 for each order In this case if each instance l of a binomial lbrace xyrbrace takes value 0 nonalphabetical ordering or 1 alphabetical ordering then l sim textBernoulli05 If lbrace xyrbrace appears n times then the number of instances of value 1 is distributed by W sim textBinn 05 and W n is approximately normally distributed with mean 05 One way to test this behavior is to first estimate p for 
each list within each community If the differences in these estimates are not normal then the above model is incorrect We first omit frozen binomials before any analysis Let L be a set of unordered lists and C be a set of communities We estimate p for list l in L in community c in C by hatplc olc the ordinality of l in C Next for all l in L let pl max c in Chatpl c min c in Chatpl c The distribution of pl over l in L has median 0 mean 00145 and standard deviation 00344 We can perform a similar analysis over time Define Y as our set of years and hatpl y oly for y in Y our estimates The distribution of pprime l max y in Yhatpl y min y in Yhatpl y over l in L has median 00216 mean 00685 and standard deviation 00856 The fact that p varies very little across both time and communities suggests that there is some pl for each l in L that is consistent across time and communities which is not the case in the null model where these values would be normally distributed We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years Specifically let ol c y be the ordinality of list l in community c and year y Ol be the set of olcy for a given list l and sl be the standard deviation of Ol Finally let bars be the average of the sl We resample data by randomizing the order of each binomial instance sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities pl We repeated this process to get samples estimates lbrace bars1 ldots barskrbrace where k is the size of the set of seasons and communities These averages range from 00277 to 00278 and are approximately normally distributed each is a mean over an approximately normal scaled Binomial random variable However bars 00253 for our nonrandomized data This is significantly smaller than the randomized data and implies that the true variation in pl across time and communities is even smaller than a binomial distribution would predict One possible explanation for this is that each instance of l is not actually independent but is in fact anticorrelated violating one of the conditions of the binomial distribution An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1 but it is an open question what the true model is and why the variation is so low Regardless it is clear that the orientation of binomials varies very little across years and communities Fig FIGREF13 Given the stability of binomials within our data we now try to predict their ordering We consider deterministic or rulebased methods that predict the order for a given binomial We use two classes of evaluation measures for success on this task i by token judging each instance of a binomial separately and ii by type judging all instances of a particular binomial together We further characterize these into weighted and unweighted To formalize these notions first consider any unordered list lbrace xyrbrace that appears nxy times in the orientation xy and nyx times in the orientation yx Since we can only guess one order we will have either nxy or nyx successful guesses for lbrace xyrbrace when guessing by token The unweighted token score UO and weighted token score WO are the macro and micro averages of this accuracy If predicting by type let S be the lists such that the bytoken prediction is successful at least half of the time Then the unweighted type score UT and weighted type score WT are the 
macro and micro averages of S Basic Features We first use predictors based on rules that have previously been proposed in the literature word length number of phonemes number of syllables alphabetical order and frequency We collect all binomials but make predictions only on binomials appearing at least 30 times total stratified by subreddit However none of these features appear to be particularly predictive across the board Table TABREF15 A simple linear regression model predicts close to random which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B and A is monosyllabic and B is disyllabic then A comes before B BIBREF0 Within rpolitics we gathered an estimate of number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 Tables TABREF16 and TABREF17 In a weak sense Jespersen was correct that monosyllabic words come before disyllabic words more often than not and more generally shorter words come before longer words more often than not However as predictors these principles are close to random guessing Paired Predictions Another measure of predictive power is predicting which of two binomials has higher asymmetry In this case we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures we use the top1000 and bottom1000 binomials in terms of asymmetry for these tasks For instance we may predict that red turquoise is more asymmetric than red blue because the differences in lengths is more extreme Overall the basic predictors from the literature are not very successful Table TABREF18 Word Embeddings If we turn to more modern approaches to text analysis one of the most common is word embeddings BIBREF16 Word embeddings assign a vector xi to each word i in the corpus such that the relative position of these vectors in space encode information lingustically relevant relationships among the words Using the Google News word embeddings via a simple logistic model we produce a vector v and predict the ordering of a binomial on words i and j from v cdot xi xj In this sense v can be thought of as a sweepline direction through the space containing the word vectors such that the ordering along this sweepline is the predicted ordering of all binomials in the corpus This yields surprisingly accurate results with accuracy ranging from 70 to 85 across various subreddits Table TABREF20 and 80100 accuracy on frozen binomials This is by far the best prediction method we tested It is important to note that not all words in our binomials could be associated with an embedding so it was necessary to remove binomials containing words such as names or slang However retesting our basic features on this data set did not show any improvement implying that the drastic change in predictive power is not due to the changed data set Proper nouns and names in particular have been a focus within the literature on frozen binomials BIBREF8 BIBREF5 BIBREF18 BIBREF3 BIBREF13 BIBREF9 BIBREF6 BIBREF19 BIBREF20 BIBREF28 but these studies have largely concentrated on the effect of gender in ordering BIBREF8 BIBREF5 BIBREF18 BIBREF3 BIBREF13 BIBREF9 BIBREF6 BIBREF19 BIBREF20 With Reddit data however we have many conversations about large numbers of celebrities with significant background information on each As such we can investigate proper nouns in three subreddits rnba rnfl and 
rpolitics The names we used are from NBA and NFL players 19702019 and congresspeople pre1800 and 20002019 respectively We also investigated names of entities for which users might feel a strong sense of identification such as a team or political group they support or a subreddit to which they subscribe We hypothesized that the group with which the user identifies the most would come first in binomial orderings Inspired by the Me First Principle we call this the Proximity Principle First we examined names in rnba One advantage of using NBA players is that we have detailed statistics for ever player in every year We tested a number of these statistics and while all of them predicted statistically significant numbers p 1e6 of binomials they were still not very predictive in a practical sense Table TABREF23 The best predictor was actually how often the players team was mentioned Interestingly the unigram frequency number of times the players name was mentioned overall was not a good predictor It is relevant to these observations that some team subreddits and thus presumably fanbases are significantly larger than others Additionally we also investigated lists of names of sports teams and subreddits as proper nouns In this case we exploit an interesting structure of the rnba subreddit which is not evident at scale in other subreddits we examined In addition to rnba there exists a number of subreddits that are affiliated with a particular NBA team with the purpose of allowing discussion between fans of that team This implies that most users in a team subreddit are fans of that team We are then able to look for lists of NBA teams by name city and abbreviation We found 2520 instances of the subreddit team coming first and 1894 instances of the subreddit team coming second While this is not a particularly strong predictor correctly predicting 57 of lists it is one of the strongest we found and a clear illustration of the Proximity Principle We can do a similar calculation with subreddit names by looking between subreddits While the team subreddits are not large enough for this calculation many of the other subreddits are We find that lists of subreddits in rnba that include rnba often start with rnba and a similar result holds for rnfl Table TABREF25 While NBA team subreddits show a fairly strong preference to name themselves first this preference is slightly less strong among sport subreddits and even less strong among politics subreddits One potential factor here is that rpolitics is a more general subreddit while the rest are more specific perhaps akin to rnba and the team subreddits In our case political names are drawn from every congressperson and their nicknames in both houses of the US Congress through the 2018 election It is worth noting that one of these people is Philadelph Van Trump It is presumed that most references to trump refer to Donald Trump There may be additional instances of mistaken identities We restrict the names to only congresspeople that served before 1801 or after 1999 also including trump One might guess that political subreddits refer to politicians of their preferred party first However this was not the case as Republicans are mentioned first only about 4346 of the time in all subreddits Table TABREF27 On the other hand the Proximity Principle does seem to come into play when discussing ideology For instance rpolitics a leftleaning subreddit is more likely to say democrats and republicans while the other political subreddits in our study which are rightleaning are 
more likely to say republicans and democrats Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name We restrict our investigation to names that are not also English words and only names that have a unigram frequency of at least 30 The average ratio is 00535 but there is significant variation across names It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company While Reddit provides a very large corpus of informal text McGuire and McGuire make a distinct separation between informal and formal text BIBREF28 As such we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications Both data sets follow the same basic principles outlined above Wine reviews are a highly stylized form of text In this case reviews are often just a few sentences and they use a specialized vocabulary meant for wine tasting While one might hypothesize that such stylized text exhibits more frozen binomials this is not the case Tab TABREF28 There is some evidence of an additional freezing effect in binomials such as aromas flavors and scents flavors which both are frozen in the wine reviews but are not frozen on Reddit However this does not seem to have a more general effect Additionally there are a number of binomials which appear frozen on Reddit but have low asymmetry in the wine reviews such as lemon lime We focused our analysis on NYT Buzzfeed Reuters CNN the Washington Post NPR Breitbart and the Atlantic Much like in political subreddits one might expect to see a split between various publications based upon ideology However this is not obviously the case While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications Buzzfeed in particular frequently goes against the grain there does not seem to be a sharp divide Individual examples are difficult to draw conclusions from but can suggest trends China Russia is a particularly controversial binomial While the publications vary quite a bit only Breitbart has an ordinality of above 05 In fact country pairs are among the most controversial binomials within the publications eg iraq syria afghanisatan iraq while most other highly controversial binomials reflect other political structures such as house senate migrants refugees and left right That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications Additionally the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with rTheDonald mainly large amounts of quoted text We can discover new structure in binomial orderings by taking a more global view We do this by building directed graphs based on ordinality In these graphs nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order AB at least 50 of the time For our visualizations the size of the node indicates how many distinct lists the word appears inand color indicates how many list instances contain the word in total If we examine the global structure for rnba we can pinpoint a number of patterns Fig FIGREF31 First most nodes within the purple circle correspond to names while most nodes outside of it are not names The cluster of circles in the lower left are a combination of numbers and years where 
dark green corresponds to numbers purple corresponds to years and pink corresponds years represented as twodigit numbers eg 96 On the right the brown circle contains adjectives while above the blue circle contains heights eg 65 and in the two circles in the lower middle the left contains cities while the right contains team names The darkest red node in the center of the graph corresponds to lebron Constructing a similar graph for our wines dataset we can see clusters of words In Fig FIGREF32 the colors represent clusters as formed through modularity These clusters are quite distinct Green nodes mostly refer to the structure or body of a wine red are adjectives describing taste teal and purple are fruits dark green is wine varietals gold is senses and light blue is time eg year decade etc We can also consider the graph as we change the threshold of asymmetry for which an edge is included If the asymmetry is large enough the graph is acyclic and we can consider how small the ordinality threshold must be in order to introduce a cycle These cycles reveal the nonglobal ordering of binomials The graph for rnba begins to show cycles with a threshold asymmetry of 097 Three cycles exist at this threshold ball catch shooter court pass set athleticism and court plays set athleticism Restricting the nodes to be names is also revealing Acyclic graphs in this context suggest a global partial hierarchy of individuals For rnba the graph is no longer acyclic at an asymmetry threshold of 076 with the cycle blake jordan bryant kobe Similarly the graph for rnfl only including names is acyclic until the threshold reaches 073 with cycles tannehill miller jj watt aaron rodgers brady and hoyer savage watson hopkins miller jj watt aaron rodgers brady Figure FIGREF33 shows these graphs for the three political subreddits where the nodes are the 30 most common politician names The graph visualizations immediately show that these communities view politicians differently We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 09 Again this suggests that at least among frozen binomials there is in fact a global partial order of names that might signal hierarchy Including nonnames though causes the rpolitics graph to never be acyclic for any asymmetry threshold since the cycle furious benghazi fast consists of completely frozen binomials We find similar results for rConservative and rLibertarian which are acyclic with thresholds of 058 and 066 respectively Some of these cycles at high asymmetry might be due to English words that are also names eg law but one particularly notable cycle from rConservative is rubio bush obama trump cruz Binomials are the most studied type of list but trinomials lists of three are also common enough in our dataset to analyze Studying trinomials adds new aspects to the set of questions for example while binomials have only two possible orderings trinomials have six possible orderings However very few trinomials show up in all six orderings In fact many trinomials show up in exactly one ordering about 36 of trinomials being completely frozen amongst trinomials appearing at least 30 times in the data To get a baseline comparison we found an equal number of the most common binomials and then subsampled instances of those binomials to equate the number of instances with the trinomials In this case only 21 of binomials are frozen For trinomials that show up in at least two orderings it is most common for the last word to 
keep the same position eg a b c and b a c For example in our data fraud waste abuse appears 34 times and waste fraud abuse appears 20 times This may partially be explained by many lists that contain words such as other whatever or more for instance smarter better more and better smarter more are the only two orderings we observe for this set of three words Additionally each trinomial a b c contains three binomials within it a b b c and a c It is natural to compare orderings of a b in general with orderings of occurrences of a b that lie inside trinomials We use this comparison to define the compatibility of a b as follows Compatibility Let a b be a binomial with dominant ordering a b that is a b is at least as frequent as b a We define the compatibility of a b to be the fraction of instances of a b occurring inside trinomials that have the order ab There are only a few cases where binomials have compatibility less than 05 and for most binomials the asymmetry is remarkably consistent between binomials and trinomials Fig FIGREF37 In general asymmetry is larger than compatibility this occurs for 4569 binomials compared to 3575 where compatibility was greater and 690 where the two values are the same An extreme example is the binomial fairness accuracy which has asymmetry 077 and compatibility 022 It would be natural to consider these questions for tetranomials and longer lists but these are rarer in our data and correspondingly harder to draw conclusions from Analyzing binomial orderings on a large scale has led to surprising results Although most binomials are not frozen in the traditional sense there is little movement in their ordinality across time or communities A list that appears in the order A B 60 of the time in one subreddit in one year is likely to show up as A B very close to 60 of the time in all subreddits in all years This suggests that binomial order should be predictable but there is evidence that this is difficult the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general Given the challenge in predicting orderings we searched for methods or principles that could yield better performance and identified two promising approaches First models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories Second we established the Proximity Principle the proper noun with which a speaker identifies more will tend to come first This is evidenced when commenters refer to their sports team first or politicians refer to their party first Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different We have also found promising results in some special cases We expect that more domainspecific studies will offer rich structure It is a challenge to adapt the long history of work on the question of frozen binomials to the large messy environment of online text and social media However such data sources offer a unique opportunity to reexplore and redefine these questions It seems that binomial orderings offer new insights into language culture and human cognition Understanding what changes in these highly stable conventions mean and whether or not they can be predicted is an interesting avenue for future research The authors thank members of the Cornell AI Policy and Practice Group 
and alphabetically by first name Cristian DanescuNiculescuMizil Ian Lomeli Justine Zhang and Kate Donahue for aid in accessing data and their thoughtful insight This research was supported by NSF Award DMS1830274 ARO Award W911NF1910057 a Simons Investigator Award a Vannevar Bush Faculty Fellowship and ARO MURI </s>
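To make the graph construction concrete, the following is a minimal sketch (not the code used in this study) of how the ordinality digraph and its acyclicity check described above could be built with networkx. It assumes the ordered-pair counts have already been extracted from the corpus; the thresholds, function names, and the toy example are illustrative.

```python
# Sketch of the ordinality digraph: nodes are words; an edge A -> B is added
# when the pair appears in at least `min_lists` lists and the order (A, B)
# occurs in at least `threshold` of those instances.
from collections import Counter
import networkx as nx

def build_ordinality_graph(pair_counts, min_lists=30, threshold=0.5):
    """pair_counts: Counter mapping ordered tuples (a, b) to the number of
    list instances observed in that order (assumed precomputed)."""
    g = nx.DiGraph()
    unordered = {tuple(sorted(k)) for k in pair_counts}
    for a, b in unordered:
        ab, ba = pair_counts[(a, b)], pair_counts[(b, a)]
        total = ab + ba
        if total < min_lists:
            continue
        if ab / total >= threshold:
            g.add_edge(a, b, asymmetry=ab / total)
        if ba / total >= threshold:
            g.add_edge(b, a, asymmetry=ba / total)
    return g

def is_acyclic_at(pair_counts, threshold):
    """Mirror of the threshold sweep described in the text: restrict edges to
    asymmetry >= threshold and test whether the resulting graph is acyclic."""
    g = build_ordinality_graph(pair_counts, threshold=threshold)
    return nx.is_directed_acyclic_graph(g)

# Toy usage with made-up counts:
counts = Counter({("republicans", "democrats"): 70, ("democrats", "republicans"): 30})
g = build_ordinality_graph(counts)
print(list(g.edges(data=True)))
```

Sweeping `threshold` downward with `is_acyclic_at` reproduces the kind of analysis reported above, where the threshold at which the first cycle appears indicates how close the community is to a global partial order.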
<s>[INST] Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism [/INST] Literary critics form interpretations of meaning in works of literature Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 Within natural language processing most previous work that engages with literature relies on distant reading BIBREF1 which involves discovering highlevel patterns from large collections of stories BIBREF2 BIBREF3 We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance concretely we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan Invisible Cities consists of 55 prose poems each of which describes an imaginary city Calvino categorizes these cities into eleven thematic groups that deal with human emotions eg desires memories general objects eyes sky signs and unusual properties continuous hidden thin Many critics argue that Calvinos labels are not meaningful while others believe that there is a distinct thematic separation between the groups including the author himself BIBREF4 The unique structure of this novel each citys description is short and selfcontained Figure FIGREF1 allows us to computationally examine this debate As the book is too small to train any models we leverage recent advances in largescale language modelbased representations BIBREF5 BIBREF6 to compute a representation of each city We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvinos original labels and crowdsourced human judgments While the overall correlation with Calvinos labels is low both computers and humans can reliably identify some thematic groups associated with concrete objects While prior work has computationally analyzed a single book BIBREF7 our work goes beyond simple word frequency or ngram counts by leveraging the power of pretrained language models to engage with literary criticism Admittedly our approach and evaluations are specific to Invisible Cities but we believe that similar analyses of more conventionallystructured novels could become possible as text representation methods improve We also highlight two challenges of applying computational methods to literary criticisms 1 text representation methods are imperfect especially when given writing as complex as Calvinos and 2 evaluation is difficult because there is no consensus among literary critics on a single correct interpretation Before describing our method and results we first review critical opinions on both sides of whether Calvinos thematic groups meaningfully characterize his city descriptions We focus on measuring to what extent computers can recover Calvinos thematic groupings when given just raw text of the city descriptions At a high level our approach Figure FIGREF4 involves 1 computing a vector representation for every city and 2 performing unsupervised clustering of these representations The rest of this section describes both of these steps in more detail While each of the city descriptions is relatively short Calvinos writing is filled with rare words complex syntactic structures and figurative language Capturing the essential components of each city in a single vector is thus not as 
simple as it is with more standard forms of text Nevertheless we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions We experiment with three different pretrained representations ELMo BIBREF5 BERT BIBREF6 and GloVe BIBREF18 To produce a single city embedding we compute the TFIDF weighted elementwise mean of the tokenlevel representations For all pretrained methods we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm Given 55 city representations how do we group them into eleven clusters of five cities each Initially we experimented with a graphbased community detection algorithm that maximizes cluster modularity BIBREF20 but we found no simple way to constrain this method to produce a specific number of equallysized clusters The brute force approach of enumerating all possible cluster assignments is intractable given the large search space INLINEFORM0 possible assignments We devise a simple clustering algorithm to approximate this process First we initialize with random cluster assignments and define cluster strength to be the relative difference between intragroup Euclidean distance and intergroup Euclidean distance Then we iteratively propose random exchanges of memberships only accepting these proposals when the cluster strength increases until convergence To evaluate the quality of the computationallyderived clusters against those of Calvino we measure cluster purity BIBREF21 given a set of predicted clusters INLINEFORM1 and groundtruth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points INLINEFORM4 While the results from the above section allow us to compare our three computational methods against each other we additionally collect human judgments to further ground our results In this section we first describe our human experiment before quantitatively analyzing our results We compare clusters computed on different representations using community purity additionally we compare these computational methods to humans by their accuracy on the oddoneout task City representations computed using language modelbased representation ELMo and BERT achieve significantly higher purity than a clustering induced from random representations indicating that there is at least some meaningful coherence to Calvinos thematic groups first row of Table TABREF11 ELMo representations yield the highest purity among the three methods which is surprising as BERT is a bigger model trained on data from books among other domains Both ELMo and BERT outperform GloVe which intuitively makes sense because the latter do not model the order or structure of the words in each description While the purity of our methods is higher than that of a random clustering it is still far below 1 To provide additional context to these results we now switch to our oddoneout task and compare directly to human performance For each triplet of cities we identify the intruder as the city with the maximum Euclidean distance from the other two Interestingly crowd workers achieve only slightly higher accuracy than ELMo city representations their interannotator agreement is also low which indicates that close reading to analyze literary coherence between multiple texts is a difficult task even for human annotators Overall results from both computational and human approaches suggests that the authorassigned labels are not entirely arbitrary as 
we can reliably recover some of the thematic groups Our quantitative results suggest that while vectorbased city representations capture some thematic similarities there is much room for improvement In this section we first investigate whether the learned clusters provide evidence for any arguments put forth by literary critics on the novel Then we explore possible reasons that the learned clusters deviate from Calvinos Most previous work within the NLP community applies distant reading BIBREF1 to large collections of books focusing on modeling different aspects of narratives such as plots and event sequences BIBREF22 BIBREF23 BIBREF24 BIBREF25 characters BIBREF2 BIBREF26 BIBREF27 BIBREF28 and narrative similarity BIBREF3 In the same vein researchers in computational literary analysis have combined statistical techniques and linguistics theories to perform quantitative analysis on large narrative texts BIBREF29 BIBREF30 BIBREF31 BIBREF32 BIBREF33 but these attempts largely rely on techniques such as word counting topic modeling and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs BIBREF34 While these works discover general patterns from multiple literary works we are the first to use cuttingedge NLP techniques to engage with specific literary criticism about a single narrative There has been other computational work that focuses on just a single book or a small number of books much of it focused on network analysis BIBREF35 extract character social networks from Alice in Wonderland while BIBREF36 recover social networks from 19th century British novels BIBREF37 disentangles multiple narrative threads within the novel Infinite Jest while BIBREF7 provides several automated statistical methods for close reading and test them on the awardwinning novel Cloud Atlas 2004 Compared to this work we push further on modeling the content of the narrative by leveraging pretrained language models Our work takes a first step towards computationally engaging with literary criticism on a single book using stateoftheart text representation methods While we demonstrate that NLP techniques can be used to support literary analyses and obtain new insights they also have clear limitations eg in understanding abstract themes As text representation methods become more powerful we hope that 1 computational tools will become useful for analyzing novels with more conventional structures and 2 literary criticism will be used as a testbed for evaluating representations We thank the anonymous reviewers for their insightful comments Additionally we thank Nader Akoury Garrett Bernstein Chenghao Lv Ari Kobren Kalpesh Krishna Saumya Lal Tu Vu Zhichao Yang Mengxue Zhang and the UMass NLP group for suggestions that improved the papers clarity coverage of related work and analysis experiments </s>
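As an illustration of the constrained clustering and purity evaluation described above, the sketch below shows one way the random-swap procedure could be implemented. The cluster-strength function is our reading of the description (relative difference between intra-group and inter-group Euclidean distance), and the iteration count, seed, and helper names are assumptions rather than the exact setup used.

```python
# Sketch assuming 55 city embeddings of dimension 40, grouped into
# 11 clusters of 5; not the authors' exact implementation.
import numpy as np
from scipy.spatial.distance import pdist, cdist

def cluster_strength(X, labels):
    intra, inter = [], []
    for k in np.unique(labels):
        members, others = X[labels == k], X[labels != k]
        intra.append(pdist(members).mean())        # within-cluster distances
        inter.append(cdist(members, others).mean())  # to all other points
    intra, inter = np.mean(intra), np.mean(inter)
    return (inter - intra) / inter                  # relative difference

def random_swap_cluster(X, n_clusters=11, size=5, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_clusters), size)
    rng.shuffle(labels)
    best = cluster_strength(X, labels)
    for _ in range(iters):
        i, j = rng.choice(len(labels), size=2, replace=False)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]  # propose a swap
        score = cluster_strength(X, labels)
        if score > best:
            best = score                             # accept
        else:
            labels[i], labels[j] = labels[j], labels[i]  # revert
    return labels

def purity(pred, truth):
    # truth: integer-coded ground-truth labels (e.g., Calvino's groups)
    total = 0
    for k in np.unique(pred):
        total += np.bincount(truth[pred == k]).max()
    return total / len(truth)
```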
<s>[INST] Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation [/INST] A Dialogue State Tracker DST is a core component of a modular taskoriented dialogue system BIBREF7 For each dialogue turn a DST module takes a user utterance and the dialogue history as input and outputs a belief estimate of the dialogue state Then a machine action is decided based on the dialogue state according to a dialogue policy module after which a machine response is generated Traditionally a dialogue state consists of a set of requests and joint goals both of which are represented by a set of slotvalue pairs eg request phone area north food Japanese BIBREF8 In a recently proposed multidomain dialogue state tracking dataset MultiWoZ BIBREF9 a representation of dialogue state consists of a hierarchical structure of domain slot and value is proposed This is a more practical scenario since dialogues often include multiple domains simultaneously Many recently proposed DSTs BIBREF2 BIBREF10 are based on predefined ontology lists that specify all possible slot values in advance To generate a distribution over the candidate set previous works often take each of the slotvalue pairs as input for scoring However in realworld scenarios it is often not practical to enumerate all possible slot value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 To tackle this problem a popular direction is to build a fixedlength candidate set that is dynamically updated throughout the dialogue development cpt briefly summaries the inference time complexity of multiple stateoftheart DST models following this direction Since the inference complexity of all of previous model is at least proportional to the number of the slots these models will struggle to scale to multidomain datasets with much larger numbers of predefined slots In this work we formulate the dialogue state tracking task as a sequence generation problem instead of formulating the task as a pairwise prediction problem as in existing work We propose the COnditional MEmory Relation Network COMER a scalable and accurate dialogue state tracker that has a constant inference time complexity Specifically our model consists of an encoderdecoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words f1 shows a multidomain dialogue in which the user wants the system to first help book a train and then reserve a hotel For each turn the DST will need to track the slotvalue pairs eg arrive by 2045 representing the user goals as well as the domain that the slotvalue pairs belongs to eg train hotel Instead of representing the belief state via a hierarchical structure one can also combine the domain and slot together to form a combined slotvalue pair eg train arrive by 2045 where the combined slot is train arrive by which ignores the subordination relationship between the domain and the slots A typical fallacy in dialogue state tracking datasets is that 
they make an assumption that the slot in a belief state can only be mapped to a single value in a dialogue turn We call this the single value assumption Figure 2 shows an example of this fallacy from the WoZ20 dataset Based on the belief state label food seafood it will be impossible for the downstream module in the dialogue system to generate sample responses that return information about Chinese restaurants A correct representation of the belief state could be food seafood chinese This would tell the system to first search the database for information about seafood and then Chinese restaurants The logical operator indicates which retrieved information should have a higher priority to be returned to the user Thus we are interested in building DST modules capable of generating structured sequences since this kind of sequence representation of the value is critical for accurately capturing the belief states of a dialogue Given a dialogue D which consists of T turns of user utterances and system actions our target is to predict the state at each turn Different from previous methods which formulate multilabel state prediction as a collection of binary prediction problems COMER adapts the task into a sequence generation problem via a Seq2Seq framework As shown in f3 COMER consists of three encoders and three hierarchically stacked decoders We propose a novel Conditional Memory Relation Decoder CMRD for sequence decoding Each encoder includes an embedding layer and a BiLSTM The encoders take in the user utterance the previous system actions and the previous belief states at the current turn and encodes them into the embedding space The user encoder and the system encoder use the fixed BERT model as the embedding layer Since the slot value pairs are unordered set elements of a domain in the belief states we first order the sequence of domain according to their frequencies as they appear in the training set BIBREF14 and then order the slot value pairs in the domain according to the slots frequencies of as they appear in a domain After the sorting of the state elements We represent the belief states following the paradigm Domain1 Slot1 Value1 Slot2 Value2 Domain2 Slot1 Value1 for a more concise representation compared with the nested tuple representation All the CMRDs take the same representations from the system encoder user encoder and the belief encoder as part of the input In the procedure of hierarchical sequence generation the first CMRD takes a zero vector for its condition input mathbf c and generates a sequence of the domains D as well as the hidden representation of domains HD For each d in D the second CMRD then takes the corresponding hd as the condition input and generates the slot sequence Sd and representations HSd Then for each s in S the third CMRD generates the value sequence D0 based on the corresponding D1 We update the belief state with the new D2 pairs and perform the procedure iteratively until a dialogue is completed All the CMR decoders share all of their parameters Since our model generates domains and slots instead of taking predefined slots as inputs and the number of domains and slots generated each turn is only related to the complexity of the contents covered in a specific dialogue the inference time complexity of COMER is O1 with respect to the number of predefined slots and values Let X represent a user utterance or system transcript consisting of a sequence of words lbrace w1ldots wTrbrace The encoder first passes the sequence lbrace mathit CLSw1ldots wTmathit 
SEP\rbrace into a pretrained BERT model and obtains its contextual embeddings $E_X$. Specifically, we leverage the output of all layers of BERT and take the average to obtain the contextual embeddings. For each domain-slot that appears in the training set, if it has more than one word (such as price range, leave at, etc.), we feed it into BERT and take the average of the word vectors to form the extra slot embedding $E_s$. In this way, we map each domain-slot to a fixed embedding, which allows us to generate a domain-slot as a whole, instead of a token at each time step, during domain-slot sequence decoding. We also construct a static vocabulary embedding $E_v$ by feeding each token in the BERT vocabulary into BERT. The final static word embedding $E$ is the concatenation of $E_v$ and $E_s$. After we obtain the contextual embeddings for the user utterance and the system action, and the static embeddings for the previous belief state, we feed each of them into a bidirectional LSTM BIBREF15:
\begin{aligned}
\mathbf{h}^a_t &= \textrm{BiLSTM}(\mathbf{e}_{X^a_t}, \mathbf{h}^a_{t-1}) \\
\mathbf{h}^u_t &= \textrm{BiLSTM}(\mathbf{e}_{X^u_t}, \mathbf{h}^u_{t-1}) \\
\mathbf{h}^b_t &= \textrm{BiLSTM}(\mathbf{e}_{X^b_t}, \mathbf{h}^b_{t-1}) \\
\mathbf{h}^a_0 &= \mathbf{h}^u_0 = \mathbf{h}^b_0 = c_0
\end{aligned} \qquad \text{(Eq. 7)}
where $c_0$ is the zero-initialized hidden state of the BiLSTM. The hidden size of the BiLSTM is $d_m/2$. We concatenate the forward and backward hidden representations of each token to obtain the token representation $\mathbf{h}^k_t \in \mathbb{R}^{d_m}$, $k \in \lbrace a, u, b \rbrace$, at each time step $t$. The hidden states of all time steps are concatenated to obtain the final representation $H^k \in \mathbb{R}^{T \times d_m}$, $k \in \lbrace a, u, B \rbrace$. The parameters are shared between all of the BiLSTMs. Inspired by Residual Dense Networks BIBREF16, End-to-End Memory Networks BIBREF17, and Relation Networks BIBREF18, we propose the Conditional Memory Relation Decoder (CMRD). Given a token embedding $\mathbf{e}_x$, CMRD outputs the next token $s$ and the hidden representation $\mathbf{h}_s$, using hierarchical memory access over the different encoded information sources $H_B$, $H_a$, $H_u$ and relation reasoning under a given condition $\mathbf{c}$:
\mathbf{s}, \mathbf{h}_s = \textrm{CMRD}(\mathbf{e}_x, \mathbf{c}, H_B, H_a, H_u)
The final output matrices $S, H_s \in \mathbb{R}^{l_s \times d_m}$ are concatenations of all generated $\mathbf{s}$ and $\mathbf{h}_s$, respectively, along the sequence-length dimension, where $d_m$ is the model size and $l_s$ is the generated sequence length. The general structure of the CMR decoder is shown in Figure 4. Note that the CMR decoder can support additional memory sources by adding the corresponding residual connection and attention block, but here we only show the structure with three sources: the belief state representation $H_B$, the system transcript representation $H_a$, and the user utterance representation $H_u$, corresponding to the dialogue state tracking scenario. Since we share the parameters among all of the decoders, CMRD is effectively a two-dimensional autoregressive model with respect to both the condition generation and the sequence generation tasks. At each time step $t$, the CMR decoder first embeds the token $x_t$ with a fixed token embedding $E \in \mathbb{R}^{d_e \times d_v}$, where $d_e$ is the embedding size and $d_v$ is the vocabulary size. The initial token $x_0$ is [CLS]. The embedded vector $\mathbf{e}_{x_t}$ is then encoded with an LSTM, which emits a hidden representation $\mathbf{h}_0 \in \mathbb{R}^{d_m}$:
\mathbf{h}_0 = \textrm{LSTM}(\mathbf{e}_{x_t}, \mathbf{q}_{t-1})
where $\mathbf{q}_t$ is the hidden state of the LSTM. $\mathbf{q}_0$ is initialized with the average of the hidden states of the belief encoder, the system encoder, and the user encoder, which produce $H_B$, $H_a$, and $H_u$, respectively. $\mathbf{h}_0$ is then summed element-wise with the condition representation $\mathbf{c} \in \mathbb{R}^{d_m}$ to produce $\mathbf{h}_1$, which is (1) fed into the attention module, (2) used for a residual connection, and (3) concatenated with the other $\mathbf{h}_i$ ($i > 1$) to produce the concatenated working memory $\mathbf{r}_0$ for relation reasoning:
\mathbf{h}_1 = \mathbf{h}_0 + \mathbf{c}
\mathbf{h}_2 = \mathbf{h}_1 + \text{Attn}_{\text{belief}}(\mathbf{h}_1, H_B)
\mathbf{h}_3 = \mathbf{h}_2 + \text{Attn}_{\text{sys}}(\mathbf{h}_2, H_a)
\mathbf{h}_4 = \mathbf{h}_3 + \text{Attn}_{\text{usr}}(\mathbf{h}_3, H_u)
\mathbf{r}_0 = \mathbf{h}_1 \oplus \mathbf{h}_2 \oplus \mathbf{h}_3 \oplus \mathbf{h}_4 \in \mathbb{R}^{4 d_m}
where $\text{Attn}_k$, $k \in \lbrace \text{belief}, \text{sys}, \text{usr} \rbrace$, are the attention modules applied to $H_B$, $H_a$, and $H_u$, respectively, and $\oplus$ denotes concatenation. The gradients are blocked for $\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3$ during the back-propagation stage, since we only need them to act as supplementary memories for the relation reasoning that follows. The attention module takes a vector $\mathbf{h} \in \mathbb{R}^{d_m}$ and a matrix $H \in \mathbb{R}^{d_m \times l}$ as input, where $l$ is the sequence length of the representation, and outputs $\mathbf{h}_a$, a weighted sum of the column vectors in $H$:
\mathbf{a} = W_1^T \mathbf{h} + \mathbf{b}_1 \in \mathbb{R}^{d_m}
\mathbf{c} = \text{softmax}(H^T \mathbf{a}) \in \mathbb{R}^{l}
\mathbf{h} = H \mathbf{c} \in \mathbb{R}^{d_m}
\mathbf{h}_a = W_2^T \mathbf{h} + \mathbf{b}_2 \in \mathbb{R}^{d_m}
where the weights $W_1, W_2 \in \mathbb{R}^{d_m \times d_m}$ and the biases $\mathbf{b}_1, \mathbf{b}_2 \in \mathbb{R}^{d_m}$ are learnable parameters. The order of the attention modules (i.e., first attending to the system and the user and then to the belief) was decided empirically. We can interpret this hierarchical structure as an internal order for memory processing: by analogy with everyday experience, the model attends to the most recent memories (the system/user utterances) first and then to the older history (the belief states). All of the parameters are shared between the attention modules. The concatenated working memory $\mathbf{r}_0$ is then fed into a multi-layer perceptron (MLP) with four layers:
\mathbf{r}_1 = \sigma(W_1^T \mathbf{r}_0 + \mathbf{b}_1)
\mathbf{r}_2 = \sigma(W_2^T \mathbf{r}_1 + \mathbf{b}_2)
\mathbf{r}_3 = \sigma(W_3^T \mathbf{r}_2 + \mathbf{b}_3)
\mathbf{h}_s = \sigma(W_4^T \mathbf{r}_3 + \mathbf{b}_4)
where $\sigma$ is a non-linear activation, and the weights $W_1 \in \mathbb{R}^{4 d_m \times d_m}$, $W_i \in \mathbb{R}^{d_m \times d_m}$ and the biases $\mathbf{b}_1 \in \mathbb{R}^{d_m}$, $\mathbf{b}_i \in \mathbb{R}^{d_m}$ ($2 \le i \le 4$) are learnable parameters. The number of layers in the MLP was decided by grid search. The hidden representation of the next token, $\mathbf{h}_s$, is then (1) emitted from the decoder as a representation and (2) fed into a dropout layer with drop rate $p$ and a linear layer to generate the next token:
\mathbf{h}_k = \text{dropout}(\mathbf{h}_s) \in \mathbb{R}^{d_m}
\mathbf{h}_o = W_k^T \mathbf{h}_k + \mathbf{b}_k \in \mathbb{R}^{d_e}
\mathbf{p}_s = \text{softmax}(E^T \mathbf{h}_o) \in \mathbb{R}^{d_v}
s = \text{argmax}(\mathbf{p}_s) \in \mathbb{R}
where the weight Wkin Rdm times de and the bias bkin Rde are learnable parameters Since de is the embedding size and the model parameters are independent of the vocabulary size the CMR decoder can make predictions on a dynamic vocabulary and implicitly supports the generation of unseen words When training the model we minimize the crossentropy loss between the output probabilities mathbf ps and the given labels We first test our model on the single domain dataset WoZ20 BIBREF19 It consists of 1200 dialogues from the restaurant reservation domain with three predefined slots food price range and area Since the name slot rarely occurs in the dataset it is not included in our experiments following previous literature BIBREF3 BIBREF20 Our model is also tested on the multidomain dataset MultiWoZ BIBREF9 It has a more complex ontology with 7 domains and 25 predefined slots Since the combined slotvalue pairs representation of the belief states has to be applied for the model with On ITC the total number of slots is 35 The statistics of these two datsets are shown in Table 2 Based on the statistics from these two datasets we can calculate the theoretical Inference Time Multiplier ITM K as a metric of scalability Given the inference time complexity ITM measures how many times a model will be slower when being transferred from the WoZ20 dataset d1 to the MultiWoZ dataset d2
K = h_t \cdot h_s \cdot h_n \cdot h_m, \qquad
h_x = \left\lbrace
\begin{array}{ll}
1 & O(x) = O(1) \\
\frac{x_{d_2}}{x_{d_1}} & \text{otherwise}
\end{array}\right.
where Ox means the Inference Time Complexity ITC of the variable x For a model having an ITC of O1 with respect to the number of slots n and values m the ITM will be a multiplier of 215x while for an ITC of On it will be a multiplier of 251 and 1143 for Omn As a convention the metric of joint goal accuracy is used to compare our model to previous work The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system In this work the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement We use the textBERTtextlarge model for both contextual and static embedding generation All LSTMs in the model are stacked with 2 layers and only the output of the last layer is taken as a hidden representation ReLU nonlinearity is used for the activation function sigma The hyperparameters of our model are identical for both the WoZ20 and the MultiwoZ datasets dropout rate p05 model size dm512 embedding size de1024 For training on WoZ20 the model is trained with a batch size of 32 and the ADAM optimizer BIBREF21 for 150 epochs while for MultiWoZ the AMSGrad optimizer BIBREF22 and a batch size of 16 is adopted for 15 epochs of training For both optimizers we use a learning rate of 00005 with a gradient clip of 20 We initialize all weights in our model with Kaiming initialization BIBREF23 and adopt zero initialization for the bias All experiments are conducted on a single NVIDIA GTX 1080Ti GPU To measure the actual inference time multiplier of our model we evaluate the runtime of the bestperforming models on the validation sets of both the WoZ20 and MultiWoZ datasets During evaluation we set the batch size to 1 to avoid the influence of data parallelism and sequence padding On the validation set of WoZ20 we obtain a runtime of 656 seconds while on MultiWoZ the runtime is 8352 seconds Results are averaged across 5 runs Considering that the validation set of MultiWoZ is 5 times larger than that of WoZ20 the actual inference time multiplier is 254 for our model Since the actual inference time multiplier roughly of the same magnitude as the theoretical value of 215 we can confirm empirically that we have the O1 inference time complexity and thus obtain full scalability to the number of slots and values predefined in an ontology c compares our model with the previous stateoftheart on both the WoZ20 test set and the MultiWoZ test set For the WoZ20 dataset we maintain performance at the level of the stateoftheart with a marginal drop of 03 compared with previous work Considering the fact that WoZ20 is a relatively small dataset this small difference does not represent a significant big performance drop On the mulidomain dataset MultiWoZ our model achieves a joint goal accuracy of 4572 which is significant better than most of the previous models other than TRADE which applies the copy mechanism and gains better generalization ability on named entity coping To prove the effectiveness of our structure of the Conditional Memory Relation Decoder CMRD we conduct ablation experiments on the WoZ20 dataset We observe an accuracy drop of 195 after removing residual connections and the hierarchical stack of our attention modules This proves the effectiveness of our hierarchical attention design After 
the MLP is replaced with a linear layer of hidden size 512 and the ReLU activation function the accuracy further drops by 345 This drop is partly due to the reduction of the number of the model parameters but it also proves that stacking more layers in an MLP can improve the relational reasoning performance given a concatenation of multiple representations from different sources We also conduct the ablation study on the MultiWoZ dataset for a more precise analysis on the hierarchical generation process For joint domain accuracy we calculate the probability that all domains generated in each turn are exactly matched with the labels provided The joint domainslot accuracy further calculate the probability that all domains and slots generated are correct while the joint goal accuracy requires all the domains slots and values generated are exactly matched with the labels From abm We can further calculate that given the correct slot prediction COMER has 8352 chance to make the correct value prediction While COMER has done great job on domain prediction 9553 and value prediction 8352 the accuracy of the slot prediction given the correct domain is only 5730 We suspect that this is because we only use the previous belief state to represent the dialogue history and the interturn reasoning ability on the slot prediction suffers from the limited context and the accuracy is harmed due to the multiturn mapping problem BIBREF4 We can also see that the JDS Acc has an absolute boost of 548 when we switch from the combined slot representation to the nested tuple representation This is because the subordinate relationship between the domains and the slots can be captured by the hierarchical sequence generation while this relationship is missed when generating the domain and slot together via the combined slot representation f5 shows an example of the belief state prediction result in one turn of a dialogue on the MultiWoZ test set The visualization includes the CMRD attention scores over the belief states system transcript and user utterance during the decoding stage of the slot sequence From the system attention top right since it is the first attention module and no previous context information is given it can only find the information indicating the slot departure from the system utterance under the domain condition and attend to the evidence leaving correctly during the generation step of departure From the user attention we can see that it captures the most helpful keywords that are necessary for correct prediction such as after for day and leave at to for destination Moreover during the generation step of departure the user attention successfully discerns that based on the context the word leave is not the evidence that need to be accumulated and choose to attend nothing in this step For the belief attention we can see that the belief attention module correctly attends to a previous slot for each generation step of a slot that has been presented in the previous state For the generation step of the new slot destination since the previous state does not have the destination slot the belief attention module only attends to the mark after the train domain to indicate that the generated word should belong to this domain Semiscalable Belief Tracker BIBREF1 proposed an approach that can generate fixedlength candidate sets for each of the slots from the dialogue history Although they only need to perform inference for a fixed number of values they still need to iterate over all slots defined in the ontology to 
make a prediction for a given dialogue turn In addition their method needs an external language understanding module to extract the exact entities from a dialogue to form candidates which will not work if the label value is an abstraction and does not have the exact match with the words in the dialogue StateNet BIBREF3 achieves stateoftheart performance with the property that its parameters are independent of the number of slot values in the candidate set and it also supports online training or inference with dynamically changing slots and values Given a slot that needs tracking it only needs to perform inference once to make the prediction for a turn but this also means that its inference time complexity is proportional to the number of slots TRADE BIBREF4 achieves stateoftheart performance on the MultiWoZ dataset by applying the copy mechanism for the value sequence generation Since TRADE takes n combinations of the domains and slots as the input the inference time complexity of TRADE is On The performance improvement achieved by TRADE is mainly due to the fact that it incorporates the copy mechanism that can boost the accuracy on the name slot which mainly needs the ability in copying names from the dialogue history However TRADE does not report its performance on the WoZ20 dataset which does not have the name slot DSTRead BIBREF6 formulate the dialogue state tracking task as a reading comprehension problem by asking slot specified questions to the BERT model and find the answer span in the dialogue history for each of the predefined combined slot Thus its inference time complexity is still On This method suffers from the fact that its generation vocabulary is limited to the words occurred in the dialogue history and it has to do a manual combination strategy with another joint state tracking model on the development set to achieve better performance Contextualized Word Embedding CWE was first proposed by BIBREF25 Based on the intuition that the meaning of a word is highly correlated with its context CWE takes the complete context sentences passages etc as the input and outputs the corresponding word vectors that are unique under the given context Recently with the success of language models eg BIBREF12 that are trained on large scale data contextualizeds word embedding have been further improved and can achieve the same performance compared to less flexible finelytuned pipelines Sequence Generation Models Recently sequence generation models have been successfully applied in the realm of multilabel classification MLC BIBREF14 Different from traditional binary relevance methods they proposed a sequence generation model for MLC tasks which takes into consideration the correlations between labels Specifically the model follows the encoderdecoder structure with an attention mechanism BIBREF26 where the decoder generates a sequence of labels Similar to language modeling tasks the decoder output at each time step will be conditioned on the previous predictions during generation Therefore the correlation between generated labels is captured by the decoder In this work we proposed the Conditional Memory Relation Network COMER the first dialogue state tracking model that has a constant inference time complexity with respect to the number of domains slots and values predefined in an ontology Besides its scalability the joint goal accuracy of our model also achieve the similar performance compared with the stateofthearts on both the MultiWoZ dataset and the WoZ dataset Due to the flexibility of 
our hierarchical encoderdecoder framework and the CMR decoder abundant future research direction remains as applying the transformer structure incorporating open vocabulary and copy mechanism for explicit unseen words generation and inventing better dialogue history access mechanism to accommodate efficient interturn reasoning Acknowledgements This work is partly supported by NSF 1750063 We thank all the reviewers for their constructive suggestions We also want to thank Zhuowen Tu and Shengnan Zhang for the early discussions of the project </s>
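The hierarchical memory access inside CMRD can be illustrated with a short PyTorch sketch. It follows the displayed equations above (residual attention over $H_B$, $H_a$, $H_u$, then concatenation into $\mathbf{r}_0$) rather than any released implementation; the single shared attention block, the attention order, and the placement of the gradient blocking are our assumptions based on the text.

```python
# Illustrative PyTorch sketch of CMRD's memory access, not the authors' code.
# Shapes follow the paper's model size d_m = 512.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attn(nn.Module):
    def __init__(self, d_m):
        super().__init__()
        self.w1 = nn.Linear(d_m, d_m)   # a = W1^T h + b1
        self.w2 = nn.Linear(d_m, d_m)   # h_a = W2^T (H c) + b2

    def forward(self, h, H):            # h: (d_m,), H: (l, d_m)
        a = self.w1(h)                  # (d_m,)
        c = F.softmax(H @ a, dim=0)     # attention weights over l positions
        ctx = H.t() @ c                 # weighted sum of memory columns
        return self.w2(ctx)

class CMRDMemory(nn.Module):
    """One attention module with shared parameters, applied hierarchically."""
    def __init__(self, d_m=512):
        super().__init__()
        self.attn = Attn(d_m)

    def forward(self, h0, cond, H_B, H_a, H_u):
        # Order follows the displayed equations (belief -> system -> user);
        # the paper's prose reports the empirically chosen order as
        # system/user first, so treat this ordering as an assumption.
        h1 = h0 + cond
        h2 = h1 + self.attn(h1, H_B)
        h3 = h2 + self.attn(h2, H_a)
        h4 = h3 + self.attn(h3, H_u)
        # Gradients through the supplementary memories h1..h3 are blocked
        # before concatenation, per the description in the text.
        r0 = torch.cat([h1.detach(), h2.detach(), h3.detach(), h4], dim=-1)
        return r0                        # (4 * d_m,), fed into the 4-layer MLP
```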
<s>[INST] Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization [/INST] Stateoftheart models for almost all popular natural language processing tasks are based on deep neural networks trained on massive amounts of data A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language and to what extent they rely on storing massive amounts of exemplars and only make local generalizations BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 This question has led to sometimes heated debates between deep learning enthusiasts that are convinced neural networks can do almost anything and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems pointing out that crucial tests distinguishing between generalization and memorization have not been applied In this paper we take a pragmatic perspective on these issues As the target for learning we use entailment relations in an artificial language defined using first order logic FOL that is unambiguously compositional We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it and focus in particular on recurrent neural networks that are unambiguously connectionist trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 We report positive results on this challenge and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics The paper makes three main contributions First we develop a protocol for automatically generating data that can be used in entailment recognition tasks Second we demonstrate that several deep learning architectures succeed at one such task Third we present and apply a number of experiments to test whether models are capable of compositional generalization The data generation process is inspired by BIBREF13 an artificial language is defined sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic However our language is significantly more complex and instead of natural logic we use FOL Our main model is a recurrent network sketched in Figure 4 It is a socalled Siamese network because it uses the same parameters to process the left and the right sentence The upper part of the model is identical to BIBREF13 s recursive networks It consists of a comparison layer and a classification layer after which a softmax function is applied to determine the most probable target class The comparison layer takes the concatenation of two sentence vectors as input The number of cells equals the number of words so it differs per sentence Our setup resembles the Siamese architecture for learning sentence similarity of BIBREF25 and the LSTM classifier described in BIBREF18 In the diagram the dashed box indicates the location of an arbitrary recurrent unit We consider SRN BIBREF26 GRU BIBREF27 and LSTM BIBREF28 Training and testing accuracies after 50 training epochs averaged over five different model runs are shown in Table UID18 All recurrent models outperform the summing baseline Even the simplest recurrent network the SRN achieves 
higher training and testing accuracy scores than the treeshaped matrix model The GRU and LSTM even beat the tensor model The LSTM obtains slightly lower scores than the GRU which is unexpected given its more complex design but perhaps the current challenge does not require separate forget and input gates For more insight into the types of errors made by the bestperforming GRUbased model we refer to the confusion matrices in Appendix Error statistics The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences They can also outperform the treeshaped models on this task although they do not use any of the symbolic structure that seemed to explain the success of their recursive predecessors The recurrent classifiers have learned to apply their own strategies which we will investigate in the remainder of this paper Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents and is commonly understood as one of the fundamental aspects of human learning and reasoning BIBREF30 BIBREF31 It has often been claimed that neural networks operate on a merely associative basis lacking the compositional capacities to develop systematicity without an abundance of training data See eg BIBREF1 BIBREF2 BIBREF32 Especially recurrent models have recently been regarded quite sceptically in this respect following the negative results established by BIBREF3 and BIBREF4 Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data whereas human learning is robust with respect to such differences thanks to compositionality In this section we report more positive results on compositional reasoning of our Siamese networks We focus on zeroshot generalization correct classification of examples of a type that has not been observed before Provided that atomic constituents and production rules are understood compositionality does not require that abundantly many instances embodying a semantic category are observed We will consider in turn what setup is required to demonstrate zeroshot generalization to unseen lengths and to generalization to sentences composed of novel words We test if our recurrent models are capable of generalization to unseen lengths Neural models are often considered incapable of such generalization allegedly because they are limited to the training space BIBREF33 BIBREF34 BIBREF35 BIBREF36 We want to test if this is the case for the recurrent models studied in this paper The language mathcal L licenses a heavily constrained set of grammatical configurations but it does allow the sentence length to vary according to the number of included negations A perfectly compositional model should be able to interpret statements containing any number of negations on condition that it has seen an instantiation at least once at each position where this is allowed In a new experiment we train the models on pairs of sentences with length 5 7 or 8 and test on pairs of sentences with lengths 6 or 9 As before the training and test sets contain some 30000 and 5000 sentence pairs respectively Results are shown in Table UID19 All recurrent models obtain nearperfect training accuracy scores What happens on the test set is interesting It turns out that the GRU and LSTM can generalize from lengths 5 7 and 8 to 6 and 9 very well while the SRN faces serious difficulties 
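A minimal PyTorch sketch of the Siamese recurrent classifier described above is given below. The hidden sizes, the single-layer comparison network, and the seven-way output are illustrative defaults consistent with the text, not the exact hyperparameters used in the experiments.

```python
# Sketch of the Siamese entailment classifier: one shared GRU encodes both
# sentences; the concatenated sentence vectors pass through a comparison
# layer and a softmax classifier over the entailment relations.
import torch
import torch.nn as nn

class SiameseGRU(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, hidden=128, n_classes=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)  # shared for both sides
        self.compare = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh())
        self.classify = nn.Linear(hidden, n_classes)

    def encode(self, tokens):                 # tokens: (batch, seq_len) int ids
        _, h = self.encoder(self.embed(tokens))
        return h.squeeze(0)                   # (batch, hidden)

    def forward(self, left, right):
        v = torch.cat([self.encode(left), self.encode(right)], dim=-1)
        return self.classify(self.compare(v))  # logits over entailment classes

# Toy usage with index-encoded sentence pairs; train with cross-entropy loss.
model = SiameseGRU(vocab_size=100)
left = torch.randint(0, 100, (4, 9))
right = torch.randint(0, 100, (4, 6))
logits = model(left, right)                   # (4, 7)
```

Swapping `nn.GRU` for `nn.RNN` or `nn.LSTM` yields the SRN and LSTM variants compared in the table, with the rest of the architecture unchanged.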
It seems that training on lengths 5 7 and 8 and thereby skipping length 6 enables the GRU and LSTM to generalize to unseen sentence lengths 6 and 9 Training on lengths 57 and testing on lengths 89 yields low test scores for all models The GRU and LSTM gates appear to play a crucial role because the results show that the SRN does not have this capacity at all In the next experiment we assess whether our GRUbased model which performed best in the preceding experiments is capable of zeroshot generalization to sentences with novel words The current setup cannot deal with unknown words so instead of randomly initializing an embedding matrix that is updated during training we use pretrained 50dimensional GloVe embeddings BIBREF37 that are kept constant Using GloVe embeddings the GRU model obtains a mean training accuracy of 1000 and a testing accuracy of 959 averaged over five runs The bestperforming model with 1000 training and 971 testing accuracy is used in the following zeroshot experiments One of the most basic relations on the level of lexical semantics is synonymy which holds between words with equivalent meanings In the language mathcal L a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word this is a first piece of evidence for its zeroshot generalization skills We test this for several pairs of synonymous words The bestperforming GRU is first evaluated with respect to the fragment of the test data containing the original word w and consequently with respect to that same fragment after replacing the original word with its synonym sw The pairs of words the cosine distance cosdistwsw between their GloVe embeddings and the obtained results are listed in Table 6 For the first three examples in Table 6 substitution only decreases testing accuracy by a few percentage points Apparently the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs This does not prove that the model has distilled essential information about hyponymy from the GloVe embeddings It could also be that the word embeddings of the replacement words are geometrically very similar to the originals so that it is an algebraic necessity that the same results arise However this suspicion is inconsistent with the result of changing hate into detest The cosine distance between these words is 056 so according to this measure their vectors are more similar than those representing love and adore which have a cosine distance of 057 Nonetheless replacing hate with detest confuses the model whereas substitution of love into adore only decreases testing accuracy by 45 percentage points This illustrates that robustness of the GRU in this respect is not a matter of simple vector similarity In those cases where substitution into synonyms does not confuse the model it must have recognized a nontrivial property of the new word embedding that licenses particular inferences In our next experiment we replace a word not by its synonym but by a word that has the same semantics in the context of artificial language mathcal L We thus consider pairs of words that can be substituted with each other without affecting the entailment relation between any pair of sentences in which they feature We call such terms ontological twins Technically if 
odot is an arbitrary lexical entailment relation and mathcal O is an ontology then w and v are ontological twins if and only if w v in mathcal O and for all u in mathcal O if u notin lbrace wv rbrace then w odot u Leftrightarrow v odot u This trivially applies to selfidentical terms or synonyms but in the strictly defined hierarchy of mathcal L it is also the case for pairs of terms odot 0 that maintain the same lexical entailment relations to all other terms in the taxonomy Examples of ontological twins in the taxonomy of nouns mathcal Nmathcal L are Romans and Venetians This can easily be verified in the Venn diagram of Figure 1 by replacing Romans with Venetians and observing that the same hierarchy applies The same holds for eg Germans and Polish or for children and students For several such wordtwin pairs the GRU is evaluated with respect to the fragment of the test data containing the original word w and with respect to that same fragment after replacing the original word with ontological twin tw Results are shown in Table 7 The examples in Table 7 suggest that the bestperforming GRU is largely robust with respect to substitution into ontological twins Replacing Romans with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data As before there appears to be no correlation with vector similarity because the cosine distance between the different twin pairs has a much higher variation than the corresponding accuracy scores Germans can be changed into Polish without significant deterioration but substitution with Dutch greatly decreases testing accuracy The situation is even worse for Spanish Again cosine similarity provides no explanation Spanish is still closer to Germans than Neapolitans to Romans Rather the accuracy appears to be negatively correlated with the geographical distance between the national demonyms After replacing children with students women or linguists testing scores are still decent So far we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task The examples have illustrated that the model tends to do this quite well In the last zeroshot learning experiment we replace sets of nouns instead of single words in order to assess the flexibility of the relational semantics that our networks have learned Formally the replacement can be regarded as a function r mapping words w to substitutes rw Not all items have to be replaced For an ontology mathcal O the function r must be such that for any w v in mathcal O and lexical entailment relation odot w odot v Leftrightarrow rw odot rv The result of applying r can be called an alternative hierarchy An example of an alternative hierarchy is the result of the replacement function r1 that maps Romans to Parisians and Italians to French Performing this substitution in the Venn diagram of Figure 1 shows that the taxonomy remains structurally intact The bestperforming GRU is evaluated on the fragment of the test data containing Romans or Italians and consequently on the same fragment after implementing replacement r1 and providing the model with the GloVe embeddings of the unseen words Replacement r1 is incrementally modified up until replacement r4 which substitutes all nouns in mathcal Nmathcal L The results of applying r1 to r4 are shown in Table 8 The results are positive the GRU obtains 867 accuracy even after applying r4 which substitutes the 
entire ontology mathcal Nmathcal L so that no previously encountered nouns are present in the test set anymore although the sentences remain thematically somewhat similar to the original sentences Testing scores are above 87 for the intermediate substitutions r1 to r3 This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higherlevel entailment relations Even if all nouns are replaced by alternatives with embeddings that have not been witnessed or optimized beforehand the model obtains a high testing accuracy This establishes obvious compositional capacities because familiarity with structure and information about lexical semantics in the form of word embeddings are enough for the model to accommodate configurations of unseen words What happens when we consider ontologies that have the same structure but are thematically very different from the original ontology Three such alternative hierarchies are considered ranimals rreligion and rAmerica Each of these functions relocalizes the noun ontology in a totally different domain of discourse as indicated by their names Table 9 specifies the functions and their effect Testing accuracy decreases drastically which indicates that the model is sensitive to the changing topic Variation between the scores obtained after the three transformations is limited Although they are much lower than before they are still far above chance level for a sevenclass problem This suggests that the model is not at a complete loss as to the alternative noun hierarchies Possibly including a few relevant instances during training could already improve the results We established that our Siamese recurrent networks with SRN GRU or LSTM cells are able to recognize logical entailment relations without any a priori cues about syntax or semantics of the input expressions Indeed some of the recurrent setups even outperform treeshaped networks whose topology is specifically designed to deal with such tasks This indicates that recurrent networks can develop representations that can adequately process a formal language with a nontrivial hierarchical structure The formal language we defined did not exploit the full expressive power of firstorder predicate logic nevertheless by using standard firstorder predicate logic a standard theorem prover and a setup where the training set only covers a tiny fraction of the space of possible logical expressions our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization In particular the results of the zeroshot generalization experiments with novel lengths and novel words cannot be explained with a memorizeandinterpolate account ie an account of the working of deep neural networks that assumes all they do is store enormous training sets and generalize only locally These results are relevant pieces of evidence in the decadeslong debate on whether or not connectionist networks are fundamentally able to learn compositional solutions Although we do not have the illusion that our work will put this debate to an end we hope that it will help bring deep learning enthusiasts and skeptics a small step closer </s>
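The zero-shot substitution experiments can be summarized by a short sketch: load frozen GloVe vectors, swap a trained-on word for an unseen counterpart, and re-evaluate the trained classifier on the affected fragment of the test set. The `model.predict` interface and the GloVe file path below are hypothetical stand-ins for the actual pipeline.

```python
# Sketch of the lexical-substitution evaluation; helper names are assumptions.
import numpy as np

def load_glove(path="glove.6B.50d.txt"):
    vecs = {}
    with open(path, encoding="utf8") as f:
        for line in f:
            word, *nums = line.split()
            vecs[word] = np.asarray(nums, dtype=np.float32)
    return vecs

def cos_dist(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def substituted_accuracy(model, pairs, labels, old, new, glove):
    """pairs: (sentence1, sentence2) token lists from the test fragment
    containing `old`; `model.predict` is a hypothetical interface."""
    correct = 0
    for (s1, s2), y in zip(pairs, labels):
        s1 = [new if w == old else w for w in s1]
        s2 = [new if w == old else w for w in s2]
        pred = model.predict(s1, s2, embeddings=glove)
        correct += int(pred == y)
    return correct / len(labels)

glove = load_glove()
# The text reports a cosine distance of roughly 0.56 for this pair.
print(cos_dist(glove["hate"], glove["detest"]))
```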
<s>[INST] A Simple Method for Commonsense Reasoning [/INST] Although deep neural networks have achieved remarkable successes (e.g., BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 BIBREF11 BIBREF12 BIBREF13 BIBREF14), their dependence on supervised learning has been challenged as a significant weakness. This dependence prevents deep neural networks from being applied to problems where labeled data is scarce. An example of such problems is common sense reasoning, such as the Winograd Schema Challenge BIBREF0, where the labeled set is typically very small, on the order of hundreds of examples. Below is an example question from this dataset. Although it is straightforward for us to choose the answer to be "the trophy" according to our common sense, answering this type of question is a great challenge for machines because there is no training data, or very little of it. In this paper, we present a surprisingly simple method for common sense reasoning with Winograd schema multiple-choice questions. Key to our method is the use of language models (LMs), trained on a large amount of unlabeled data, to score multiple-choice questions posed by the challenge and similar datasets. More concretely, in the above example we will first substitute the pronoun "it" with the candidates "the trophy" and "the suitcase", and then use LMs to compute the probability of the two resulting sentences ("The trophy doesn't fit in the suitcase because the trophy is too big" and "The trophy doesn't fit in the suitcase because the suitcase is too big"). The substitution that results in a more probable sentence will be the correct answer.
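To make the substitute-and-score idea concrete, here is a minimal sketch of the resolver loop just described. It is not the authors' implementation; the `lm_logprob` argument is a hypothetical stand-in for any trained language model that can assign a log-probability to a sentence string.

```python
def resolve_pronoun(tokens, k, candidates, lm_logprob):
    """Return the candidate whose substitution at position k yields the more
    probable sentence, following the substitute-and-score idea above.

    tokens     : the question sentence as a list of words
    k          : index of the pronoun to be resolved
    candidates : e.g. ["the trophy", "the suitcase"]
    lm_logprob : hypothetical function mapping a sentence string to its total
                 log-probability under a trained language model (assumed)
    """
    best_candidate, best_score = None, float("-inf")
    for candidate in candidates:
        # Build the substituted sentence S_{w_k <- c}.
        substituted = tokens[:k] + candidate.split() + tokens[k + 1:]
        score = lm_logprob(" ".join(substituted))
        if score > best_score:
            best_candidate, best_score = candidate, score
    return best_candidate
```

For the trophy/suitcase example, the two strings passed to the scorer are exactly the two substituted sentences quoted above.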
A unique feature of Winograd Schema questions is the presence of a special word that decides the correct reference choice. In the above example, "big" is this special word; when "big" is replaced by "small", the correct answer switches to "the suitcase". Although detecting this feature is not part of the challenge, further analysis shows that our system successfully discovers this special word to make its decisions in many cases, indicating a good grasp of commonsense knowledge. Unsupervised learning has been used to discover simple commonsense relationships. For example, Mikolov et al. BIBREF15 BIBREF16 show that by learning to predict adjacent words in a sentence, word vectors can be used to answer analogy questions such as Man:King :: Woman:Queen. Our work uses a similar intuition that language modeling can naturally capture common sense knowledge; the difference is that Winograd Schema questions require more contextual information, hence our use of LMs instead of just word vectors. Neural LMs have also been applied successfully to improve downstream applications BIBREF17 BIBREF18 BIBREF19 BIBREF20. In BIBREF17 BIBREF18 BIBREF19 BIBREF20, researchers have shown that pretrained LMs can be used as feature representations for a sentence or a paragraph to improve NLP applications such as document classification, machine translation, and question answering. The combined evidence suggests that LMs trained on a massive amount of unlabeled data can capture many aspects of natural language and the world's knowledge, especially commonsense information. Previous attempts at solving the Winograd Schema Challenge usually involve heavy utilization of annotated knowledge bases, rule-based reasoning, or handcrafted features BIBREF21 BIBREF22 BIBREF23. In particular, Rahman and Ng BIBREF24 employ human annotators to build more supervised training data; their model utilizes nearly 70K handcrafted features, including querying data from the Google Search API. Sharma et al. BIBREF25 rely on a semantic parser to understand the question, query texts through Google Search, and reason on the graph produced by the parser. Similarly, Schüller BIBREF23 formalizes the knowledge-graph data structure and a reasoning process based on cognitive linguistics theories. Bailey et al. BIBREF22 introduce a framework for reasoning using expensive annotated knowledge bases as axioms. The current best approach makes use of the skip-gram model to learn word representations BIBREF26; the model incorporates several knowledge bases to regularize its training process, resulting in Knowledge Enhanced Embeddings (KEE). A semantic similarity scorer and a deep neural network classifier are then combined on top of KEE to predict the answers. The final system therefore includes both supervised and unsupervised models, besides three different knowledge bases. In contrast, our unsupervised method is simpler while having significantly higher accuracy, and unsupervised training is done on text corpora that can be cheaply curated. Using language models in reading comprehension tests has also produced many successes. Namely, Chu et al. BIBREF27 used bidirectional RNNs to predict the last word of a passage in the LAMBADA challenge. Similarly, LMs are also used to produce features for a classifier in the Story Cloze Test 2017, giving the best accuracy against other methods BIBREF28. In a broader context, LMs are used to produce good word embeddings, which significantly improved a wide variety of downstream tasks, including the general problem of question answering BIBREF19 BIBREF29. We first substitute the pronoun in the original sentence with each of the candidate choices. The problem of coreference resolution then reduces to identifying which substitution results in a more probable sentence. By reframing the problem this way, language modeling becomes a natural solution by its definition: LMs are trained on text corpora, which encode human knowledge in the form of natural language, and during inference LMs are able to assign a probability to any given text based on what they have learned from training data. An overview of our method is shown in Figure 1. Suppose the sentence $S$ of $n$ consecutive words has its pronoun to be resolved specified at the $k$-th position: $S = \{w_1, \ldots, w_{k-1}, w_k \equiv p, w_{k+1}, \ldots, w_n\}$. We make use of a trained language model $P_\theta(w_t \mid w_1, w_2, \ldots, w_{t-1})$, which defines the probability of word $w_t$ conditioned on the previous words $w_1, \ldots, w_{t-1}$. The substitution of a candidate reference $c$ into the pronoun position $k$ results in a new sentence $S_{w_k \leftarrow c}$ (we use this notation to mean that word $w_k$ is substituted by candidate $c$). We consider two different ways of scoring the substitution: full scoring, which scores how probable the resulting full sentence is, and partial scoring, which scores how probable the part of the resulting sentence following $c$ is, given its antecedent. In other words, partial scoring only scores a part of $S_{w_k \leftarrow c}$, conditioned on the rest of the substituted sentence. An example of these two scores is shown in Table 1. In our experiments, we find that the partial scoring strategy is generally better than the naive full scoring strategy.
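As a rough illustration of the two scoring strategies just defined, the sketch below computes both from per-token conditional log-probabilities. The `token_logprobs` interface is an assumption of this sketch (it returns $\log P_\theta(w_t \mid w_1, \ldots, w_{t-1})$ for every position of a tokenized sentence) and does not correspond to any code released with the paper.

```python
def full_and_partial_scores(tokens, k, candidate, token_logprobs):
    """Score the substitution S_{w_k <- candidate} in the two ways described above.

    tokens           : the original sentence as a list of words, pronoun at index k
    token_logprobs(s): assumed to return, for a tokenized sentence s, a list with
                       log P(w_t | w_1 ... w_{t-1}) for every position t
    """
    cand_tokens = candidate.split()
    substituted = tokens[:k] + cand_tokens + tokens[k + 1:]
    logps = token_logprobs(substituted)

    # Full scoring: log-probability of the entire substituted sentence.
    score_full = sum(logps)

    # Partial scoring: log-probability of only the words that follow the
    # candidate, conditioned on the candidate and everything before it.
    after_candidate = k + len(cand_tokens)
    score_partial = sum(logps[after_candidate:])
    return score_full, score_partial
```

Choosing the candidate with the higher partial score mirrors the strategy the paper reports as stronger on WSC-273.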
In this section, we describe the tests for commonsense reasoning and the LMs used to solve these tasks; we also detail the training text corpora used in our experiments. Our experiments start by testing LMs trained on all text corpora with PDP-60 and WSC-273; next, we show that it is possible to customize training data to obtain even better results. We first examine unsupervised single-model resolvers on PDP-60 by training one character-level and one word-level LM on the Gutenberg corpus. In Table 2, these two resolvers outperform previous results by a large margin. For this task, we found that full scoring gives better results than partial scoring; in Section "Partial scoring is better than full scoring" we provide evidence that this is an atypical case due to the very small size of PDP-60. Next, we allow systems to take in necessary components to maximize their test performance; this includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. Here we simply train another three variants of LMs, on LM-1-Billion, CommonCrawl, and SQuAD, and ensemble all of them. As reported in Table 3, this ensemble of five unsupervised models outperforms the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 70.0% accuracy, better than the more recently reported results from Quan Liu et al. (66.7%) BIBREF26, who make use of three knowledge bases and a supervised deep neural network. On the harder task, WSC-273, our single-model resolvers also outperform the current state of the art by a large margin, as shown in Table 4; namely, our word-level resolver achieves an accuracy of 56.4%. By training another four LMs, each on one of the four text corpora (LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books), and adding them to the previous ensemble, we are able to reach 61.5%, nearly 10% of accuracy above the previous best result. This is a drastic improvement considering that this previous best system outperforms random guessing by only 3% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems is much lower than on PDP-60. Second, incorporating supervised learning and expensive annotated knowledge bases into USSM provides an insignificant gain this time (+3%), compared to the large gain on PDP-60 (+19%). As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 BIBREF25, we also explore using this option. Namely, we build a customized text corpus based on the questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that have the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F1_n$ scores when counting overlapping n-grams: $\textit{SimilarityScore}(d) = \frac{\sum_{n=1}^{4} n \cdot F1_n}{\sum_{n=1}^{4} n}$. The top 0.1% of highest-ranked documents is chosen as our new training corpus. Details of the ranking are shown in Figure 2. This procedure resulted in nearly 1,000,000 documents, with the highest-ranking document having a score of $8 \times 10^{-2}$, still relatively small compared to a perfect score of 1.0. We name this dataset STORIES, since most of the constituent documents take the form of a story with a long chain of coherent events.
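One possible reading of this ranking procedure is sketched below, under the assumption that $F1_n$ is the harmonic mean of n-gram precision and recall between a candidate document and the concatenated question texts; the tokenization and counting details are my own simplifications, not a specification from the paper.

```python
from collections import Counter

def ngram_counts(tokens, n):
    # Multiset of n-grams occurring in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def f1_overlap(doc_tokens, query_tokens, n):
    """F1 of n-gram overlap between a document and the concatenated questions."""
    doc, query = ngram_counts(doc_tokens, n), ngram_counts(query_tokens, n)
    overlap = sum((doc & query).values())        # min-count intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(doc.values())
    recall = overlap / sum(query.values())
    return 2 * precision * recall / (precision + recall)

def similarity_score(doc_tokens, query_tokens):
    # Weighted combination of F1_1 ... F1_4, weighted by n as in the formula above.
    weights = range(1, 5)
    return sum(n * f1_overlap(doc_tokens, query_tokens, n) for n in weights) / sum(weights)
```

Ranking all CommonCrawl documents by this score and keeping the top fraction would yield a question-flavored but answer-free training corpus of the kind described.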
We train four different LMs on STORIES and add them to the previous ensemble of 10 LMs, resulting in a gain of 2% accuracy in the final system, as shown in Table 5. Remarkably, single models trained on this corpus are already extremely strong, with a word-level LM achieving 62.6% accuracy, even better than the ensemble of 10 models previously trained on the four other text corpora (61.5%). We introduce a method to potentially detect keywords at which our proposed resolvers make the decision between the two candidates $c_{\textit{correct}}$ and $c_{\textit{incorrect}}$. Namely, we look at the following ratio: $q_t = \frac{P_\theta(w_t \mid w_1, w_2, \ldots, w_{t-1};\ w_k \leftarrow c_{\textit{correct}})}{P_\theta(w_t \mid w_1, w_2, \ldots, w_{t-1};\ w_k \leftarrow c_{\textit{incorrect}})}$, where $1 \le t \le n$ for full scoring and $k+1 \le t \le n$ for partial scoring. It follows that the choice between $c_{\textit{correct}}$ and $c_{\textit{incorrect}}$ is made by the value of $Q = \prod_t q_t$ being greater than 1.0 or not. By looking at the value of each individual $q_t$, it is possible to retrieve the words with the largest values of $q_t$, and hence those most responsible for the final value of $Q$.
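The keyword-detection idea can be sketched as follows. As before, `token_logprobs` is an assumed per-token scoring interface, and aligning positions after candidates of different lengths is my own simplification rather than a detail given in the paper.

```python
import math

def keyword_ratios(tokens, k, correct, incorrect, token_logprobs):
    """q_t = P(w_t | ..., w_k <- correct) / P(w_t | ..., w_k <- incorrect)
    for every word following the pronoun position, as described above."""
    def logps_after(candidate):
        cand = candidate.split()
        sent = tokens[:k] + cand + tokens[k + 1:]
        # Keep only the positions after the candidate so that both variants
        # align on the same remaining words even if the candidates differ in length.
        return token_logprobs(sent)[k + len(cand):]

    logp_correct = logps_after(correct)
    logp_incorrect = logps_after(incorrect)
    return [math.exp(a - b) for a, b in zip(logp_correct, logp_incorrect)]

def predicted_keyword(tokens, k, correct, incorrect, token_logprobs):
    """Return the word after the pronoun with the largest q_t and its ratio."""
    ratios = keyword_ratios(tokens, k, correct, incorrect, token_logprobs)
    remaining = tokens[k + 1:]
    t_best = max(range(len(ratios)), key=ratios.__getitem__)
    return remaining[t_best], ratios[t_best]
```

The product of these ratios exceeding 1.0 recovers the partial-scoring decision, while the largest individual ratio points at the candidate special word.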
We visualize the probability ratios $q_t$ to gain more insight into the decisions of our resolvers. Figure 3 displays a sample of incorrect decisions made by full scoring that are corrected by partial scoring. Interestingly, we found that $q_t$ with large values coincides with the special keyword of each Winograd Schema in several cases. Intuitively, this means the LMs assigned very low probability to the keyword after observing the wrong substitution. It follows that we can predict the keyword in each Winograd Schema question by selecting the top word positions with the highest values of $q_t$. For questions with the keyword appearing before the reference, we detect them with backward-scoring models; namely, we ensemble six LMs, each trained on one text corpus with the word order reversed. This ensemble also outperforms the previous best system on WSC-273, with a remarkable accuracy of 58.2%. Overall, we are able to discover a significant number of special keywords (115 out of 178 correctly answered questions), as shown in Table 6. This strongly indicates a correct understanding of the context and a good grasp of commonsense knowledge in the resolvers' decision process. In this set of experiments, we look at wrong predictions from a word-level LM. With the full scoring strategy, we observe that $q_t$ at the pronoun position is most responsible for a very large percentage of incorrect decisions, as shown in Figure 3 and Table 7. For example, with the test "The trophy cannot fit in the suitcase because it is too big", the system might return $c_{\textit{incorrect}} =$ "suitcase" simply because $c_{\textit{correct}} =$ "trophy" is a very rare word in its training corpus and is therefore assigned a very low probability, overpowering the subsequent $q_t$ values. Following this reasoning, we apply a simple fix to full scoring by normalizing its score with the unigram count of $c$: $\textit{Score}_{\textit{full-normalized}} = \textit{Score}_{\textit{full}} / \textit{Count}(c)$. Partial scoring, on the other hand, disregards $c$ altogether. As shown in Figure 4, this normalization fixes full scoring in 9 out of 10 tested LMs on PDP-122. On WSC-273, the result is very decisive, as partial scoring strongly outperforms the other two scoring methods in all cases. Since PDP-122 is a larger superset of PDP-60, we attribute the different behaviour observed on PDP-60 to an atypical case due to its very small size. In this set of experiments, we examine the effect of training data on commonsense reasoning test performance. Namely, we train both word-level and character-level LMs on each of the five corpora: LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and STORIES. A held-out dataset from each text corpus is used for early stopping on the corresponding training data. To speed up training on these large corpora, we first train the models on the LM-1-Billion text corpus. Each trained model is then divided into three groups of parameters: Embedding, Recurrent Cell, and Softmax. Each of the three is optionally transferred to train the same architectures on CommonCrawl, SQuAD, and Gutenberg Books; the best transferring combination is chosen by cross-validation. Figure 5 (left and middle) shows that STORIES always yields the highest accuracy for both types of input processing. We next rank the text corpora based on ensemble performance for more reliable results. Namely, we compare the previous ensemble of 10 models against the same set of models trained on each single text corpus. This time, the original ensemble trained on a diverse set of text corpora outperforms all other single-corpus ensembles, including STORIES. This highlights the important role of diversity in training data for the commonsense reasoning accuracy of the final system. We introduced a simple unsupervised method for commonsense reasoning tasks. Key to our proposal are large language models trained on a number of massive and diverse text corpora. The resulting systems outperform previous best systems on both Pronoun Disambiguation Problems and the Winograd Schema Challenge. Remarkably, on the latter benchmark we are able to achieve 63.7% accuracy, compared to 52.8% accuracy for the previous state of the art, which utilizes supervised learning and expensively annotated knowledge bases. We analyzed our system's answers and observed that it discovers key features of the question that decide the correct answer, indicating a good understanding of the context and of commonsense knowledge. We also demonstrated that ensembles of models benefit the most when trained on a diverse set of text corpora. We anticipate that this simple technique will be a strong building block for future systems that utilize reasoning ability over commonsense knowledge. The base model consists of two layers of Long Short-Term Memory (LSTM) BIBREF31 with 8,192 hidden units. The output gate of each LSTM uses peepholes and a projection layer to reduce its output dimensionality to 1,024. We perform dropout on the LSTMs' outputs with probability 0.25. For word inputs, we use an embedding lookup of 800,000 words, each with dimension 1,024. For character inputs, we use an embedding lookup of 256 characters, each with dimension 16. We concatenate all characters in each word into a tensor of shape (word length, 16) and add the begin-of-word and end-of-word tokens to its two ends. The resulting concatenation is zero-padded to produce a fixed-size tensor of shape (50, 16). This tensor is then processed by eight different 1-D convolution (Conv) kernels of different sizes and numbers of output channels (listed in Table 8), each followed by a ReLU activation. The outputs of all CNNs are then concatenated and processed by two other fully-connected layers with highway connections that preserve the input dimensionality. The resulting tensor is projected down to a 1,024-feature vector.
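A speculative PyTorch sketch of the character-level word encoder just described is given below. The kernel widths and channel counts stand in for the Table 8 values, which are not reproduced here, and the max-over-time pooling step is my assumption (the text only says the convolution outputs are concatenated), so this should be read as an illustration of the architecture shape rather than the exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Highway(nn.Module):
    """y = g * relu(W1 x) + (1 - g) * x, with g = sigmoid(W2 x); keeps dimensionality."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))
        return g * F.relu(self.transform(x)) + (1 - g) * x

class CharWordEncoder(nn.Module):
    """Embed 256 characters into 16 dims, run eight 1-D convolutions over the
    padded 50x16 character tensor, pool over time, apply two highway layers,
    and project to a 1,024-dimensional word vector."""
    def __init__(self, kernel_specs=((1, 32), (2, 32), (3, 64), (4, 128),
                                     (5, 256), (6, 512), (7, 1024), (8, 2048))):
        # kernel_specs are illustrative placeholders for the unspecified Table 8 values.
        super().__init__()
        self.char_emb = nn.Embedding(256, 16)
        self.convs = nn.ModuleList(
            [nn.Conv1d(16, channels, kernel_size=width) for width, channels in kernel_specs])
        total = sum(channels for _, channels in kernel_specs)
        self.highways = nn.Sequential(Highway(total), Highway(total))
        self.proj = nn.Linear(total, 1024)

    def forward(self, char_ids):                       # char_ids: (batch, 50) int64
        x = self.char_emb(char_ids).transpose(1, 2)    # (batch, 16, 50)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        h = torch.cat(pooled, dim=1)                   # (batch, sum of channels)
        return self.proj(self.highways(h))             # (batch, 1024)
```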
For both word input and character input, we perform dropout on the tensors that go into the LSTM layers with probability 0.25. We use a single fully-connected layer followed by a softmax operator to process the LSTM's output and produce a distribution over the word vocabulary of size 800K. During training, the LM loss is evaluated using importance sampling with a negative sample size of 8,192. This loss is minimized using the AdaGrad BIBREF37 algorithm with a learning rate of 0.2. All gradients on LSTM parameters and character-embedding parameters are clipped by their global norm at 1.0. To avoid storing large matrices in memory, we shard them into 32 equal-sized smaller pieces. In our experiments, we used eight different variants of this base model, as listed in Table 9. In Table 10, we list all LMs and their training text corpora used in each of the experiments in Section "Main results". Using the similarity scoring technique in Section "Customized training data for Winograd Schema Challenge", we observe a large amount of low-quality training text at the lower end of the ranking; namely, these are documents whose content is mostly unintelligible or unrecognized by our vocabulary. Training LMs for commonsense reasoning tasks on the full CommonCrawl corpus therefore might not be ideal. On the other hand, we detected and removed a portion of the PDP-122 questions, which appeared as an extremely highly ranked document. </s>
<s>[INST] Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology [/INST] One of the biggest challenges faced by modern natural language processing NLP systems is the inadvertent replication or amplification of societal biases This is because NLP systems depend on language corpora which are inherently not objective they are creations of human design BIBREF0 One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 BIBREF2 BIBREF3 Gender stereotypes can manifest in language in overt ways For example the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering Consequently any NLP system that is trained such a corpus will likely learn to associate engineer with men but not with women BIBREF4 To date the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 BIBREF6 BIBREF7 Yet gender stereotypes also exist in other languages because they are a function of society not of grammar Moreover because English does not mark grammatical gender approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 In these languages the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns This means that if the gender of one word changes the others have to be updated to match As a result simple heuristics such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 will yield ungrammatical sentences Consider the Spanish phrase el ingeniero experto the skilled engineer Replacing ingeniero with ingeniera is insufficientel must also be replaced with la and experto with experta In this paper we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns ie nouns that represent people for morphologically rich languages We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns We use this model as part of a fourstep process depicted in figpipeline to reinflect entire sentences following an intervention on the grammatical gender of one word We intrinsically evaluate our approach using Spanish and Hebrew achieving taglevel INLINEFORM0 scores of 83 and 72 and formlevel accuracies of 90 and 87 respectively We also conduct an extrinsic evaluation using four languages Following DBLPjournalscorrabs180711714 we show that on average our approach reduces gender stereotyping in neural language models by a factor of 25 without sacrificing grammaticality Men and women are mentioned at different rates in text BIBREF11 This problem is exacerbated in certain contexts For example the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus such as giving preference to male engineers over female engineers in an automated resum filtering system Gender stereotypes of this sort have been observed in word embeddings BIBREF5 BIBREF3 contextual word embeddings BIBREF12 and coreference resolution systems BIBREF13 BIBREF9 inter alia In this 
section we present a Markov random field BIBREF17 for morphosyntactic agreement This model defines a joint distribution over sequences of morphosyntactic tags conditioned on a labeled dependency tree with associated partofspeech tags Given an intervention on a gendered word we can use this model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement A dependency tree for a sentence see figtree for an example is a set of ordered triples INLINEFORM0 where INLINEFORM1 and INLINEFORM2 are positions in the sentence or a distinguished root symbol and INLINEFORM3 is the label of the edge INLINEFORM4 in the tree each position occurs exactly once as the first element in a triple Each dependency tree INLINEFORM5 is associated with a sequence of morphosyntactic tags INLINEFORM6 and a sequence of partofspeech POS tags INLINEFORM7 For example the tags INLINEFORM8 and INLINEFORM9 for ingeniero are INLINEFORM10 and INLINEFORM11 respectively because ingeniero is a masculine singular noun For notational simplicity we define INLINEFORM12 to be the set of all length INLINEFORM13 sequences of morphosyntactic tags We define the probability of INLINEFORM0 given INLINEFORM1 and INLINEFORM2 as DISPLAYFORM0 where the binary factor INLINEFORM0 scores how well the morphosyntactic tags INLINEFORM1 and INLINEFORM2 agree given the POS tags INLINEFORM3 and INLINEFORM4 and the label INLINEFORM5 For example consider the INLINEFORM6 adjectival modifier edge from experto to ingeniero in figtree The factor INLINEFORM7 returns a high score if the corresponding morphosyntactic tags agree in gender and number eg INLINEFORM8 and INLINEFORM9 and a low score if they do not eg INLINEFORM10 and INLINEFORM11 The unary factor INLINEFORM12 scores a morphosyntactic tag INLINEFORM13 outside the context of the dependency tree As we explain in secconstraint we use these unary factors to force or disallow particular tags when performing an intervention we do not learn them eqdist is normalized by the following partition function INLINEFORM14 INLINEFORM0 can be calculated using belief propagation we provide the update equations that we use in secbp Our model is depicted in figfg It is noteworthy that this model is delexicalizedie it considers only the labeled dependency tree and the POS tags not the actual words themselves We consider a linear parameterization and a neural parameterization of the binary factor INLINEFORM0 We define a matrix INLINEFORM0 for each triple INLINEFORM1 where INLINEFORM2 is the number of morphosyntactic subtags For example INLINEFORM3 has two subtags INLINEFORM4 and INLINEFORM5 We then define INLINEFORM6 as follows INLINEFORM7 where INLINEFORM0 is a multihot encoding of INLINEFORM1 As an alternative we also define a neural parameterization of INLINEFORM0 to allow parameter sharing among edges with different parts of speech and labels INLINEFORM1 where INLINEFORM0 INLINEFORM1 and INLINEFORM2 and INLINEFORM3 define the structure of the neural parameterization and each INLINEFORM4 is an embedding function We use the unary factors only to force or disallow particular tags when performing an intervention Specifically we define DISPLAYFORM0 where INLINEFORM0 is a strength parameter that determines the extent to which INLINEFORM1 should remain unchanged following an intervention In the limit as INLINEFORM2 all tags will remain unchanged except for the tag directly involved in the intervention Because our MRF is acyclic and treeshaped we can use belief propagation BIBREF18 to 
perform exact inference The algorithm is a generalization of the forwardbackward algorithm for hidden Markov models BIBREF19 Specifically we pass messages from the leaves to the root and vice versa The marginal distribution of a node is the pointwise product of all its incoming messages the partition function INLINEFORM0 is the sum of any nodes marginal distribution Computing INLINEFORM1 takes polynomial time BIBREF18 specifically INLINEFORM2 where INLINEFORM3 is the number of morphosyntactic tags Finally inferring the highestprobability morphosyntactic tag sequence INLINEFORM4 given INLINEFORM5 and INLINEFORM6 can be performed using the maxproduct modification to belief propagation We use gradientbased optimization We treat the negative loglikelihood INLINEFORM0 as the loss function for tree INLINEFORM1 and compute its gradient using automatic differentiation BIBREF20 We learn the parameters of secparam by optimizing the negative loglikelihood using gradient descent As explained in secgender our goal is to transform sentences like sentmsc to sentfem by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement For example if we change the morphosyntactic tag for ingeniero from mscsg to femsg then we must also update the tags for el and experto but do not need to update the tag for es which should remain unchanged as in pr sg If we intervene on the INLINEFORM0 word in a sentence changing its tag from INLINEFORM1 to INLINEFORM2 then using our model to infer the manner in which the remaining tags must be updated means using INLINEFORM3 to identify highprobability tags for the remaining words Crucially we wish to change as little as possible when intervening on a gendered word The unary factors INLINEFORM0 enable us to do exactly this As described in the previous section the strength parameter INLINEFORM1 determines the extent to which INLINEFORM2 should remain unchanged following an interventionthe larger the value the less likely it is that INLINEFORM3 will be changed Once the new tags have been inferred the final step is to reinflect the lemmata to their new forms This task has received considerable attention from the NLP community BIBREF21 BIBREF22 We use the inflection model of D181473 This model conditions on the lemma INLINEFORM0 and morphosyntactic tag INLINEFORM1 to form a distribution over possible inflections For example given experto and INLINEFORM2 the trained inflection model will assign a high probability to expertas We provide accuracies for the trained inflection model in tabreinflect We used the Adam optimizer BIBREF23 to train both parameterizations of our model until the change in devloss was less than INLINEFORM0 bits We set INLINEFORM1 without tuning and chose a learning rate of INLINEFORM2 and weight decay factor of INLINEFORM3 after tuning We tuned INLINEFORM4 in the set INLINEFORM5 and chose INLINEFORM6 For the neural parameterization we set INLINEFORM7 and INLINEFORM8 without any tuning Finally we trained the inflection model using only gendered words We evaluate our approach both intrinsically and extrinsically For the intrinsic evaluation we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections For the extrinsic evaluation we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models To the best of our knowledge this task has not been studied previously As a 
result there is no existing annotated corpus of paired sentences that can be used as ground truth We therefore annotated Spanish and Hebrew sentences ourselves with annotations made by native speakers of each language Specifically for each language we extracted sentences containing animate nouns from that languages UD treebank The average length of these extracted sentences was 37 words We then manually inspected each sentence intervening on the gender of the animate noun and reinflecting the sentence accordingly We chose Spanish and Hebrew because gender agreement operates differently in each language We provide corpus statistics for both languages in the top two rows of tabdata We created a hardcoded INLINEFORM0 to serve as a baseline for each language For Spanish we only activated ie set to a number greater than zero values that relate adjectives and determiners to nouns for Hebrew we only activated values that relate adjectives and verbs to nouns We created two separate baselines because gender agreement operates differently in each language To evaluate our approach we held all morphosyntactic subtags fixed except for gender For each annotated sentence we intervened on the gender of the animate noun We then used our model to infer which of the remaining tags should be updated updating a tag means swapping the gender subtag because all morphosyntactic subtags were held fixed except for gender and reinflected the lemmata Finally we used the annotations to compute the taglevel INLINEFORM0 score and the formlevel accuracy excluding the animate nouns on which we intervened We present the results in tabintrinsic Recall is consistently significantly lower than precision As expected the baselines have the highest precision though not by much This is because they reflect wellknown rules for each language That said they have lower recall than our approach because they fail to capture more subtle relationships For both languages our approach struggles with conjunctions For example consider the phrase l es un ingeniero y escritor he is an engineer and a writer Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora This is because two nouns do not normally need to have the same gender when they are conjoined Moreover our MRF does not include coreference information so it cannot tell that in this case both nouns refer to the same person Note that including coreference information in our MRF would create cycles and inference would no longer be exact Additionally the lack of coreference information means that for Spanish our approach fails to convert nouns that are nounmodifiers or indirect objects of verbs Somewhat surprisingly the neural parameterization does not outperform the linear parameterization We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels however this parameter sharing does not seem to make a difference in practice so the linear parameterization is sufficient We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping Following DBLPjournalscorrabs180711714 focus on neural language models We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages As our measure of gender stereotyping we compare the log ratio of the prefix probabilities under a language model INLINEFORM0 for gendered animate nouns such as ingeniero combined 
with four adjectives good bad smart and beautiful The translations we use for these adjectives are given in sectranslation We chose the first two adjectives because they should be used equally to describe men and women and the latter two because we expect that they will reveal gender stereotypes For example consider DISPLAYFORM0 If this log ratio is close to 0 then the language model is as likely to generate sentences that start with el ingeniero bueno the good male engineer as it is to generate sentences that start with la ingeniera bueno the good female engineer If the log ratio is negative then the language model is more likely to generate the feminine form than the masculine form while the opposite is true if the log ratio is positive In practice given the current gender disparity in engineering we would expect the log ratio to be positive If however the language model were trained on a corpus to which our CDA approach had been applied we would then expect the log ratio to be much closer to zero Because our approach is specifically intended to yield sentences that are grammatical we additionally consider the following log ratio ie the grammatical phrase over the ungrammatical phrase DISPLAYFORM0 We trained the linear parameterization using UD treebanks for Spanish Hebrew French and Italian see tabdata For each of the four languages we parsed one million sentences from Wikipedia May 2018 dump using BIBREF24 s parser and extracted taggings and lemmata using the method of BIBREF25 We automatically extracted an animacy gazetteer from WordNet BIBREF26 and then manually filtered the output for correctness We provide the size of the languages animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in tabanim For each sentence containing a noun in our animacy gazetteer we created a copy of the sentence intervened on the noun and then used our approach to transform the sentence For sentences containing more than one animate noun we generated a separate sentence for each possible combination of genders Choosing which sentences to duplicate is a difficult task For example alemn in Spanish can refer to either a German man or the German language however we have no way of distinguishing between these two meanings without additional annotations Multilingual animacy detection BIBREF27 might help with this challenge coreference information might additionally help For each language we trained the BPERNNLM baseline openvocabulary language model of BIBREF28 using the original corpus the corpus following CDA using nave swapping of gendered words and the corpus following CDA using our approach We then computed gender stereotyping and grammaticality as described above We provide example phrases in tablm we provide a more extensive list of phrases in appqueries figbias demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus the corpus following CDA using nave swapping of gendered words and the corpus following CDA using our approach It is immediately apparent that our approch reduces gender stereotyping On average our approach reduces gender stereotyping by a factor of 25 the lowest and highest factors are 12 Ita and 50 Esp respectively We expected that nave swapping of gendered words would also reduce gender stereotyping Indeed we see that this simple heuristic reduces gender stereotyping for some but not all of the languages For Spanish we also examine specific words that are stereotyped toward men or women We 
define a word to be stereotyped toward one gender if 75 of its occurrences are of that gender figespbias suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women The grammaticality of the corpora following CDA differs between languages That said with the exception of Hebrew our approach either sacrifices less grammaticality than nave swapping of gendered words and sometimes increases grammaticality over the original corpus Given that we know the model did not perform as accurately for Hebrew see tabintrinsic this finding is not surprising In contrast to previous work we focus on mitigating gender stereotypes in languages with rich morphologyspecifically languages that exhibit gender agreement To date the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English For example BIBREF5 proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings BIBREF10 studied gender stereotypes in language models and BIBREF13 introduced a novel Winograd schema for evaluating gender stereotypes in coreference resolution The most closely related work is that of BIBREF9 who used CDA to reduce gender stereotypes in coreference resolution however their approach yields ungrammatical sentences in morphologically rich languages Our approach is specifically intended to yield grammatical sentences when applied to such languages BIBREF29 also focused on morphologically rich languages specifically Arabic but in the context of gender identification in machine translation We presented a new approach for converting between masculineinflected and feminineinflected noun phrases in morphologically rich languages To do this we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morphosyntactic agreement when altering the grammatical gender of particular nouns To the best of our knowledge this task has not been studied previously As a result there is no existing annotated corpus of paired sentences that can be used as ground truth Despite this limitation we evaluated our approach both intrinsically and extrinsically achieving promising results For example we demonstrated that our approach reduces gender stereotyping in neural language models Finally we also identified avenues for future work such as the inclusion of coreference information The last author acknowledges a Facebook Fellowship Our belief propagation update equations are DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 returns the set of neighbouring nodes of node INLINEFORM1 The belief at any node is given by DISPLAYFORM0 tabfem and tabmasc contain the feminine and masculine translations of the four adjectives that we used For each noun in our animacy gazetteer we generated sixteen phrases Consider the noun engineer as an example We created four phrasesone for each translation of The good engineer The bad engineer The smart engineer and The beautiful engineer These phrases as well as their prefix loglikelihoods are provided below in tabquery </s>
<s>[INST] Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study [/INST] Humans deploy structuresensitive expectations to guide processing during natural language comprehension BIBREF0 While it has been shown that neural language models show similar structuresensitivity in their predictions about upcoming material BIBREF1 BIBREF2 previous work has focused on dependencies that are conditioned by features attached to a single word such as subject number BIBREF3 BIBREF4 or whquestion words BIBREF5 There has been no systematic investigation into models ability to compute phraselevel featuresfeatures that are attached to a set of wordsand whether models can deploy these more abstract properties to drive downstream expectations In this work we assess whether stateoftheart neural models can compute and employ phraselevel gender and number features of coordinated subject Noun Phrases CoordNPs with two nouns Typical syntactic phrases are endocentric they are headed by a single child whose features determine the agreement requirements for the entire phrase In Figure FIGREF1 for example the word star heads the subject NP The star since star is singular the verb must be singular CoordNPs lack endocentricity neither conjunct NP solely determines the features of the NP as a whole Instead these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator In Figure FIGREF1 because the coordinator is and the subject NP number is plural even though both conjuncts the star and the moon are singular As this case demonstrates the agreement behavior for CoordNPs must be driven by more abstract constituentlevel representations and cannot be reduced to features hosted on a single lexical item We use four suites of experiments to assess whether neural models are able to build up phraselevel representations of CoordNPs on the fly and deploy them to drive humanlike behavior First we present a simple control experiment to show that models can represent number and gender features of noncoordinate NPs Noncoordination Agreement Second we show that models modulate their expectations for downstream verb number based on the CoordNPs coordinating conjunction combined with the features of the coordinated nouns Simple Coordination We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity Complex Coordination The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias trained on a relatively small amount of text are able to learn finegrained and robust generalizations about the interaction between CoordNPs and local syntactic context Finally we use subjectauxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasallevel features of a downstream CoordNP Inverted Coordination Here we find that all models are insensitive to the finegrained features of this particular syntactic context Overall our results indicate that neural models can learn finegrained information about the interaction of Coordinated NPs and local syntactic context but their behavior remains unhumanlike in many key respects To determine whether stateoftheart neural architectures are capable of learning humanlike CoordNPverb agreement properties we adopt the psycholinguistics paradigm for model assessment In this paradigm the models are tested 
using handcrafted sentences designed to test underlying network knowledge The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training its expectations for upcoming words should qualitatively match human expectations in novel contexts For example BIBREF1 and BIBREF6 assessed how well neural models had learned the subjectverb number agreement by feeding them with the prefix The keys to the cabinet If the models predicted the grammatical continuation are over the ungrammatical continuation is they can be said to have learned the number agreement insofar as the number of the head noun and not the number of the distractor noun cabinet drives expectations about the number of the matrix verb If models are able to robustly modulate their expectations based on the internal components of the CoordNP this will provide evidence that the networks are building up a contextsensitive phraselevel representation We quantify model expectations as surprisal values Surprisal is the negative logconditional probability Sxi log 2 pxix1 dots xi1 of a sentences ith word xi given the previous words Surprisal tells us how strongly xi is expected in context and is known to correlate with human processing difficulty BIBREF7 BIBREF0 BIBREF8 In the CoordNPVerb agreement studies presented here cases where the proceeding context sets high expectation for a numberinflected verb form wi eg singular is we would expect Swi to be lower than its numbermismatched counterpart eg plural are are trained to output the probability distribution of the upcoming word given a context without explicitly representing the structure of the context BIBREF9 BIBREF10 We trained two twolayer recurrent neural language models with long shortterm memory architecture BIBREF11 on a relatively small corpus The first model referred as LSTM PTB in the following sections was trained on the sentences from Penn Treebank BIBREF12 The second model referred as LSTM FTB was trained on the sentences from French Treebank BIBREF13 We set the size of input word embedding and LSTM hidden layer of both models as 256 We also compare LSTM language models trained on large corpora We incorporate two pretrained English language models one trained on the Billion Word benchmark referred as LSTM 1B from BIBREF14 and the other trained on English Wikipedia referred as LSTM enWiki from BIBREF3 For French we trained a large LSTM language model referred as LSTM frWaC on a random subset about 4 million sentences 138 million word tokens of the frWaC dataset BIBREF15 We set the size of the input embeddings and hidden layers to 400 for the LSTM frWaC model since it is trained on a large dataset models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrasestructure parse BIBREF16 The action space consists of three possibilities open a new nonterminal node and opening bracket generate a terminal node and close a bracket To compute surprisal values for a given token we approximate Pwiw1cdots i1 by marginalizing over the mostlikely partial parses found by wordsynchronous beam search BIBREF17 jointly model the word sequence as well as the underlying syntactic structure BIBREF18 Following BIBREF19 we estimate surprisal using wordsynchronous beam search BIBREF17 We use the same hyperparameter settings as BIBREF18 The annotation schemes used to train the syntacticallysupervised models differ slightly between French and English In the PTB English CoordNPs are flat structures 
bearing an NP label In FTB French CoordNPs are binarybranching labeled as NPs except for the phrasal node dominating the coordinating conjunction which is labeled COORD We examine the effects of annotation schemes on model performance in Appendix SECREF8 In order to provide a baseline for following experiments here we assess whether the models tested have learned basic representations of number and gender features for noncoordinated Noun Phrases We test number agreement in English and French as well as gender agreement in French Both English and French have two grammatical number feature singular sg and plural pl French has two grammatical gender features masculine m and feminine f The experimental materials include sentences where the subject NPs contain a single noun which can either match with the matrix verb in the case of number agreement or a following predicative adjective in the case of gender agreement Conditions are given in Table TABREF9 and Table TABREF10 We measure model behavior by computing the plural expectation or the surprisal of the singular continuation minus the surprisal of the plural continuation for each condition and took the average for each condition We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions For gender expectation we compute a gender expectation which is Sfeminine continuation Smasculine continuation We measure surprisal at the verbs and predicative adjectives themselves The results for this experiment are in Figure FIGREF11 with the plural expectation and gender expectation on the yaxis and conditions on the xaxis For this and subsequent experiments error bars represent 95 confidence intervals for acrossitem means For number agreement all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular For gender agreement however only the LSTM frWaC shows modulation of gender expectation based on the gender of the head noun This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus In this section we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrasethe coordinated nouns as well as the coordinating conjunctionto drive downstream expectations We test number agreement in both English and French and gender agreement in French In simple subjectverb number agreement the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs CoordNPs formed by and are plural and thus require plural verbs CoordNPs formed by or allow either plural or singular verbs often with the number features of the noun linearly closest to the verb playing a more important role although this varies crosslinguistically BIBREF20 Forcedchoice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an orCoordNP is singular and plural agreement when the closest conjunct is plural In French both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22 In order to assess whether the neural models learn the basic CoordNP licensing for English we adapted 37 items from BIBREF21 along the 16 conditions outlined in Table TABREF14 Test items consist of the sentence preamble followed by either the singular or plural BE verb half the time in present tense 
isare and half the time in past tense waswere We measured the plural expectation following the procedure in Section SECREF3 We created 24 items using the same conditions as the English experiment to test the models trained in French using the 3rd person singular and plural form of verb aller to go va vont Within each item nouns match in gender across all conditions half the nouns are masculine half feminine The results for this experiment can be seen in Figure FIGREF12 with the results for English on the left and French on the right The results for and are on the top row or on the bottom row For all figures the yaxis shows the plural expectation or the difference in surprisal between the singular condition and the plural condition Turning first to Englishand Figure FIGREF12 all models show plural expectation the bars are significantly greater than zero in the plandpl and sgandpl conditions as expected For the plandsg condition only the LSTM enWiki and ActionLSTM are greater than zero indicating humanlike behavior For the sgandsg condition only the LSTM enWiki model shows the correct plural expectation For the Frenchand Figure FIGREF12 all models show positive plural expectation in all conditions as expected except for the LSTM FTB in the sgandsg condition Examining the results for Englishor we find that all models demonstrate humanlike expectation in the plorpl and sgorpl conditions The LSTM 1B LSTM PTB and RNNG models show zero or negative singular expectation for the plorsg conditions as expected However the LSTM enWiki and ActionLSTM models show positive plural expectation in this condition indicating that they have not learned the humanlike generalizations All models show significantly negative plural expectation in the sgorsg condition as expected In the Frenchor cases models show almost identical behavior to the and conditions except the LSTM frWaC shows smaller plural expectation when singular nouns are linearly proximal to the verb These results indicate moderate success at learning coordinate NP agreement however this success may be the result of an overly simple heuristic It appears that expectation for both plural and masculine continuations are driven by a linear combination of the two nominal numbergender features transferred into logprobability space with the earlier noun mattering less than the later noun A model that optimally captures human grammatical preferences should show no or only slight difference across conditions in the surprisal differential for the and conditions and be greater than zero in all cases Yet all the models tested show gradient performance based on the number of plural conjuncts In French if two nouns are coordinated with et andcoordination agreement must be masculine if there is one masculine element in the coordinate structure If the nouns are coordinated with ou orcoordination both masculine and feminine agreement is acceptable BIBREF23 BIBREF24 Although linear proximity effects have been tested for a number of languages that employ grammatical gender as in eg Slavic languages BIBREF25 there is no systematic study for French To assess whether the French neural models learned humanlike gender agreement we created 24 test items following the examples in Table TABREF16 and measured the masculine expectation In our test items the coordinated subject NP is followed by a predicative adjective which either takes on masculine or feminine gender morphology Results from the experiment can be seen in Figure FIGREF17 No models shows qualitative difference 
based on the coordinator, and only the LSTM frWaC shows a significant behavioral difference between conditions. Here we find positive masculine expectation in the m-and-m and f-and-m conditions and negative masculine expectation in the f-and-f condition, as expected. However, in the m-and-f condition the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM FTB and ActionLSTM models, the masculine expectation is positive, although not significantly so, in all conditions, consistent with the results in Section SECREF3. One possible explanation for the results presented in the previous section is that the models are using a "bag of features" approach to plural and masculine licensing that is opaque to syntactic context: following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural in proportion to the number of plural nouns. In this section, we control for this potential confound by conducting two experiments. In the Complex Coordination Control experiments, we assess the models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that, when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features but are building up CoordNP representations on the fly, conditioned on the local syntactic context. Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French from Section SECREF13, following the conditions in Table TABREF19 (for number agreement, testing only and-coordination). For gender agreement, we use the same test items and conditions for and-coordination as in Section SECREF15, but with the coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations, and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates. Je croyais que les prix et les dépenses étaient importants/importantes. ("I thought that the.pl price.m.pl and the.pl expense.f.pl were important.m.pl/important.f.pl"; "I thought that the prices and the expenses were important.")
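For concreteness, here is a minimal sketch of how the surprisal-based expectation measures used throughout these experiments could be computed; the `token_logprob(prefix, word)` interface is a hypothetical stand-in for one of the language models (natural-log probability of `word` immediately after `prefix`), not the authors' evaluation code.

```python
import math

def surprisal(prefix_tokens, continuation, token_logprob):
    """Surprisal (in bits) of a one-word continuation given a sentence prefix.

    token_logprob(prefix, word) is assumed to return the natural-log probability
    the language model assigns to `word` immediately after `prefix`.
    """
    return -token_logprob(prefix_tokens, continuation) / math.log(2)

def plural_expectation(prefix_tokens, singular_verb, plural_verb, token_logprob):
    """S(singular continuation) - S(plural continuation); positive values mean
    the model expects the plural verb form more strongly."""
    return (surprisal(prefix_tokens, singular_verb, token_logprob)
            - surprisal(prefix_tokens, plural_verb, token_logprob))

# Illustrative usage (hypothetical scorer, simple-coordination prefix):
# plural_expectation("The star and the moon".split(), "is", "are", my_token_logprob)
```

The gender expectation is computed analogously, contrasting the feminine and masculine forms of the predicative adjective.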
The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row, and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly to how they do in simple coordination contexts: all models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg-and-sg case. The French number agreement tests show similar results, with all models except LSTM FTB demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM frWaC shows sensitivity to the various conditions, with positive masculine expectation in the m-and-m condition and negative expectation in the f-and-f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments, in this case to sentential embeddings. Interestingly, for some models, such as the LSTM 1B, behavior is more human-like when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer that to drive more robust expectations. In order to assess whether the models' strategy for CoordNP-verb number agreement is sensitive to syntactic context, we contrast the results presented above with those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations (are or is), the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in "I fixed the doors and the windows are still broken." For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular. We created 37 test items in both English and French and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP-verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural and negative otherwise. Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: for the f-and-m and m-and-m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology; conversely, for the m-and-f or f-and-f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations. Nous avons accepté les prix et les dépenses étaient importants/importantes. ("We have accepted the.pl price.m.pl and the expense.f.pl were important.m.pl/important.f.pl"; "We have accepted the prices and the expenses were important.") The results from the critical experiments are in Figure FIGREF21, with English number agreement on the top row, French number agreement in the middle, and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun
is plural blue and green bars as they do in the control experiments Crucially when the second noun is singular the plural expectation is significantly negative for all models save for the French LSTM FTB plandsg condition Turning to gender agreement only the LSTM frWaC model shows differentiation between the four conditions tested However whereas the fandm and mandf gender expectations are not significantly different from zero in the control condition in the critical condition they pattern with the purely masculine and purely feminine conditions indicating that in this syntactic context the model has successfully learned to base gender expectation solely off of the second noun These results are inconsistent with a simple bag of features strategy that is insensitive to local syntactic context They indicate that both models can interpret the same string as either a coordinated noun phrase or as an NP object and the start of a coordinated VP with the second NP as its subject In addition to using phraselevel features to drive expectation about downstream lexical items human processors can do the inverseuse lexical features to drive expectations about upcoming syntactic chunks In this experiment we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs To assess whether neural language models learn inverted coordination rules we adapted items from Section SECREF13 in both English 37 items and French 24 items following the paradigm in Table TABREF24 The first part of the phrase contains either a plural or singular verb and a plural or singular noun In this case we sample the surprisal for the continuations and or is grammatical in all conditions so it is omitted from this study Our expectation is that and is less surprising in the VplNsg condition than in the VsgNsg condition where a CoordNP is not licensed by the grammar in either French or English as in What is the pig and the cat eating We also expect lower surprisal for and in the VplNsg condition where it is obligatory for a grammatical continuation than in the VplNpl condition where it is optional For French experimental items the question is embedded into a sententialcomplement taking verb following Example SECREF6 due to the fact that unembedded subjectverb inverted questions sound very formal and might be relatively rare in the training data Je me demande o vont le maire et I myself ask where go3PL theMSG mayorMSG and The results for both languages are shown in Figure FIGREF25 with the surprisal at the coordinator on the yaxis and the various conditions on the xaxis No model in either language shows a signficant difference in surprisal between the VplNsg and VplNpl conditions or between the VplNsg and VsgNsg conditions The LSTM 1B shows significant difference between the VplNsg and VplNpl conditions but in the opposite direction than expected with the coordinator less surprising in the latter condition These results indicate that the models are unable to use the finegrained context sensitivity to drive expectations for CoordNPs at least in the inversion setting The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn longdistance syntactic dependencies in impressive detail Our results provide complimentary insights demonstrating that a range of model architectures 
trained on a variety of datasets can learn finegrained information about the interaction of CoordNPs and local syntactic context but their behavior remains unhumanlike in many key ways Furthermore to our best knowledge this work presents the first psycholinguistic analysis of neural language models trained on French a highresource language that has so far been underinvestigated in this line of research In the simple coordination experiment we demonstrated that models were able to capture some of the agreement behaviors of humans although their performance deviated in crucial aspects Whereas human behavior is best modeled as a percolation process the neural models appear to be using a linear combination of NP constituent number to drive CoordNPverb number agreement with the second noun weighted more heavily than the first In these experiments supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes The complex coordination experiments provided evidence that the neural models tested were not using a simple bag of features strategy but were sensitive to syntactic context All models tested were able to interpret material that had similar surface form in ways that corresponded to two different treestructural descriptions based on local context The inverted coordination experiment provided a contrasting example in which models were unable to modulate expectations based on subtleties in the syntactic environment Across all our experiments the French models performed consistently better on subjectverb number agreement than on subjectpredicate gender agreement Although there are likely more examples of subjectverb number agreement in the French training data gender agreement is syntactically mandated and widespread in French It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subjectpredicate expectations This project is supported by a grant of Labex EFL ANR10LABX0083 and Idex ANR18IDEX0001 for AA and MITIBM AI Laboratory and the MITSenseTimeAlliance on Artificial Intelligence for RPL We would like to thank the anonymous reviewers for their comments and Anne Abeill for her advice and feedback This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurallysupervised models We test whether an explicit COORD phrasal tag improves model performance We trained two additional RNNG models on 38546 sentences from the Penn Treebank annotated with two different schemes The first RNNG PTBcontrol was trained with the original Penn Treebank annotation The second RNNG PTBcoord was trained on the same sentences but with an extended coordination annotation scheme meant to employ the scheme employed in the FTB adapted from BIBREF26 We stripped empty categories from their scheme and only kept the NPCOORD label for constituents inside a coordination structure Figure FIGREF26 illustrates the detailed annotation differences between two datasets We tested both models on all the experiments presented in Sections SECREF3SECREF6 above Turning to the results of these six experiments We see little difference between the two models in the Noncoordination agreement experiment For the Complex coordination control and Complex coordination critical experiments both models are largely the same as well However in the Simple andcoordination and Simple orcoordination experiments the values for 
all conditions are shifted upwards for the RNNG PTBcoord model indicating higher overall preference for the plural continuation Furthermore the range of values is reduced in the RNNG PTBcoord model compared to the RNNG PTBcontrol model These results indicate that adding an explicit COORD phrasal label does not drastically change model performance Both models still appear to be using a linear combination of number features to drive plural vs singular expectation However the explicit representation has made the interior of the coordination phrase more opaque to the model each feature matters less and has slightly shifted model preference towards plural continuations In this sense the PTBcoord model may have learned a generalization about CoordNPs but this generalization remains unlike the ones learned by humans We present statistics of subjectpredicate agreement patterns in the Penn Treebank PTB and French Treebank FTB in Table TABREF28 and TABREF29 </s>
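For concreteness, the surprisal-based expectation measures used throughout the experiments above reduce to a difference of surprisals between the two candidate continuations. The sketch below assumes log-probabilities are available from a language model and adopts the convention that positive values indicate a preference for the plural (or masculine) continuation; the numbers shown are hypothetical.

```python
import math

def surprisal(logprob):
    # Surprisal in bits from a natural-log probability.
    return -logprob / math.log(2)

def expectation(logp_preferred, logp_dispreferred):
    # Difference of surprisals: positive when the "preferred" continuation
    # (plural verb, or masculine adjective) is less surprising to the model.
    return surprisal(logp_dispreferred) - surprisal(logp_preferred)

# Hypothetical log-probabilities of "are" vs. "is" after a coordinated subject:
plural_expectation = expectation(logp_preferred=-2.1, logp_dispreferred=-4.3)
print(plural_expectation)  # > 0 means the model expects the plural verb
```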
<s>[INST] Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation [/INST] Spoken dialogue systems that can help users to solve complex tasks have become an emerging research topic in artificial intelligence and natural language processing areas BIBREF0 BIBREF1 BIBREF2 BIBREF3 With a welldesigned dialogue system as an intelligent personal assistant people can accomplish certain tasks more easily via natural language interactions Today there are several virtual intelligent assistants such as Apples Siri Googles Home Microsofts Cortana and Amazons Alexa in the market A typical dialogue system pipeline can be divided into several parts a recognized result of a users speech input is fed into a natural language understanding module NLU to classify the domain along with domainspecific intents and fill in a set of slots to form a semantic frame BIBREF4 BIBREF5 BIBREF6 A dialogue state tracking DST module predicts the current state of the dialogue by means of the semantic frames extracted from multiturn conversations Then the dialogue policy determines the system action for the next step given the current dialogue state Finally the semantic frame of the system action is then fed into a natural language generation NLG module to construct a response utterance to the user BIBREF7 BIBREF8 As a key component to a dialogue system the goal of NLG is to generate natural language sentences given the semantics provided by the dialogue manager to feedback to users As the endpoint of interacting with users the quality of generated sentences is crucial for better user experience The common and mostly adopted method is the rulebased or templatebased method BIBREF9 which can ensure the natural language quality and fluency In spite of robustness and adequacy of the rulebased methods frequent repetition of identical tedious output makes talking to a templatebased machine unsatisfactory Furthermore scalability is an issue because designing sophisticated rules for a specific domain is timeconsuming BIBREF10 Recurrent neural networkbased language model RNNLM have demonstrated the capability of modeling longterm dependency in sequence prediction by leveraging recurrent structures BIBREF11 BIBREF12 Previous work proposed an RNNLMbased NLG that can be trained on any corpus of dialogue actutterance pairs without handcrafted features and any semantic alignment BIBREF13 The following work based on sequencetosequence seq2seq further obtained better performance by employing encoderdecoder structure with linguistic knowledge such as syntax trees BIBREF14 BIBREF15 BIBREF16 BIBREF17 However due to grammar complexity and lack of diction knowledge it is still challenging to generate long and complex sentences by a simple encoderdecoder structure To address the issue previous work attempted separating decoding jobs in a decoding hierarchy which is constructed in terms of partofspeech POS tags BIBREF8 The original single decoding process is separated into a multilevel decoding hierarchy where each decoding layer generates words associated with a specific POS set This paper extends the idea to a more flexible design by incorporating attention mechanisms into the decoding hierarchy Because prior work designs the decoding hierarchy in a handcrafted manner based on a subjective intuition BIBREF8 in this work we experiment on various generating hierarchies to investigate the importance of linguistic pattern ordering in hierarchical language generation The experiments show that our proposed method outperforms 
the classic seq2seq model with a smaller model size in addition the concept of the hierarchical decoder is proven general enough for various generating hierarchies Furthermore this paper also provides the design guidelines and insights of designing the decoding hierarchy The framework of the proposed hierarchical NLG model is illustrated in Figure FIGREF2 where the model architecture is based on an encoderdecoder seq2seq structure with attentional hierarchical decoders BIBREF14 BIBREF15 In the encoderdecoder architecture a typical generation process includes encoding and decoding phases First a given semantic representation sequence INLINEFORM0 is fed into a RNNbased encoder to capture the temporal dependency and project the input to a latent feature space the semantic representation sequence is also encoded into an onehot representation as the initial state of the encoder in order to maintain the temporalindependent condition as shown in the left part of Figure FIGREF2 The recurrent unit of the encoder is bidirectional gated recurrent unit GRU BIBREF14 DISPLAYFORM0 Then the encoded semantic vector INLINEFORM0 is fed into an RNNbased decoder as the initial state to decode word sequences as shown in the right part of Figure FIGREF2 In spite of the intuitive and elegant design of the seq2seq model it is still difficult to generate complex and decent sequences by a simple encoderdecoder structure because a single decoder is not capable of learning all diction grammar and other related linguistic knowledge at the same time Some prior work applied additional techniques such as reranker and beamsearch to select a better result among multiple generated sequences BIBREF13 BIBREF16 However it is still an unsolved issue to the NLG community Therefore we propose a hierarchical decoder to address the above issue where the core idea is to allow the decoding layers to focus on learning different types of patterns instead of learning all relevant knowledge together The hierarchical decoder is composed of several decoding layers each of which is only responsible for learning a portion of the required knowledge Namely the linguistic knowledge can be incorporated into the decoding process and divided into several subsets We use partofspeech POS tags as the additional linguistic features to construct the decoding hierarchy in this paper where POS tags of the words in the target sentence are separated into several subsets and each layer is responsible for decoding the words associated with a specific set of POS patterns An example is shown in the right part of Figure FIGREF2 where the first layer at the bottom is in charge of decoding nouns pronouns and proper nouns and the second layer is for verbs and so on The prior work manually designed the decoding hierarchy by considering the subjective intuition about how children learn to speak BIBREF8 infants first learn to say keywords which are often nouns For example when an infant says Daddy toilet it actually means Daddy I want to go to the toilet Along with the growth of the age children learn more grammars and vocabulary and then start adding verbs to the sentences further adding adverbs and so on However the handcrafted linguistic order may not be optimal so we experiment and analyze the model on various generating linguistic hierarchies to deeply investigate the effect of linguistic pattern ordering In the hierarchical decoder the initial state of each GRUbased decoding layer INLINEFORM0 is the extracted feature INLINEFORM1 from the encoder and the input at 
every step is the last predicted token INLINEFORM2 concatenated with the output from the previous layer INLINEFORM3 DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 th hidden state of the INLINEFORM2 th GRU decoding layer and INLINEFORM3 is the INLINEFORM4 th outputted word in the INLINEFORM5 th layer We use the cross entropy loss as our training objective for optimization where the difference between the predicted distribution and target distribution is minimized To facilitate training and improve the performance several strategies including scheduled sampling a repeat input mechanism curriculum learning and an attention mechanism are utilized Teacher forcing BIBREF18 is a strategy for training RNN that uses model output from a prior time step as an input and it works by using the expected output at the current time step INLINEFORM0 as the input at the next time step rather than the output generated by the network The teacher forcing techniques can also be triggered only with a certain probability which is known as the scheduled sampling approach BIBREF19 We adopt scheduled sampling methods in our experiments In the proposed framework an input of a decoder contains not only the output from the last step but one from the last decoding layer Therefore we design two types of scheduled sampling approaches innerlayer and interlayer Innerlayer schedule sampling is the classic teacher forcing strategy DISPLAYFORM0 Interlayer schedule sampling uses the labels instead of the actual output tokens of the last layer DISPLAYFORM0 The proposed hierarchical decoder consists of several decoding layers the expected output sequences of upper layers are longer than the ones in the lower layers The framework is suitable for applying the curriculum learning BIBREF20 of which core concept is that a curriculum of progressively harder tasks could significantly accelerate a networks training The training procedure is to train each decoding layer for some epochs from the bottommost layer to the topmost one The concept of the hierarchical decoding is to hierarchically generate the sequence gradually adding words associated with different linguistic patterns Therefore the generated sequences from the decoders become longer as the generating process proceeds to the higher decoding layers and the sequence generated by a upper layer should contain the words predicted by the lower layers To facilitate the behavior previous work designs a strategy that repeats the outputs from the last layer as inputs until the current decoding layer outputs the same token socalled the repeatinput mechanism BIBREF8 This approach offers at least two merits 1 Repeating inputs tells the decoder that the repeated tokens are important to encourage the decoder to generate them 2 If the expected output sequence of a layer is much shorter than the one of the next layer the large difference in length becomes a critical issue of the hierarchical decoder because the output sequence of a layer will be fed into the next layer With the repeatinput mechanism the impact of length difference can be mitigated In order to model the relationship between layers in a generating hierarchy we further design attention mechanisms for the hierarchical decoder The proposed attention mechanisms are contentbased which means the weights are determined based on hidden states of neural models DISPLAYFORM0 where INLINEFORM0 is the hidden state at the current step INLINEFORM1 are the hidden states from the previous decoder layer and INLINEFORM2 is a learned weight matrix 
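As an illustration of the content-based attention just described, the sketch below scores the current decoder state against the hidden states of the previous decoding layer with a learned weight matrix and returns a context vector. The bilinear form of the score and the tensor shapes are assumptions for illustration, since the exact formulation is not fully specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterLayerAttention(nn.Module):
    # Content-based attention over the previous decoding layer's hidden states.
    # The bilinear score h_t^T W h_s is an assumption; the text only states
    # that a learned weight matrix relates the two sets of hidden states.
    def __init__(self, hidden_size):
        super().__init__()
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, h_t, prev_layer_states):
        # h_t: (batch, hidden) current decoder hidden state
        # prev_layer_states: (batch, length, hidden) states of the previous layer
        scores = torch.bmm(prev_layer_states, self.W(h_t).unsqueeze(2)).squeeze(2)
        attn = F.softmax(scores, dim=-1)                        # attention values
        context = torch.bmm(attn.unsqueeze(1), prev_layer_states).squeeze(1)
        # The context vector is concatenated to the decoder input at this step.
        return context, attn
```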
At each decoding step attention values INLINEFORM3 are calculated by these methods and then used to compute the weighted sum as a context vector which is then concatenated to decoder inputs as additional information The objective of the proposed model is to optimize the conditional probability INLINEFORM0 so that the difference between the predicted distribution and the target distribution INLINEFORM1 can be minimized DISPLAYFORM0 where INLINEFORM0 is the number of samples and the labels INLINEFORM1 are the word labels Each decoder in the hierarchical NLG is trained based on curriculum learning with the objective The E2E NLG challenge dataset BIBREF21 is utilized in our experiments which is a crowdsourced dataset of 50k instances in the restaurant domain Our models are trained on the official training set and verified on the official testing set As shown in Figure FIGREF2 the inputs are semantic frames containing specific slots and corresponding values and the outputs are the associated natural language utterances with the given semantics For example a semantic frame with the slotvalue pairs nameBibimbap House foodEnglish priceRangemoderate area riverside near Clare Hall corresponds to the target sentence Bibimbap House is a moderately priced restaurant whos main cuisine is English food You will find this local gem near Clare Hall in the Riverside area The data preprocessing includes trimming punctuation marks lemmatization and turning all words into lowercase To prepare the labels of each layer within the hierarchical structure of the proposed method we utilize spaCy toolkit to perform POS tagging for the target word sequences Some properties such as names of restaurants are delexicalized for example replaced with a symbol RESTAURANTNAME to avoid data sparsity In our experiments we perform six different generating linguistic orders in which each hierarchy is constructed based on different permutations of the POS tag sets 1 nouns proper nouns and pronouns 2 verbs 3 adjectives and adverbs 4 others The probability of activating interlayer and innerlayer teacher forcing is set to 05 the probability of teacher forcing is attenuated every epoch and the decaying ratio is 09 The models are trained for 20 training epochs without early stop when curriculum learning is applied only the first layer is trained during first five epochs the second decoder layer starts to be trained at the sixth epoch and so on To evaluate the quality of the generated sequences regarding both precision and recall the evaluation metrics include BLEU and ROUGE 1 2 L scores with multiple references BIBREF22 In the experiments we borrow the idea of hierarchical decoding proposed by the previous work BIBREF8 and investigate various extensions of generating hierarchies To examine the effectiveness of hierarchical decoders we control our model size to be smaller than the baselines Specifically the decoder in the baseline seq2seq model has hidden layers of size 400 while our models with hierarchical decoders have four decoding layers of size 100 for fair comparison Table TABREF13 compares the performance between a baseline and proposed models with different generating linguistic orders For all generating hierarchies with different orders simply replacing the decoder by a hierarchical decoder achieves significant improvement in every evaluation metrics for example the topmost generating hierarchy in Table TABREF13 has 4925 improvement in BLEU 3003 in ROUGE1 9648 in ROUGE2 and 2599 in ROUGEL respectively In other words separating 
the generation process into several phases is proven to be a promising method Performing curriculum learning strategy offers a considerable improvement take the topmost generating hierarchy in Table TABREF13 for example this method yields a 10207 improvement in BLEU 4826 in ROUGE1 1448 in ROUGE2 and 3918 in ROUGEL Despite that applying repeatinput mechanism alone does not offer benefit combining these two strategies together further achieves the best performance Note that these methods do not require any additional parameters Unfortunately even some of the attentional hierarchical decoders achieve the best results in the generating hierarchies Table TABREF18 Mostly the additional attention mechanisms are not capable of bringing benefit for model performance The reason may be that the decoding process is designed for gradually importing words in the specific set of linguistic patterns to the output sequence each decoder layer is responsible of copying the output tokens from the previous layer and insert new words into the sequence precisely Because of this nature a decoder needs explicit information of the structure of a sentence rather than implicit highlevel latent information For instance when a decoder is trying to insert some Verb words into the output sequence knowing the position of subject and object would be very helpful The above results show that among these six different generating hierarchy the generating order 1 verbs INLINEFORM0 2 nouns proper nouns and pronouns INLINEFORM1 3 adjectives and adverbs INLINEFORM2 4 the other POS tags yields the worst performance Table TABREF23 shows that the gap of average length of target sequences between the first and the second decoder layer is the largest among all the hierarchies in average the second decoder needs to insert up to 8 words into the sequence based on 362 words from the first decoder layer in this generation process which is absolutely difficult The essence of the hierarchical design is to separate the job of the decoder into several phases if the job of each phase is balanced it is intuitive that it is more suitable for applying curriculum learning and improve the model performance The model performance is also related to linguistic structures of sentences the fifth and the sixth generating hierarchies in Table TABREF13 have very similar trends where the length of target sentences of each decoder layer is almost identical as shown in Table TABREF23 However the model performance differs a lot An adverb word could be used to modify anything but nouns and pronouns which means that the number of adverbs used for modifying verbs would be a factor to determine the generating order as well In our cases almost all adverbs in the dataset are used to describe adjectives indicating that generating verbs before inserting adverbs to sequences may not provide enough useful information instead it would possibly obstruct the model learning We can also find that in all experiments inserting adverbs before verbs would be better In summary the concept of the hierarchical decoder is simple and useful separating a difficult job to many phases is demonstrated to be a promising direction and not limited to a specific generating hierarchy Furthermore the generating linguistic orders should be determined based on the dataset and the important factors include the distribution over length of subsequences and the linguistic nature of the dataset for designing a proper generating hierarchy in NLG This paper investigates the seq2seqbased model with a 
hierarchical decoder that leverages various linguistic patterns The experiments on different generating linguistic orders demonstrates the generalization about the proposed hierarchical decoder which is not limited to a specific generating hierarchy However there is no universal decoding hierarchy while the main factor for designing a suitable generating order is the nature of the dataset We would like to thank reviewers for their insightful comments on the paper This work was financially supported by Ministry of Science and Technology MOST in Taiwan </s>
<s>[INST] Deep Enhanced Representation for Implicit Discourse Relation Recognition [/INST] This work is licenced under a Creative Commons Attribution 40 International Licence Licence details httpcreativecommonsorglicensesby40 Discourse parsing is a fundamental task in natural language processing NLP which determines the structure of the whole discourse and identifies the relations between discourse spans such as clauses and sentences Improving this task can be helpful to many downstream tasks such as machine translation BIBREF0 question answering BIBREF1 and so on As one of the important parts of discourse parsing implicit discourse relation recognition task is to find the relation between two spans without explicit connectives eg but so etc and needs recovering the relation from semantic understanding of texts The Penn Discourse Treebank 20 PDTB 20 BIBREF2 is a benchmark corpus for discourse relations In PDTB style the connectives can be explicit or implicit and one entry of the data is separated into Arg1 and Arg2 accompanied with a relation sense Since the release of PDTB 20 dataset many methods have been proposed ranging from traditional featurebased methods BIBREF3 BIBREF4 to latest neuralbased methods BIBREF5 BIBREF6 Especially through many neural network methods used for this task such as convolutional neural network CNN BIBREF7 recursive neural network BIBREF8 embedding improvement BIBREF9 attention mechanism BIBREF10 gate mechanism BIBREF11 multitask method BIBREF6 the performance of this task has improved a lot since it was first introduced However this task is still very challenging with the highest reported accuracy still lower than 50 due to the hardness for the machines to understand the text meaning and the relatively small task corpus In this work we focus on improving the learned representations of sentence pairs to address the implicit discourse relation recognition It is well known that text representation is the core part of stateoftheart deep learning methods for NLP tasks and improving the representation from all perspective will benefit the concerned task The representation is improved by two ways in our model through threelevel hierarchy The first way is embedding augmentation Only with informative embeddings can the final representations be better This is implemented in our wordlevel module We augment word embeddings with subwordlevel embeddings and pretrained ELMo embeddings Subwords coming from unsupervised segmentation demonstrate a better consequent performance than characters for being a better minimal representation unit The pretrained contextualized word embeddings ELMo can make the embeddings contain more contextual information which is also involved with characterlevel inputs The second way is a deep residual biattention encoder Since this task is about classifying sentence pairs the encoder is implemented in sentence and sentencepair levels A deeper model can support richer representations but is hard to train especially with a small dataset So we apply residual connections BIBREF12 to each module for facilitating signal propagation and alleviating gradient degradation The stacked encoder blocks make the single sentence representation richer and biattention module mixes two sentence representations focusingly With introducing richer and deeper representation enhancement we report the deepest model so far for the task Our representation enhanced model will be evaluated on the benchmark PDTB 20 and demonstrate stateoftheart performance to verify its 
effectiveness This paper is organized as follows Section 2 reviews related work Section 3 introduces our model Section 4 shows our experiments and analyses the results Section 5 concludes this work After the release of Penn Discourse Treebank 20 many works have been made to solve this concerned task linkanng2009EMNLP is the first work who considered the secondlevel classification of the task by empirically evaluating the impact of surface features Feature based methods BIBREF4 BIBREF13 BIBREF14 BIBREF15 mainly focused on using linguistic or semantic features from the discourse units or the relations between unit pairs and word pairs zhangEtAl2015EMNLP4 is the first one who modeled this task using endtoend neural network and gained great performance improvement Neural network methods also used by lots of works BIBREF16 BIBREF17 for better performance Since then a lot of methods have been proposed braud2015comparing found that word embeddings trained by neural networks is very useful to this task qinzhangzhao2016COLING augmented their system with characterlevel and contextualized embeddings Recurrent networks and convolutional networks have been used as basic blocks in many works BIBREF18 BIBREF19 BIBREF7 TACL536 used recursive neural networks Attention mechanism was used by liuli2016EMNLP2016 cai2017discourse and others wuEtAl2016EMNLP2016 and lanEtAl2017EMNLP20172 applied multitask component qinEtAl2017Long utilized adversarial nets to migrate the connectivebased features to implicit ones Sentence representation is a key component in many NLP tasks Usually better representation means better performance Plenty of work on language modeling has been done as language modeling can supply better sentence representations Since the pioneering work of Bengio2006 neural language models have been well developed BIBREF20 BIBREF21 BIBREF22 Sentence representation is directly handled in a series of work lin2017structured used self attention mechanism and used matrix to represent sentence and conneauEtAl2017EMNLP2017 used encoders pretrained on SNLI BIBREF23 and MultiNLI BIBREF24 Different from all the existing work for the first time to our best knowledge this work is devoted to an empirical study on different levels of representation enhancement for implicit discourse relation classification task Figure 1 illustrates an overview of our model which is mainly consisted of three parts wordlevel module sentencelevel module and pairlevel module Token sequences of sentence pairs Arg1 and Arg2 are encoded by wordlevel module first and every token becomes a word embedding augmented by subword and ELMo Then these embeddings are fed to sentencelevel module and processed by stacked encoder blocks CNN or RNN encoder block Every block layer outputs representation for each token Furthermore the output of each layer is processed by biattention module in the pairlevel module and concatenated to pair representation which is finally sent to classifiers which are multiple layer perceptrons MLP with softmax The model details are given in the rest of this section An inputed token sequence of length N is encoded by the wordlevel module into an embedding sequence mathbf e1 mathbf e2 mathbf e3 cdots mathbf eN For each embedded token mathbf ei it is concatenated from three parts mathbf ei mathbf eiw mathbf eis mathbf eic in mathbb Rde Eq 4 mathbf eiw in mathbb Rdw is pretrained word embedding for this token and is fixed during the training procedure Our experiments show that finetuning the embeddings slowed down the training 
without better performance. $\mathbf{e}_i^s \in \mathbb{R}^{d_s}$ is the subword-level embedding produced by the subword encoder, and $\mathbf{e}_i^c \in \mathbb{R}^{d_c}$ is the contextualized word embedding produced by the pretrained ELMo encoder, whose parameters are also fixed during training. Subwords are merged from a single-character segmentation, and the input of the ELMo encoder is also characters. Character-level embeddings have been used widely in many works, and their effectiveness is well verified for out-of-vocabulary (OOV) or rare word representation. However, a character is not a natural minimal unit, since words have internal structure; we therefore introduce a subword-level embedding instead. Subword units can be discovered computationally by unsupervised segmentation over words regarded as character sequences. We adopt the byte pair encoding (BPE) algorithm introduced by sennrichhaddowbirch2016P1612 for this segmentation. BPE segmentation relies on a series of iterative merge operations over the bigrams with the highest frequency, and the number of merge operations is roughly equal to the size of the resulting subword vocabulary. For each word, the subword-level embedding is computed by a subword encoder as in Figure 2. First, the subword sequence of length $n$ of the word is mapped, after padding, to a randomly initialized subword embedding sequence $\mathbf{se}_1, \mathbf{se}_2, \ldots, \mathbf{se}_n$. Then $K$ (we empirically set $K=2$) convolutional operations $\mathrm{Conv}_1, \ldots, \mathrm{Conv}_K$, each followed by max pooling, are applied to the embedding sequence, which is padded before the convolution. For the $i$-th convolution kernel $\mathrm{Conv}_i$ with kernel size $k_i$, let $\mathbf{C}_j$ denote its output on the embedding window starting at $\mathbf{se}_j$. The final output of $\mathrm{Conv}_i$ after max pooling is
$$\mathbf{u}_i = \mathrm{maxpool}(\mathbf{C}_1, \ldots, \mathbf{C}_j, \ldots, \mathbf{C}_n).$$
Finally, the $K$ outputs are concatenated,
$$\mathbf{u} = [\mathbf{u}_1; \mathbf{u}_2; \cdots; \mathbf{u}_K] \in \mathbb{R}^{d_s},$$
and fed to a highway network BIBREF25:
$$\mathbf{g} = \sigma(\mathbf{W}_g \mathbf{u}^T + \mathbf{b}_g) \in \mathbb{R}^{d_s},$$
$$\mathbf{e}_i^s = \mathbf{g} \odot \mathrm{ReLU}(\mathbf{W}_h \mathbf{u}^T + \mathbf{b}_h) + (\mathbf{1} - \mathbf{g}) \odot \mathbf{u} \in \mathbb{R}^{d_s}, \quad \text{(Eq. 6)}$$
where $\mathbf{g}$ denotes the gate, $\mathbf{W}_g \in \mathbb{R}^{d_s \times d_s}$, $\mathbf{b}_g \in \mathbb{R}^{d_s}$, $\mathbf{W}_h \in \mathbb{R}^{d_s \times d_s}$, and $\mathbf{b}_h \in \mathbb{R}^{d_s}$ are parameters, and $\odot$ is element-wise multiplication. The above Eq. 6 gives the subword-level embedding for the $i$-th word.
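The following is a minimal PyTorch sketch of this subword encoder, with parallel convolutions, max pooling and the highway gate of Eq. 6. $K=2$ and $d_s=100$ follow the settings reported later in the experiments; the subword embedding size and kernel sizes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubwordEncoder(nn.Module):
    # K parallel convolutions over BPE-subword embeddings, max pooling, and a
    # highway gate (Eq. 6). K = 2 and d_s = 100 follow the reported settings;
    # the subword embedding size and kernel sizes are illustrative assumptions.
    def __init__(self, n_subwords, sub_dim=50, d_s=100, kernel_sizes=(2, 3)):
        super().__init__()
        assert d_s % len(kernel_sizes) == 0
        self.embed = nn.Embedding(n_subwords, sub_dim, padding_idx=0)
        out_ch = d_s // len(kernel_sizes)
        self.convs = nn.ModuleList(
            [nn.Conv1d(sub_dim, out_ch, k, padding=k - 1) for k in kernel_sizes])
        self.gate = nn.Linear(d_s, d_s)       # W_g, b_g
        self.transform = nn.Linear(d_s, d_s)  # W_h, b_h

    def forward(self, subword_ids):
        # subword_ids: (batch, n) BPE piece ids of one word (0 = padding)
        x = self.embed(subword_ids).transpose(1, 2)       # (batch, sub_dim, n)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        u = torch.cat(pooled, dim=1)                      # (batch, d_s)
        g = torch.sigmoid(self.gate(u))                   # highway gate
        return g * F.relu(self.transform(u)) + (1 - g) * u    # e_i^s
```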
ELMo (Embeddings from Language Models) BIBREF26 is a pretrained contextualized word embedding that involves character-level representation; it has been shown useful in several works BIBREF27 BIBREF28. The embedding is trained by bidirectional language models on a large corpus, using the character sequence of each word token as input. The ELMo encoder employs CNN and highway networks over characters, whose output is given to a multiple-layer biLSTM with residual connections; the output is then a contextualized embedding for each word. It can also be seen as a hybrid encoder over characters, words and sentences, which adds contextual information to each word and eases the semantic learning of the model. For the pretrained ELMo encoder, the output is the result of the last two biLSTM layers. Suppose $\mathbf{c}_i$ is the character sequence of the $i$-th word in a sentence; then the encoder output is
$$[\cdots, \mathbf{h}_i^0, \cdots \; ; \; \cdots, \mathbf{h}_i^1, \cdots] = \mathrm{ELMo}(\cdots, \mathbf{c}_i, \cdots),$$
where $\mathbf{h}_i^0$ and $\mathbf{h}_i^1$ denote the outputs of the first and second layers of the ELMo encoder for the $i$-th word. Following Peters2018ELMo, we use a self-adjusted weighted average of $\mathbf{h}_i^0$ and $\mathbf{h}_i^1$:
$$\mathbf{s} = \mathrm{softmax}(\mathbf{w}) \in \mathbb{R}^2,$$
$$\mathbf{h} = \gamma \sum_{j=0,1} s_j \mathbf{h}_i^j \in \mathbb{R}^{d_c'},$$
where $\gamma \in \mathbb{R}$ and $\mathbf{w} \in \mathbb{R}^2$ are parameters tuned during training and $d_c'$ is the dimension of the ELMo encoder's outputs. The result is then fed to a feed-forward network to reduce its dimension:
$$\mathbf{e}_i^c = \mathbf{W}_c \mathbf{h}^T + \mathbf{b}_c \in \mathbb{R}^{d_c}, \quad \text{(Eq. 7)}$$
where $\mathbf{W}_c \in \mathbb{R}^{d_c' \times d_c}$ and $\mathbf{b}_c \in \mathbb{R}^{d_c}$ are parameters. The above Eq. 7 gives the ELMo embedding for the $i$-th word.
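A minimal sketch of this weighted-average-and-projection step (Eq. 7) is shown below. It assumes the standard 1024-dimensional ELMo layer outputs for $d_c'$ and the $d_c = 300$ projection size used in the experiments; it is an illustration, not necessarily the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ElmoScalarMix(nn.Module):
    # Softmax-normalised weighted average of the two ELMo biLSTM layers,
    # scaled by a learned gamma, then projected from d_c' down to d_c.
    def __init__(self, elmo_dim=1024, d_c=300):   # elmo_dim is an assumption
        super().__init__()
        self.w = nn.Parameter(torch.zeros(2))      # mixing weights w
        self.gamma = nn.Parameter(torch.ones(1))   # global scale gamma
        self.proj = nn.Linear(elmo_dim, d_c)       # W_c, b_c

    def forward(self, h0, h1):
        # h0, h1: (batch, seq_len, elmo_dim) outputs of the two ELMo layers
        s = torch.softmax(self.w, dim=0)
        h = self.gamma * (s[0] * h0 + s[1] * h1)
        return self.proj(h)                        # e^c: (batch, seq_len, d_c)
```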
The resulting word embeddings $\mathbf{e}_i$ (Eq. 4) are sent to the sentence-level module. The sentence-level module is composed of stacked encoder blocks: the block in each layer receives the output of the previous layer as input and sends its output to the next layer, and it also sends its output to the pair-level module. Parameters in different layers are not shared. We consider two encoder types, a convolutional type and a recurrent type, and only one encoder type is used in each experiment. For the sentence-level encoding of the two arguments Arg1 and Arg2, many previous works used the same parameters, that is, one encoder for both argument types. But as indicated by prasad2008penn, Arg1 and Arg2 may have different semantic perspectives, so we introduce argument-aware parameter settings for the two arguments. Figure 3 shows the convolutional encoder block. Suppose the input of the encoder block is $\mathbf{x}_i$, $i = 1, \ldots, N$, with $\mathbf{x}_i \in \mathbb{R}^{d_e}$. The input is sent to a convolutional layer and mapped to an output $\mathbf{y}_i = [\mathbf{A}_i; \mathbf{B}_i] \in \mathbb{R}^{2 d_e}$. After the convolution, a gated linear unit (GLU) BIBREF29 is applied, i.e.,
$$\mathbf{z}_i = \mathbf{A}_i \odot \sigma(\mathbf{B}_i) \in \mathbb{R}^{d_e}.$$
There is also a residual connection (Res 1) in the block, which adds the output of the GLU and the input of the block, so $\mathbf{z}_i + \mathbf{x}_i$ is the output of the block corresponding to the input $\mathbf{x}_i$. The outputs $\mathbf{z}_i + \mathbf{x}_i$ for all $i = 1, \ldots, N$ are sent to both the next layer and the pair-level module. The recurrent encoder block, also shown in Figure 3, is similar to the convolutional one. The input $\mathbf{x}_i$ is first encoded by a biGRU BIBREF30 layer,
$$\mathbf{y}_i = \mathrm{biGRU}(\mathbf{x}_i) \in \mathbb{R}^{2 d_e},$$
and then sent to a feed-forward network,
$$\mathbf{z}_i = \mathbf{W}_r \mathbf{y}_i^T + \mathbf{b}_r \in \mathbb{R}^{d_e}, \quad \text{(Eq. 10)}$$
where $\mathbf{W}_r \in \mathbb{R}^{2 d_e \times d_e}$ and $\mathbf{b}_r \in \mathbb{R}^{d_e}$ are parameters. A similar residual connection (Res 1) is used in this block, so $\mathbf{z}_i + \mathbf{x}_i$ for all $i = 1, \ldots, N$ is the final output of the recurrent encoder block.
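To make the convolutional block concrete, here is a small PyTorch sketch of one encoder block with the GLU gate and the Res 1 residual connection. $d_e = 700$ matches the embedding size reported later, while the kernel size shown is only an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoderBlock(nn.Module):
    # One convolutional encoder block: a 1-D convolution to 2*d_e channels,
    # a GLU gate (z_i = A_i * sigmoid(B_i)), and the Res 1 residual connection.
    # d_e = 700 follows the reported embedding size; kernel size is illustrative.
    def __init__(self, d_e=700, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(d_e, 2 * d_e, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (batch, N, d_e) word representations of one argument
        y = self.conv(x.transpose(1, 2))   # (batch, 2*d_e, N) -> [A; B]
        z = F.glu(y, dim=1)                # (batch, d_e, N)
        return z.transpose(1, 2) + x       # residual output sent onward
```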
Through the sentence-level module, the word representations are contextualized, and the contextualized representations of each layer are sent to the pair-level module. Suppose the number of encoder block layers is $l$ and the outputs of the $j$-th block layer for Arg1 and Arg2 are $\mathbf{v}_1^j, \mathbf{v}_2^j \in \mathbb{R}^{N \times d_e}$, each row of which is the embedding of the corresponding word; $N$ is the length of the word sequence (sentence), and each sentence is padded or truncated so that all sentences have the same length. They are sent to a bi-attention module, whose attention matrix is
$$\mathbf{M}^j = \mathrm{FFN}(\mathbf{v}_1^j)\, (\mathbf{v}_2^j)^T \in \mathbb{R}^{N \times N},$$
where $\mathrm{FFN}$ is a feed-forward network similar to Eq. 10, applied to the last dimension, corresponding to the word. The projected representations are then
$$\mathbf{w}_2^j = \mathrm{softmax}(\mathbf{M}^j)\, \mathbf{v}_2^j \in \mathbb{R}^{N \times d_e},$$
$$\mathbf{w}_1^j = \mathrm{softmax}((\mathbf{M}^j)^T)\, \mathbf{v}_1^j \in \mathbb{R}^{N \times d_e},$$
where the softmax is applied to each row of the matrix. We apply 2-max pooling to each projected representation and concatenate the results as the output of the $j$-th bi-attention module,
$$\mathbf{o}^j = [\mathrm{top2}(\mathbf{w}_1^j); \mathrm{top2}(\mathbf{w}_2^j)] \in \mathbb{R}^{4 d_e}.$$
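The bi-attention module above can be sketched in a few lines of PyTorch. The single linear layer standing in for the FFN and the batch-first tensor layout are assumptions of this illustration.

```python
import torch
import torch.nn as nn

class BiAttention(nn.Module):
    # Bi-attention: M = FFN(v1) v2^T, row-wise softmax in both directions,
    # then 2-max pooling and concatenation into a 4*d_e pair feature.
    def __init__(self, d_e=700):
        super().__init__()
        self.ffn = nn.Linear(d_e, d_e)   # FFN applied to the last dimension

    def forward(self, v1, v2):
        # v1, v2: (batch, N, d_e) layer outputs for Arg1 and Arg2
        m = torch.bmm(self.ffn(v1), v2.transpose(1, 2))           # (batch, N, N)
        w2 = torch.bmm(torch.softmax(m, dim=-1), v2)              # (batch, N, d_e)
        w1 = torch.bmm(torch.softmax(m.transpose(1, 2), dim=-1), v1)
        # 2-max pooling over the token dimension, then concatenation.
        top2 = lambda w: w.topk(2, dim=1).values.flatten(1)       # (batch, 2*d_e)
        return torch.cat([top2(w1), top2(w2)], dim=1)             # (batch, 4*d_e)
```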
The number of max pooling operation top2 is selected from experiments and it is a balance of more salient features and less noise The final pair representation is mathbf o mathbf o1 mathbf o2 cdots mathbf ol in mathbb R4 l de Eq 12 Since the output is concatenated from different layers and the outputs of lower layers are sent directly to the final representation this also can be seen as residual connections Res 2 Then the output as Eq 12 is fed to an MLP classifier with softmax The parameters for biattention modules in different levels are shared We use two classifiers in our model One is for relation classification and another one is for connective classification The classifier is only a multiple layer perceptron MLP with softmax layer qinEtAl2017Long used adversarial method to utilize the connectives but this method is not suitable for our adopted attention module since the attended part of a sentence will be distinctly different when the argument is with and without connectives They also proposed a multitask method that augments the model with an additional classifier for connective prediction and the input of it is also the pair representation It is straightforward and simple enough and can help the model learn better representations so we include this module in our model The implicit connectives are provided by PDTB 20 dataset and the connective classifier is only used during training The loss function for both classifiers is cross entropy loss and the total loss is the sum of the two losses ie Loss Lossrelation Lossconnective Our model is evaluated on the benchmark PDTB 20 for two types of classification tasks PDTB 20 has three levels of senses Level1 Class Level2 Type and Level3 Subtypes The first level consists of four major relation Classes COMPARISON CONTINGENCY EXPANSION and TEMPORAL The second level contains 16 Types All our experiments are implemented by PyTorch The pretrained ELMo encoder is from AllenNLP toolkit BIBREF31 Following the settings of qinEtAl2017Long we use two splitting methods of PDTB dataset for comprehensive comparison The first is PDTBLin BIBREF3 which uses section 221 22 and 23 as training dev and test sets respectively The second is PDTBJi BIBREF8 which uses section 220 01 and 2122 as training dev and test sets respectively According to TACL536 five relation types have few training instances and no dev and test instance Removing the five types there remain 11 second level types During training instances with more than one annotated relation types are considered as multiple instances each of which has one of the annotations At test time a prediction that matches one of the gold types is considered as correct All sentences in the dataset are padded or truncated to keep the same 100word length For the results of both splitting methods we share some hyperparameters Table 1 is some of the shared hyperparameter settings The pretrained word embeddings are 300dim word2vec BIBREF32 pretrained from Google News So dw 300 ds 100 dc 300 then for the final embedding mathbf ei de 700 For the encoder block in sentencelevel module kernel size is same for every layer We use AdaGrad optimization BIBREF33 The encoder block layer number is different for the two splitting methods The layer number for PDTBJi splitting method is 4 and the layer number for PDTBLin splitting method is 5 Compared to other recent stateoftheart systems in Table 2 our model achieves new stateoftheart performance in two splitting methods with great improvements As to our best knowledge our model is the 
first one that exceeds the 48 accuracy in 11way classification Ablation Study To illustrate the effectiveness of our model and the contribution of each module we use the PTDBJi splitting method to do a group of experiments For the baseline model we use 4 layer stacked convolutional encoder blocks without the residual connection in the block with only pretrained word embeddings We only use the output of the last layer and the output is processed by 2max pooling without attention and sent to the relation classifier and connective classifier Without the two residual connections using 4 layers may be not the best for baseline model but is more convenient to comparison Firstly we add modules from high level to low level accumulatively to observe the performance improvement Table 3 is the results which demonstrate that every module has considerable effect on the performance Then we test the effects of the two residual connections on the performance The results are in Table 3 The baseline means baseline biattention ie the second row of Table 3 We find that Res 1 residual connection in the block is much more useful than Res 2 residual connection for pair representation and they work together can bring even better performance Without ELMo the same setting as 4th row in Table 3 our data settings is the same as qinEtAl2017Long whose performance was stateoftheart and will be compared directly We see that even without the pretrained ELMo encoder our performance is better which is mostly attributed to our better sentence pair representations SubwordLevel Embedding For the usefulness of subwordlevel embedding we compare its performance to a model with characterlevel embedding which was ever used in qinzhangzhao2016COLING We use the same model setting as the 4th row of Table 3 and then replace subword with character sequence The subword embedding augmented result is 4703 while the character embedding result is 4637 which verifies that the former is a better input representation for the task Parameters for SentenceLevel Module As previously discussed argument specific parameter settings may result in better sentencelevel encoders We use the model which is the same as the third row in Table 3 If shared parameters are used the result is 4597 which is lower than argument specific parameter settings 4629 The comparison shows argument specific parameter settings indeed capture the difference of argument representations and facilitate the sentence pair representation Encoder Block Type and Layer Number In section 33 we consider two encoder types here we compare their effects on the model performance Like the previous part The model setting is also the same as the third row in Table 3 except for the block type and layer number The results are shown in Figure 4 The results in the figure show that both types may reach similar level of top accuracies as the order of word is not important to the task We also try to add position information to the convolutional type encoder and receive a dropped accuracy This further verifies the order information does not matter too much for the task For most of the other numbers of layers the recurrent type shows better as the number of layers has an impact on the window size of convolutional encoders When convolutional type is used the training procedure is much faster but choosing the suitable kernel size needs extra efforts BiAttention We visualize the attention weight of one instance in Figure 5 For lower layers the attended part is more concentrated For higher layers the 
weights are more average and the attended part moves to the sentence border This is because the window size is bigger for higher layers and the convolutional kernel may have higher weights on words at the window edge Settings For the first level classification we perform both 4way classification and onevsothers binary classification Following the settings of previous works the dataset splitting method is the same as PDTBJi without removing instances The model uses 5 block layers with kernel size 3 other details are the same as that for 11way classification on PDTBJi Results Table 4 is the result comparison on first level classification For binary classification the result is computed by F1 score and for 4way classification the result is computed by macro average F1 score Our model gives the stateoftheart performance for 4way classification by providing an F1 score greater than 50 for the first time according to our best knowledge In this paper we propose a deeper neural model augmented by different grained text representations for implicit discourse relation recognition These different module levels work together and produce taskrelated representations of the sentence pair Our experiments show that the model is effective and achieve the stateoftheart performance As to our best knowledge this is the first time that an implicit discourse relation classifier gives an accuracy higher than 48 for 11way and an F1 score higher than 50 for 4way classification tasks </s>
<s>[INST] Detecting Potential Topics In News Using BERT, CRF and Wikipedia [/INST] NamedEntityRecognitionNER approaches can be categorised broadly in three types Detecting NER with predefined dictionaries and rulesBIBREF2 with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4 Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5 Authors have used approaches of Gibbs sampling and conditional random field CRF for nonlocal information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6 Deep learning approaches in NLP use document word or token representations instead of onehot encoded vectors With the rise of transfer learning pretrained Word2VecBIBREF7 GloVeBIBREF8 fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks RNN to detect NERs Using LSTM layers followed by CRF layes with pretrained wordembeddings as input has been explored hereBIBREF10 Also CNNs with character embeddings as inputs followed by bidirectional LSTM and CRF layers were explored hereBIBREF11 With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years Approach of using these pretrained models like ElmoBIBREF13 FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show stateoftheart performance There are very few approaches where caseless NER task is explored In this recent paperBIBREF16 authors have explored effects of Cased entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data producing a single model with high performance on both cased and uncased text In another paperBIBREF17 authors have proposed TrueCase pretraining before using BiLSTMCRF approach to detect NERs effectively Though it shows good results over previous approaches it is not useful in Indian Languages context as there is no concept of cases In our approach we are focusing more on data preparation for our definition of topics using some of the stateofart architectures based on BERT LSTMGRU and CRF layers as they have been explored in previous approaches mentioned above Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score And comparisons have been made with available and readytouse opensource libraries from the productionization perspective We need good amount of data to try deep learning stateoftheart algorithms There are lot of open datasets available for names locations organisations but not for topics as defined in Abstract above Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition But according to our definition we can use wikipedia titles as our target topics English wikipedia dataset has more than 18 million titles if we consider all versions of them till now We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily To remove such titles we deployed simple rules as follows Remove titles with common words are the which Remove titles with numeric values 29 101 Remove titles with technical components driver names transistor names X00 lga775 Remove 1gram titles except locations almost 80 of these also appear in remaining ngram titles After doing some more cleaning we were left with 10 million 
titles. We have a dump of 15 million English news articles published in the past 4 years. We further reduced the number of articles by removing duplicate and near-duplicate articles, using our pretrained doc2vec models and cosine similarity to detect nearly identical news articles. We then selected the minimum set of articles required to cover all possible 2-grams to 5-grams. This step was done to save training time without losing accuracy. Do note that in the future we plan to use the whole dataset and hope to see further gains in F1 and recall. As per manual inspection, our dataset contains enough variation of sentences with a rich vocabulary, covering names of celebrities, politicians, local authorities, national/local organisations, and almost all Indian and international locations mentioned in news text over the last 4 years. We then created a parallel corpus in the format shown in Table 1. Using the pretrained BertTokenizer from huggingface, we converted the words in each sentence into tokens; the caseless pretrained BERT tokenizer is used. Notice that some of the topic words are broken into sub-word tokens and the NER tag is repeated accordingly. For example, in the second row of Table 1 the word "harassment" is broken into "har", "ass", "ment", and the corresponding NER tag is repeated three times to keep the lengths of the sequence pair the same. Finally, for around 3 million news articles, a parallel corpus is created of around 150 million sentences, with around 3 billion words (all lower-cased) and approximately 5 billion tokens.

We tried multiple variations of LSTM and GRU layers, with and without a CRF layer. There is a marginal gain in using GRU layers over LSTM. We also saw a gain in using just one GRU layer instead of more. Finally, we settled on the architecture shown in Figure 1 for the final training, based on validation-set scores with a sample training set. Text had to be tokenized using pytorch-pretrained-bert, as explained above, before being passed to the network. The architecture is built using tensorflow/keras. Coding inspiration was taken from BERT-keras and, for the CRF layer, keras-contrib. If one is more comfortable in pytorch, there are many examples available on github, but pytorch-bert-crf-ner is a good easy start.

We used the BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. One can take BERT-base or BERT-large for better performance with an English-only dataset, or use DistilBERT for English and DistilmBERT for 104 languages for faster pretraining and inference. We did not choose an AutoML approach for hyperparameter tuning, which could have produced more accurate results but could also have taken a very long time; instead, we chose and tweaked the parameters based on initial results.

We trained two models: one with sequence length 512, to capture document-level important n-grams, and a second with sequence length 64, to capture sentence/paragraph-level important n-grams. Through experiments it was evident that sequence length plays a vital role in deciding context and locally/globally important n-grams. The final output is a concatenation of both model outputs.

We trained the topic model on a single 32 GB NVidia V100; it took around 50 hours to train the model with sequence length 512. We had to take a 256 GB RAM machine to accommodate all data in memory for faster read/write. The model with sequence length 64 trained in around 17 hours. It is very important to note that the sequence length decides how many BERT tokens you can pass at inference, and it also affects training time and accuracy. Ideally, a longer sequence length is better, because inference is faster as well: with a sequence length of 64 we move a 64-token window over the whole tokenized text and recognise topics in each window. So one should choose the sequence length according to their use case. We have also explained before our motivation for choosing two models with separate sequence lengths. We stopped the training for both models when they crossed 70% precision and 90% recall on the training and test sets, as we were only looking for maximum recall and were not concerned about precision in our case. Both models reached this point at around 16 epochs.

Comparison with existing open-source NER libraries is not exactly fair, as they are NOT trained for detecting topics and important n-grams and are also NOT trained on caseless text. But they are useful for testing and benchmarking whether our model detects traditional NERs, which it should capture, as Wikipedia titles contain almost all names of people, places and organisations. You can check the sample output here. Comparisons have been made among Flair-NER, Stanford-caseless-NER (we used englishconll4classcaseless, as it performed better than the 3-class and 7-class models), Spacy-NER and our models; of these, only Stanford-NER provides caseless models. In Table 2, scores are calculated by taking the traditional NER list as reference; in Table 3, the same is done with the Wikipedia-titles reference set. As you can see in Tables 2 and 3, recall is great for our model, but precision is not as good, since the model also tries to detect new potential topics which are not present even in the reference Wikipedia-titles and NER sets. In capturing Wikipedia topics, our model clearly surpasses the other models in all scores. Spacy results are good despite it not being trained on caseless data; in terms of F1 and overall stability, Spacy did better than Stanford NER on our news validation set. Similarly, Stanford did well in precision but could not catch up with Spacy and our model in terms of recall. Flair performed poorly overall but, as said before, these open-source models are not trained for our particular use case.

Let us check some examples for a detailed analysis of the models and their results. The following is an economy-related news item. Example 1: "around 115 trillion or around two percent of global gdp are lost to corruption every year president of the natural resource governance institute nrgi has said speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday daniel kaufmann president of nrgi presented the statistic result of a study by the nrgi an independent nonprofit organisation based in new york however according to kaufmann the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity xinhua news agency reported a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run the study showed it will also see similar gains in reducing infant mortality and improving education said kaufmann"

The NERs detected by each model can be seen in Table 4. Our model does not capture numbers, as we removed all numbers from our wiki-titles used as topics; the reason is that we can easily write regexes to detect currency, prices, times and dates, and deep learning is not required for that. The following are a few important n-grams only our model was able to capture: "capita income", "infant mortality", "international monetary fund annual meeting", "natural resource governance institute", "public governance". At the same time, we can see that Spacy did much better than Stanford-caseless NER, and Flair could not capture any of the NERs.

Another example is a news item in the political domain; the NERs detected by each model can be seen in Table 5. Example 2: "wearing the aam aadmi partys trademark cap and with copies of the partys fiveyear report card in hand sunita kejriwal appears completely at ease its a cold winter afternoon in delhi as the former indian revenue service irs officer hits the campaign trail to support her husband and batchmate chief minister arvind kejriwal emerging from the background for the first time she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm then a political novice had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013"

Correct n-grams captured only by our model are: "aam aadmi party", "aap bandwagon", "delhi assembly constituency", "giant killer", "indian revenue service", "political novice". In this example the Stanford model did better and captured names properly, for example "sheila dikshit", which Spacy could not detect; Spacy, however, captured almost all numeric values along with numbers expressed in words.

It is important to note that our model captures NERs with some additional words around them; for example, "president of nrgi" is detected by the model rather than just "nrgi". The model output does, however, convey more information than the latter. To capture this for all models and to make the comparison fair, partial matching has been enabled: if the correct NER is part of the predicted NER, the latter is marked as matched. This could be the reason for the good scores for Spacy. Note that partial matching is disabled for the Wikipedia-titles match task shown in Table 3, where our model outperformed all the other models.

Through this exercise we were able to identify the most suitable model architecture and data-preparation steps, so that similar models can be trained for Indian languages. Building cased or caseless NERs for English was not the final goal, and this has already been benchmarked and explored in the previous approaches explained in the Related Work section. We did not use traditional datasets for model performance comparisons and benchmarks; as mentioned before, all comparisons are done with open-source models and libraries from the productionization point of view. We used an English-news validation dataset which is important and relevant to our specific task, and all validation datasets and raw output results can be found at our github link.

Wikipedia titles for Indian languages are very scarce, and the resulting tagged data is even smaller, too small for training deep architectures. We are trying out translations/transliterations of the English Wikipedia titles to improve the entity/topic data for Indic languages. This approach is also useful for building news-summarization models, as it detects almost all important n-grams present in the news; the output of this model can be fed into a summarization network to add more bias towards important words and their inclusion. </s>
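To make the token-and-tag preparation step described above concrete, the following is a minimal Python sketch, not the authors' released code, of how word-level topic/NER tags can be repeated over BERT word-pieces so that the token sequence and the tag sequence stay the same length. The checkpoint name and the B-/I- style tag labels are illustrative assumptions; the article only states that a caseless multilingual BERT tokenizer from huggingface is used and that a tag is duplicated for every sub-word piece.

# Sketch of the word-piece / tag alignment step (illustrative, not the paper's code).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")  # assumed checkpoint

def align_tokens_and_tags(words, tags):
    """Expand word-level tags to word-piece level, repeating each tag once per piece."""
    pieces, piece_tags = [], []
    for word, tag in zip(words, tags):
        word_pieces = tokenizer.tokenize(word) or [tokenizer.unk_token]
        pieces.extend(word_pieces)
        piece_tags.extend([tag] * len(word_pieces))  # one copy of the tag per sub-word piece
    return pieces, piece_tags

# Example: a lower-cased sentence with one tagged topic word (tag names are hypothetical).
words = ["workplace", "harassment", "policy", "announced"]
tags  = ["O", "B-TOPIC", "O", "O"]
print(align_tokens_and_tags(words, tags))

This is the alignment that lets a single word such as "harassment", once split into several pieces, contribute several training tokens that all carry the same topic label.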
<s>[INST] Gender Bias in Coreference Resolution [/INST] There is a classic riddle A man and his son get into a terrible car crash The father dies and the boy is badly injured In the hospital the surgeon looks at the patient and exclaims I cant operate on this boy hes my son How can this be That a majority of people are reportedly unable to solve this riddle is taken as evidence of underlying implicit gender bias BIBREF0 many firsttime listeners have difficulty assigning both the role of mother and surgeon to the same entity As the riddle reveals the task of coreference resolution in English is tightly bound with questions of gender for humans and automated systems alike see Figure 1 As awareness grows of the ways in which datadriven AI technologies may acquire and amplify humanlike biases BIBREF1 BIBREF2 BIBREF3 this work investigates how gender biases manifest in coreference resolution systems There are many ways one could approach this question here we focus on gender bias with respect to occupations for which we have corresponding US employment statistics Our approach is to construct a challenge dataset in the style of Winograd schemas wherein a pronoun must be resolved to one of two previouslymentioned entities in a sentence designed to be easy for humans to interpret but challenging for datadriven systems BIBREF4 In our setting one of these mentions is a person referred to by their occupation by varying only the pronouns gender we are able to test the impact of gender on resolution With these Winogender schemas we demonstrate the presence of systematic gender bias in multiple publiclyavailable coreference resolution systems and that occupationspecific bias is correlated with employment statistics We release these test sentences to the public In our experiments we represent gender as a categorical variable with either two or three possible values female male and in some cases neutral These choices reflect limitations of the textual and realworld datasets we use In this work we evaluate three publiclyavailable offtheshelf coreference resolution systems representing three different machine learning paradigms rulebased systems featuredriven statistical systems and neural systems Our intent is to reveal cases where coreference systems may be more or less likely to recognize a pronoun as coreferent with a particular occupation based on pronoun gender as observed in Figure 1 To this end we create a specialized evaluation set consisting of 120 handwritten sentence templates in the style of the Winograd Schemas BIBREF4 Each sentence contains three referring expressions of interest We use a list of 60 oneword occupations obtained from Caliskan183 see supplement with corresponding gender percentages available from the US Bureau of Labor Statistics For each occupation we wrote two similar sentence templates one in which pronoun is coreferent with occupation and one in which it is coreferent with participant see Figure 2 For each sentence template there are three pronoun instantiations female male or neutral and two participant instantiations a specific participant eg the passenger and a generic paricipant someone With the templates fully instantiated the evaluation set contains 720 sentences 60 occupations times 2 sentence templates per occupation times 2 participants times 3 pronoun genders We evaluate examples of each of the three coreference system architectures described in Coreference Systems the BIBREF5 sieve system from the rulebased paradigm referred to as RULE BIBREF6 from the 
statistical paradigm STAT and the BIBREF11 deep reinforcement system from the neural paradigm NEURAL By multiple measures the Winogender schemas reveal varying degrees of gender bias in all three systems First we observe that these systems do not behave in a genderneutral fashion That is to say we have designed test sentences where correct pronoun resolution is not a function of gender as validated by human annotators but system predictions do exhibit sensitivity to pronoun gender 68 of malefemale minimal pair test sentences are resolved differently by the RULE system 28 for STAT and 13 for NEURAL Overall male pronouns are also more likely to be resolved as occupation than female or neutral pronouns across all systems for RULE 72 male vs 29 female and 1 neutral for STAT 71 male vs 63 female and 50 neutral and for NEURAL 87 male vs 80 female and 36 neutral Neutral pronouns are often resolved as neither occupation nor participant possibly due to the number ambiguity of theytheirthem When these systems predictions diverge based on pronoun gender they do so in ways that reinforce and magnify realworld occupational gender disparities Figure 4 shows that systems gender preferences for occupations correlate with realworld employment statistics US Bureau of Labor Statistics and the gender statistics from text BIBREF14 which these systems access directly correlation values are in Table 1 We also identify socalled gotcha sentences in which pronoun gender does not match the occupations majority gender BLS if occupation is the correct answer all systems perform worse on these gotchas See Table 2 Because coreference systems need to make discrete choices about which mentions are coreferent percentagewise differences in realworld statistics may translate into absolute differences in system predictions For example the occupation manager is 385 female in the US according to realworld statistics BLS mentions of manager in text are only 518 female BL resource and finally as viewed through the behavior of the three coreference systems we tested no managers are predicted to be female This illustrates two related phenomena first that datadriven NLP pipelines are susceptible to sequential amplification of bias throughout a pipeline and second that although the gender statistics from BL correlate with BLS employment statistics they are systematically maleskewed Figure 3 Here we give a brief and nonexhaustive overview of prior work on gender bias in NLP systems and datasets A number of papers explore gender bias in English word embeddings how they capture implicit human biases in modern BIBREF1 and historical BIBREF15 text and methods for debiasing them BIBREF16 Further work on debiasing models with adversarial learning is explored by DBLPjournalscorrBeutelCZC17 and zhang2018mitigating Prior work also analyzes social and gender stereotyping in existing NLP and vision datasets BIBREF17 BIBREF18 tatman2017EthNLP investigates the impact of gender and dialect on deployed speech recognition systems while zhaoEtAl2017EMNLP20173 introduce a method to reduce amplification effects on models trained with genderbiased datasets koolenvancranenburgh2017EthNLP examine the relationship between author gender and text attributes noting the potential for researcher interpretation bias in such studies Both larson2017EthNLP and koolenvancranenburgh2017EthNLP offer guidelines to NLP researchers and computational social scientists who wish to predict gender as a variable hovyspruit2016P162 introduce a helpful set of terminology for 
identifying and categorizing types of bias that manifest in AI systems including overgeneralization which we observe in our work here Finally we note independent but closely related work by zhaowang2018N181 published concurrently with this paper In their work zhaowang2018N181 also propose a Winograd schemalike test for gender bias in coreference resolution systems called WinoBias Though similar in appearance these two efforts have notable differences in substance and emphasis The contribution of this work is focused primarily on schema construction and validation with extensive analysis of observed system bias revealing its correlation with biases present in realworld and textual statistics by contrast zhaowang2018N181 present methods of debiasing existing systems showing that simple approaches such as augmenting training data with genderswapped examples or directly editing noun phrase counts in the BL resource are effective at reducing system bias as measured by the schemas Complementary differences exist between the two schema formulations Winogender schemas this work include genderneutral pronouns are syntactically diverse and are humanvalidated WinoBias includes and delineates sentences resolvable from syntax alone a Winogender schema has one occupational mention and one other participant mention WinoBias has two occupational mentions Due to these differences we encourage future evaluations to make use of both datasets We have introduced Winogender schemas a pronoun resolution task in the style of Winograd schemas that enables us to uncover gender bias in coreference resolution systems We evaluate three publiclyavailable offtheshelf systems and find systematic gender bias in each for many occupations systems strongly prefer to resolve pronouns of one gender over another We demonstrate that this preferential behavior correlates both with realworld employment statistics and the text statistics that these systems use We posit that these systems overgeneralize the attribute of gender leading them to make errors that humans do not make on this evaluation We hope that by drawing attention to this issue future systems will be designed in ways that mitigate genderbased overgeneralization It is important to underscore the limitations of Winogender schemas As a diagnostic test of gender bias we view the schemas as having high positive predictive value and low negative predictive value that is they may demonstrate the presence of gender bias in a system but not prove its absence Here we have focused on examples of occupational gender bias but Winogender schemas may be extended broadly to probe for other manifestations of gender bias Though we have used humanvalidated schemas to demonstrate that existing NLP systems are comparatively more prone to genderbased overgeneralization we do not presume that matching human judgment is the ultimate objective of this line of research Rather human judgements which carry their own implicit biases serve as a lower bound for equitability in automated systems The authors thank Rebecca Knowles and Chandler May for their valuable feedback on this work This research was supported by the JHU HLTCOE DARPA AIDA and NSFGRFP 1232825 The US Government is authorized to reproduce and distribute reprints for Governmental purposes The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the US Government </s>
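As a concrete illustration of the schema instantiation described in the preceding paper, below is a small Python sketch, not the released Winogender code, of how the 720 evaluation sentences can be produced from templates: 60 occupations, 2 templates per occupation, 2 participants, and 3 pronoun genders. The example template, the shortened occupation list, and the pronoun slot names are illustrative assumptions.

# Sketch of Winogender-style template instantiation (illustrative values throughout).
from itertools import product

OCCUPATIONS = ["surgeon", "technician", "manager"]                 # the paper uses 60 one-word occupations
PARTICIPANTS = {"specific": "the patient", "generic": "someone"}   # specific vs. generic participant
PRONOUNS = {  # nominative / accusative / possessive forms per gender
    "female":  {"NOM": "she",  "ACC": "her",  "POSS": "her"},
    "male":    {"NOM": "he",   "ACC": "him",  "POSS": "his"},
    "neutral": {"NOM": "they", "ACC": "them", "POSS": "their"},
}

# One hypothetical hand-written template in which the pronoun is coreferent with the occupation.
TEMPLATE = "The $OCCUPATION told $PARTICIPANT that $NOM would be on time."

def instantiate(template, occupation, participant, forms):
    s = template.replace("$OCCUPATION", occupation).replace("$PARTICIPANT", participant)
    for slot, word in forms.items():
        s = s.replace("$" + slot, word)
    return s

sentences = [
    instantiate(TEMPLATE, occ, part, PRONOUNS[gender])
    for occ, part, gender in product(OCCUPATIONS, PARTICIPANTS.values(), PRONOUNS)
]
for s in sentences[:3]:
    print(s)

With the full occupation list and both templates per occupation, the same loop yields the 720 sentences reported above.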
<s>[INST] How Far are We from Effective Context Modeling ? An Exploratory Study on Semantic Parsing in Context [/INST] Semantic parsing which translates a natural language sentence into its corresponding executable logic form eg Structured Query Language SQL relieves users from the burden of learning techniques behind the logic form The majority of previous studies on semantic parsing assume that queries are contextindependent and analyze them in isolation However in reality users prefer to interact with systems in a dialogue where users are allowed to ask contextdependent incomplete questions BIBREF0 That arises the task of Semantic Parsing in Context SPC which is quite challenging as there are complex contextual phenomena In general there are two sorts of contextual phenomena in dialogues Coreference and Ellipsis BIBREF1 Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2 After the question What is id of the car with the max horsepower the user poses an elliptical question How about with the max mpg and a question containing pronouns Show its Make Only when completely understanding the context could a parser successfully parse the incomplete questions into their corresponding SQL queries A number of context modeling methods have been suggested in the literature to address SPC BIBREF3 BIBREF4 BIBREF2 BIBREF5 BIBREF6 These methods proposed to leverage two categories of context recent questions and precedent logic form It is natural to leverage recent questions as context Taking the example from Figure FIGREF1 when parsing Q3 we also need to take Q1 and Q2 as input We can either simply concatenate the input questions or use a model to encode them hierarchically BIBREF4 As for the second category instead of taking a bag of recent questions as input it only considers the precedent logic form For instance when parsing Q3 we only need to take S2 as context With such a context the decoder can attend over it or reuse it via a copy mechanism BIBREF4 BIBREF5 Intuitively methods that fall into this category enjoy better generalizability as they only rely on the last logic form as context no matter at which turn Notably these two categories of context can be used simultaneously However it remains unclear how far we are from effective context modeling First there is a lack of thorough comparisons of typical context modeling methods on complex SPC eg crossdomain Second none of previous works verified their proposed context modeling methods with the grammarbased decoding technique which has been developed for years and proven to be highly effective in semantic parsing BIBREF7 BIBREF8 BIBREF9 To obtain better performance it is worthwhile to study how context modeling methods collaborate with the grammarbased decoding Last but not the least there is limited understanding of how context modeling methods perform on various contextual phenomena An indepth analysis can shed light on potential research directions In this paper we try to fulfill the above insufficiency via an exploratory study on realworld semantic parsing in context Concretely we present a grammarbased decoding semantic parser and adapt typical context modeling methods on top of it Through experiments on two large complex crossdomain datasets SParC BIBREF2 and CoSQL BIBREF6 we carefully compare and analyze the performance of different context modeling methods Our best model achieves stateoftheart SOTA performances on both datasets with significant improvements Furthermore we summarize and generalize the most frequent contextual 
phenomena with a finegrained analysis on representative models Through the analysis we obtain some interesting findings which may benefit the community on the potential research directions We will opensource our code and materials to facilitate future work upon acceptance In the task of semantic parsing in context we are given a dataset composed of dialogues Denoting langle mathbf x1mathbf xnrangle a sequence of natural language questions in a dialogue langle mathbf y1mathbf ynrangle are their corresponding SQL queries Each SQL query is conditioned on a multitable database schema and the databases used in test do not appear in training In this section we first present a base model without considering context Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods Finally we present how to augment the model with BERT BIBREF10 We employ the popularly used attentionbased sequencetosequence architecture BIBREF11 BIBREF12 to build our base model As shown in Figure FIGREF6 the base model consists of a question encoder and a grammarbased decoder For each question the encoder provides contextual representations while the decoder generates its corresponding SQL query according to a predefined grammar To capture contextual information within a question we apply Bidirectional Long ShortTerm Memory Neural Network BiLSTM as our question encoder BIBREF13 BIBREF14 Specifically at turn i firstly every token xik in mathbf xi is fed into a word embedding layer mathbf phi x to get its embedding representation mathbf phi xxik On top of the embedding representation the question encoder obtains a contextual representation mathbf hEikmathop mathbf hoverrightarrowEikmathbf hoverleftarrowEik where the forward hidden state is computed as following The decoder is grammarbased with attention on the input question BIBREF7 Different from producing a SQL query word by word our decoder outputs a sequence of grammar rule ie action Such a sequence has onetoone correspondence with the abstract syntax tree of the SQL query Taking the SQL query in Figure FIGREF6 as an example it is transformed to the action sequence langle rm scriptstyle Startrightarrow rm Root rm scriptstyle Rootrightarrow rm Select Order rm scriptstyle Selectrightarrow rm Agg rm scriptstyle Aggrightarrow rm max Col Tab rm scriptstyle Colrightarrow rm Id rm scriptstyle Tabrightarrow rm CARSDATA rm scriptstyle Orderrightarrow rm desc limit Agg rm scriptstyle Aggrightarrow rm none Col Tab rm scriptstyle Colrightarrow rm Horsepower rm scriptstyle Tabrightarrow rm CARSDATA rangle by lefttoright depthfirst traversing on the tree At each decoding step a nonterminal is expanded using one of its corresponding grammar rules The rules are either schemaspecific eg rm scriptstyle Colrightarrow rm Horsepower or schemaagnostic eg rm scriptstyle Startrightarrow rm Root More specifically as shown at the top of Figure FIGREF6 we make a little modification on rm Orderrelated rules upon the grammar proposed by BIBREF9 which has been proven to have better performance than vanilla SQL grammar Denoting mathbf LSTMoverrightarrowD the unidirectional LSTM used in the decoder at each decoding step j of turn i it takes the embedding of the previous generated grammar rule mathbf phi yyij1 indicated as the dash lines in Figure FIGREF6 and updates its hidden state as where mathbf cij1 is the context vector produced by attending on each encoder hidden state mathbf hEik in the previous step where mathbf We is a learned matrix mathbf 
hoverrightarrowDi0 is initialized by the final encoder hidden state mathbf hEimathbf xi while mathbf ci0 is a zerovector For each schemaagnostic grammar rule mathbf phi y returns a learned embedding For schemaspecific one the embedding is obtained by passing its schema ie table or column through another unidirectional LSTM namely schema encoder mathbf LSTMoverrightarrowS For example the embedding of rm scriptstyle Colrightarrow rm Id is As for the output yij if the expanded nonterminal corresponds to schemaagnostic grammar rules we can obtain the output probability of action gamma as where mathbf Wo is a learned matrix When it comes to schemaspecific grammar rules the main challenge is that the model may encounter schemas never appeared in training due to the crossdomain setting To deal with it we do not directly compute the similarity between the decoder hidden state and the schemaspecific grammar rule embedding Instead we first obtain the unnormalized linking score lxikgamma between the kth token in mathbf xi and the schema in action gamma It is computed by both handcraft features eg word exact match BIBREF15 and learned similarity ie dot product between word embedding and grammar rule embedding With the input question as bridge we reuse the attention score aik in Equation DISPLAYFORM8 to measure the probability of outputting a schemaspecific action gamma as To take advantage of the question context we provide the base model with recent h questions as additional input As shown in Figure FIGREF13 we summarize and generalize three ways to incorporate recent questions as context The method concatenates recent questions with the current question in order making the input of the question encoder be mathbf xihdots mathbf xi while the architecture of the base model remains the same We do not insert special delimiters between questions as there are punctuation marks A dialogue can be seen as a sequence of questions which in turn are sequences of words Considering such hierarchy BIBREF4 employed a turnlevel encoder ie an unidirectional LSTM to encode recent questions hierarchically At turn i the turnlevel encoder takes the previous question vector mathbf hoverleftarrowEi11mathbf hoverrightarrowEi1mathbf xi1 as input and updates its hidden state to mathbf hoverrightarrowTi Then mathbf hoverrightarrowTi is fed into mathbf LSTME as an implicit context Accordingly Equation DISPLAYFORM4 is rewritten as Similar to Concat BIBREF4 allowed the decoder to attend over all encoder hidden states To make the decoder distinguish hidden states from different turns they further proposed a relative distance embedding phi d in attention computing Taking the above into account Equation DISPLAYFORM8 is as where tin 0dots h represents the relative distance To jointly model the decoder attention in tokenlevel and questionlevel inspired by the advances of opendomain dialogue area BIBREF16 we propose a gate mechanism to automatically compute the importance of each question The importance is computed by where lbrace mathbf Vgmathbf Wgmathbf Ugrbrace are learned parameters and 0le tle h As done in Equation DISPLAYFORM17 except for the relative distance embedding the decoder of Gate also attends over all the encoder hidden states And the questionlevel importance bargit is employed as the coefficient of the attention scores at turn it Besides recent questions as mentioned in Section SECREF1 the precedent SQL can also be context As shown in Figure FIGREF27 the usage of mathbf yi1 requires a SQL encoder where we employ another 
BiLSTM to achieve it The mth contextual action representation at turn i1 mathbf hAi1m can be obtained by passing the action sequence through the SQL encoder Attention over mathbf yi1 is a straightforward method to incorporate the SQL context Given mathbf hAi1m we employ a similar manner as Equation DISPLAYFORM8 to compute attention score and thus obtain the SQL context vector This vector is employed as an additional input for decoder in Equation DISPLAYFORM7 To reuse the precedent generated SQL BIBREF5 presented a tokenlevel copy mechanism on their nongrammar based parser Inspired by them we propose an actionlevel copy mechanism suited for grammarbased decoding It enables the decoder to copy actions appearing in mathbf yi1 when the actions are compatible to the current expanded nonterminal As the copied actions lie in the same semantic space with the generated ones the output probability for action gamma is a mix of generating mathbf g and copying mathbf c The generating probability Pyijgamma mathbf g follows Equation DISPLAYFORM10 and DISPLAYFORM11 while the copying probability is where mathbf Wl is a learned matrix Denoting Pcopyij the probability of copying at decoding step j of turn i it can be obtained by sigma mathbf Wcmathbf hoverrightarrowDijmathbf bc where lbrace mathbf Wcmathbf bcrbrace are learned parameters and sigma is the sigmoid function The final probability Pyijgamma is computed by Besides the actionlevel copy we also introduce a treelevel copy mechanism As illustrated in Figure FIGREF27 treelevel copy mechanism enables the decoder to copy action subtrees extracted from mathbf yi1 which shrinks the number of decoding steps by a large margin Similar idea has been proposed in a nongrammar based decoder BIBREF4 In fact a subtree is an action sequence starting from specific nonterminals such as rm Select To give an example langle rm scriptstyle Selectrightarrow rm Agg rm scriptstyle Aggrightarrow rm max Col Tab rm scriptstyle Colrightarrow rm Id rm scriptstyle Tabrightarrow rm CARSDATA rangle makes up a subtree for the tree in Figure FIGREF6 For a subtree upsilon its representation phi tupsilon is the final hidden state of SQL encoder which encodes its corresponding action sequence Then we can obtain the output probability of subtree upsilon as where mathbf Wt is a learned matrix The output probabilities of subtrees are normalized together with Equation DISPLAYFORM10 and DISPLAYFORM11 We employ BERT BIBREF10 to augment our model via enhancing the embedding of questions and schemas We first concatenate the input question and all the schemas in a deterministic order with SEP as delimiter BIBREF17 For instance the input for Q1 in Figure FIGREF1 is What is id max horsepower SEP CARSNAMES SEP MakeId SEP Horsepower Feeding it into BERT we obtain the schemaaware question representations and questionaware schema representations These contextual representations are used to substitute phi x subsequently while other parts of the model remain the same We conduct experiments to study whether the introduced methods are able to effectively model context in the task of SPC Section SECREF36 and further perform a finegrained analysis on various contextual phenomena Section SECREF40 Two large complex crossdomain datasets are used SParC BIBREF2 consists of 3034 422 dialogues for train development and CoSQL BIBREF6 consists of 2164 292 ones The average turn numbers of SParC and CoSQL are 30 and 52 respectively We evaluate each predicted SQL query using exact set match accuracy BIBREF2 Based on it 
we consider three metrics Question Match QuesMatch the match accuracy over all questions Interaction Match IntMatch the match accuracy over all dialogues and Turn i Match the match accuracy over questions at turn i Our implementation is based on PyTorch BIBREF18 AllenNLP BIBREF19 and the library transformers BIBREF20 We adopt the Adam optimizer and set the learning rate as 1e3 on all modules except for BERT for which a learning rate of 1e5 is used BIBREF21 The dimensions of word embedding action embedding and distance embedding are 100 while the hidden state dimensions of question encoder grammarbased decoder turnlevel encoder and SQL encoder are 200 We initialize word embedding using Glove BIBREF22 for nonBERT models For methods which use recent h questions h is set as 5 on both datasets We consider three models as our baselines SyntaxSQLcon and CDSeq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2 SyntaxSQLcon employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model analogous to our Turn BIBREF23 while CDSeq2Seq is adapted from BIBREF4 for crossdomain settings analogous to our TurnTree Copy EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and tokenlevel copy analogous to our TurnSQL AttnAction Copy Taking Concat as a representative we compare the performance of our model with other models as shown in Table TABREF34 As illustrated our model outperforms baselines by a large margin with or without BERT achieving new SOTA performances on both datasets Compared with the previous SOTA without BERT on SParC our model improves QuesMatch and IntMatch by 106 and 54 points respectively To conduct a thorough comparison we evaluate 13 different context modeling methods upon the same parser including 6 methods introduced in Section SECREF2 and 7 selective combinations of them eg ConcatAction Copy The experimental results are presented in Figure FIGREF37 Taken as a whole it is very surprising to observe that none of these methods can be consistently superior to the others The experimental results on BERTbased models show the same trend Diving deep into the methods only using recent questions as context we observe that Concat and Turn perform competitively outperforming Gate by a large margin With respect to the methods only using precedent SQL as context Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics In addition we observe that there is little difference in the performance of Action Copy and Concat which implies that using precedent SQL as context gives almost the same effect with using recent questions In terms of the combinations of different context modeling methods they do not significantly improve the performance as we expected As mentioned in Section SECREF1 intuitively methods which only use the precedent SQL enjoys better generalizability To validate it we further conduct an outofdistribution experiment to assess the generalizability of different context modeling methods Concretely we select three representative methods and train them on questions at turn 1 and 2 whereas test them at turn 3 4 and beyond As shown in Figure FIGREF38 Action Copy has a consistently comparable or better performance validating the intuition Meanwhile Concat appears to be strikingly competitive demonstrating it also has a good generalizability Compared with them Turn is more vulnerable to outofdistribution questions In conclusion existing context modeling methods in the task of SPC are not as effective as expected 
since they do not show a significant advantage over the simple concatenation method By a careful investigation on contextual phenomena we summarize them in multiple hierarchies Roughly there are three kinds of contextual phenomena in questions semantically complete coreference and ellipsis Semantically complete means a question can reflect all the meaning of its corresponding SQL Coreference means a question contains pronouns while ellipsis means the question cannot reflect all of its SQL even if resolving its pronouns In the finegrained level coreference can be divided into 5 types according to its pronoun BIBREF1 Ellipsis can be characterized by its intention continuation and substitution Continuation is to augment extra semantics eg rm Filter and substitution refers to the situation where current question is intended to substitute particular semantics in the precedent question Substitution can be further branched into 4 types explicit vs implicit and schema vs operator Explicit means the current question provides contextual clues ie partial context overlaps with the precedent question to help locate the substitution target while implicit does not On most cases the target is schema or operator In order to study the effect of context modeling methods on various phenomena as shown in Table TABREF39 we take the development set of SParC as an example to perform our analysis The analysis begins by presenting QuesMatch of three representative models on above finegrained types in Figure FIGREF42 As shown though different methods have different strengths they all perform poorly on certain types which will be elaborated below Diving deep into the coreference left of Figure FIGREF42 we observe that all methods struggle with two finegrained types definite noun phrases and one anaphora Through our study we find the scope of antecedent is a key factor An antecedent is one or more entities referred by a pronoun Its scope is either whole where the antecedent is the precedent answer or partial where the antecedent is part of the precedent question The abovementioned finegrained types are more challenging as their partial proportion are nearly 40 while for demonstrative pronoun it is only 22 It is reasonable as partial requires complex inference on context Considering the 4th example in Table TABREF39 one refers to pets instead of age because the accompanying verb is weigh From this observation we draw the conclusion that current context modeling methods do not succeed on pronouns which require complex inference on context As for ellipsis right of Figure FIGREF42 we obtain three interesting findings by comparisons in three aspects The first finding is that all models have a better performance on continuation than substitution This is expected since there are redundant semantics in substitution while not in continuation Considering the 8th example in Table TABREF39 horsepower is a redundant semantic which may raise noise in SQL prediction The second finding comes from the unexpected drop from implicitsubstitution to explicitsubstitution Intuitively explicit should surpass implicit on substitution as it provides more contextual clues The finding demonstrates that contextual clues are obviously not well utilized by the context modeling methods Third compared with schemasubstitution operatorsubstitution achieves a comparable or better performance consistently We believe it is caused by the crossdomain setting which makes schema related substitution more difficult The most related work is the line of semantic 
parsing in context In the topic of SQL BIBREF24 proposed a contextindependent CCG parser and then applied it to do contextdependent substitution BIBREF3 applied a searchbased method for sequential questions and BIBREF4 provided the first sequencetosequence solution in the area More recently BIBREF5 presented a editbased method to reuse the precedent generated SQL With respect to other logic forms BIBREF25 focuses on understanding execution commands in context BIBREF26 on question answering over knowledge base in a conversation and BIBREF27 on code generation in environment context Our work is different from theirs as we perform an exploratory study not fulfilled by previous works There are also several related works that provided studies on context BIBREF17 explored the contextual representations in contextindependent semantic parsing and BIBREF28 studied how conversational agents use conversation history to generate response Different from them our task focuses on context modeling for semantic parsing Under the same task BIBREF1 summarized contextual phenomena in a coarsegrained level while BIBREF0 performed a wizardofoz experiment to study the most frequent phenomena What makes our work different from them is that we not only summarize contextual phenomena by finegrained types but also perform an analysis on context modeling methods This work conducts an exploratory study on semantic parsing in context to realize how far we are from effective context modeling Through a thorough comparison we find that existing context modeling methods are not as effective as expected A simple concatenation method can be much competitive Furthermore by performing a finegrained analysis we summarize two potential directions as our future work incorporating common sense for better pronouns inference and modeling contextual clues in a more explicit manner By opensourcing our code and materials we believe our work can facilitate the community to debug models in a finegrained level and make more progress </s>
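To make the action-level copy mechanism described in the preceding paper more tangible, here is a small PyTorch sketch. It is an illustration under assumptions rather than the authors' implementation: in particular, the way the generate and copy distributions are mixed by the sigmoid gate is assumed to be a standard gated mixture, since the exact equation is referenced but not reproduced above, and all module and tensor names are illustrative.

# Sketch of an action-level copy mechanism over the precedent SQL (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionLevelCopy(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.copy_scorer = nn.Linear(hidden_size, hidden_size, bias=False)  # scores precedent actions
        self.copy_gate = nn.Linear(hidden_size, 1)                          # P(copy) from decoder state

    def forward(self, dec_state, prev_action_states, gen_logits, prev_action_ids, vocab_size):
        # dec_state: [hidden]; prev_action_states: [m, hidden] encodings of the precedent SQL actions;
        # gen_logits: [vocab_size] generation scores; prev_action_ids: LongTensor [m] of action ids.
        copy_logits = prev_action_states @ self.copy_scorer(dec_state)       # [m]
        p_gen = F.softmax(gen_logits, dim=-1)                                # distribution if generating
        p_copy_src = F.softmax(copy_logits, dim=-1)                          # distribution over precedent actions
        # Scatter copy probabilities back onto the shared action vocabulary.
        p_copy = torch.zeros(vocab_size).scatter_add(0, prev_action_ids, p_copy_src)
        g = torch.sigmoid(self.copy_gate(dec_state)).squeeze(-1)             # probability of copying
        return g * p_copy + (1.0 - g) * p_gen                                # assumed gated mixture

copier = ActionLevelCopy(hidden_size=8)
dist = copier(torch.randn(8), torch.randn(5, 8), torch.randn(20),
              torch.randint(0, 20, (5,)), vocab_size=20)
print(dist.sum())  # close to 1.0: a valid distribution over the 20 actions

A tree-level variant would score encodings of whole action sub-trees in the same way, instead of single actions, which is what shortens the decoding sequence in the tree copy method described above.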
<s>[INST] A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment Analysis [/INST] Aspect based sentiment analysis ABSA is a finegrained task in sentiment analysis which can provide important sentiment information for other natural language processing NLP tasks There are two different subtasks in ABSA namely aspectcategory sentiment analysis and aspectterm sentiment analysis BIBREF0 BIBREF1 Aspectcategory sentiment analysis aims at predicting the sentiment polarity towards the given aspect which is in predefined several categories and it may not appear in the sentence For instance in Table TABREF2 the aspectcategory sentiment analysis is going to predict the sentiment polarity towards the aspect food which is not appeared in the sentence By contrast the goal of aspectterm sentiment analysis is to predict the sentiment polarity over the aspect term which is a subsequence of the sentence For instance the aspectterm sentiment analysis will predict the sentiment polarity towards the aspect term The appetizers which is a subsequence of the sentence Additionally the number of categories of the aspect term is more than one thousand in the training corpus As shown in Table TABREF2 sentiment polarity may be different when different aspects are considered Thus the given aspect term is crucial to ABSA tasks BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 Besides BIBREF7 show that not all words of a sentence are useful for the sentiment prediction towards a given aspect term For instance when the given aspect is the service the words appetizers and ok are irrelevant for the sentiment prediction Therefore an aspectindependent weakly associative encoder may encode such background words eg appetizers and ok into the final representation which may lead to an incorrect prediction Numerous existing models BIBREF8 BIBREF9 BIBREF10 BIBREF1 typically utilize an aspectindependent encoder to generate the sentence representation and then apply the attention mechanism BIBREF11 or gating mechanism to conduct feature selection and extraction while feature selection and extraction may base on noised representations In addition some models BIBREF12 BIBREF13 BIBREF14 simply concatenate the aspect embedding with each word embedding of the sentence and then leverage conventional Long ShortTerm Memories LSTMs BIBREF15 to generate the sentence representation However it is insufficient to exploit the given aspect and conduct potentially complex feature selection and extraction To address this issue we investigate a novel architecture to enhance the capability of feature selection and extraction with the guidance of the given aspect from scratch Based on the deep transition Gated Recurrent Unit GRU BIBREF16 BIBREF17 BIBREF18 BIBREF19 an aspectguided GRU encoder is thus proposed which utilizes the given aspect to guide the sentence encoding procedure at the very beginning stage In particular we specially design an aspectgate for the deep transition GRU to control the information flow of each token input with the aim of guiding feature selection and extraction from scratch ie sentence representation generation Furthermore we design an aspectoriented objective to enforce our model to reconstruct the given aspect with the sentence representation generated by the aspectguided encoder We name this AspectGuided Deep Transition model as AGDT With all the above contributions our AGDT can accurately generate an aspectspecific representation for a sentence and thus conduct more accurate sentiment predictions towards the given 
aspect. We evaluate the AGDT on multiple datasets of two subtasks in ABSA. Experimental results demonstrate the effectiveness of our proposed approach, and the AGDT significantly surpasses existing models with the same setting and achieves state-of-the-art performance among models without using additional features (e.g., BERT BIBREF20). Moreover, we also provide empirical and visualization analysis to reveal the advantages of our model. Our contributions can be summarized as follows: (1) we propose an aspect-guided encoder, which utilizes the given aspect to guide the encoding of a sentence from scratch, in order to conduct the aspect-specific feature selection and extraction at the very beginning stage; (2) we propose an aspect-reconstruction approach to further guarantee that the aspect-specific information has been fully embedded into the sentence representation; (3) our AGDT substantially outperforms previous systems with the same setting and achieves state-of-the-art results on benchmark datasets compared to those models without leveraging additional features (e.g., BERT).

As shown in Figure FIGREF6, the AGDT model mainly consists of three parts: the aspect-guided encoder, the aspect-reconstruction, and the aspect concatenated embedding. The aspect-guided encoder is specially designed to guide the encoding of a sentence from scratch, conducting the aspect-specific feature selection and extraction at the very beginning stage. The aspect-reconstruction aims to guarantee that the aspect-specific information has been fully embedded in the sentence representation for more accurate predictions. The aspect concatenated embedding part is used to concatenate the aspect embedding and the generated sentence representation so as to make the final prediction.

The aspect-guided encoder is the core module of AGDT, which consists of two key components: the Aspect-guided GRU and the Transition GRU BIBREF16. AGRU (Aspect-guided GRU): AGRU is a specially designed unit for the ABSA tasks, which is an extension of the L-GRU proposed by BIBREF19. In particular, we design an aspect-gate to select aspect-specific representations by controlling the transformation scale of token embeddings at each time step. At time step $t$, the hidden state $\mathbf{h}_t$ is computed as follows, where $\odot$ represents element-wise product, $\mathbf{z}_t$ is the update gate BIBREF16, and $\tilde{\mathbf{h}}_t$ is the candidate activation, which is computed as follows, where $\mathbf{g}_t$ denotes the aspect-gate, $\mathbf{x}_t$ represents the input word embedding at time step $t$, $\mathbf{r}_t$ is the reset gate BIBREF16, $\mathbf{H}_1(\mathbf{x}_t)$ and $\mathbf{H}_2(\mathbf{x}_t)$ are linear transformations of the input $\mathbf{x}_t$, and $\mathbf{l}_t$ is the linear transformation gate for $\mathbf{x}_t$ BIBREF19. $\mathbf{r}_t$, $\mathbf{z}_t$, $\mathbf{l}_t$, $\mathbf{g}_t$, $\mathbf{H}_1(\mathbf{x}_t)$ and $\mathbf{H}_2(\mathbf{x}_t)$ are computed as follows, where $\mathbf{a}$ denotes the embedding of the given aspect, which is the same at each time step. The update gate $\mathbf{z}_t$ and reset gate $\mathbf{r}_t$ are the same as those in the conventional GRU. In these gate equations (Eq. DISPLAYFORM9 onward), the aspect-gate $\mathbf{g}_t$ controls both the non-linear and the linear transformation of the input $\mathbf{x}_t$ under the guidance of the given aspect at each time step. Besides, we also exploit a linear transformation gate $\mathbf{l}_t$ to control the linear transformation of the input according to the current input $\mathbf{x}_t$ and the previous hidden state $\mathbf{h}_{t-1}$, which has been proved powerful in the deep transition architecture BIBREF19. As a consequence, AGRU can control both the non-linear transformation and the linear transformation of the input $\mathbf{x}_t$ at each time step, with the guidance of the
given aspect ie AGRU can guide the encoding of aspectspecific features and block the aspectirrelevant information at the very beginning stage TGRU Transition GRU TGRU BIBREF17 is a crucial component of deep transition block which is a special case of GRU with only state as an input namely its input embedding is zero embedding As in Figure FIGREF6 a deep transition block consists of an AGRU followed by several TGRUs at each time step For the current time step t the output of one AGRUTGRU is fed into the next TGRU as the input The output of the last TGRU at time step t is fed into AGRU at the time step t1 For a TGRU each hidden state at both time step t and transition depth i is computed as where the update gate mathbf zti and the reset gate mathbf rti are computed as The AGDT encoder is based on deep transition cells where each cell is composed of one AGRU at the bottom followed by several TGRUs Such AGDT model can encode the sentence representation with the guidance of aspect information by utilizing the specially designed architecture We propose an aspectreconstruction approach to guarantee the aspectspecific information has been fully embedded in the sentence representation Particularly we devise two objectives for two subtasks in ABSA respectively In terms of aspectcategory sentiment analysis datasets there are only several predefined aspect categories While in aspectterm sentiment analysis datasets the number of categories of term is more than one thousand In a reallife scenario the number of term is infinite while the words that make up terms are limited Thus we design different lossfunctions for these two scenarios For the aspectcategory sentiment analysis task we aim to reconstruct the aspect according to the aspectspecific representation It is a multiclass problem We take the softmax crossentropy as the loss function where C1 is the number of predefined aspects in the training example yic is the groundtruth and pic is the estimated probability of a aspect For the aspectterm sentiment analysis task we intend to reconstruct the aspect term may consist of multiple words according to the aspectspecific representation It is a multilabel problem and thus the sigmoid crossentropy is applied where C2 denotes the number of words that constitute all terms in the training example yit is the groundtruth and pit represents the predicted value of a word Our aspectoriented objective consists of mathcal Lc and mathcal Lt which guarantee that the aspectspecific information has been fully embedded into the sentence representation The final loss function is as follows where the underlined part denotes the conventional loss function C is the number of sentiment labels yi is the groundtruth and pi represents the estimated probability of the sentiment label mathcal L is the aspectoriented objective where Eq DISPLAYFORM14 is for the aspectcategory sentiment analysis task and Eq DISPLAYFORM15 is for the aspectterm sentiment analysis task And lambda is the weight of mathcal L As shown in Figure FIGREF6 we employ the aspect reconstruction approach to reconstruct the aspect term where softmax is for the aspectcategory sentiment analysis task and sigmoid is for the aspectterm sentiment analysis task Additionally we concatenate the aspect embedding on the aspectguided sentence representation to predict the sentiment polarity Under that loss function Eq DISPLAYFORM17 the AGDT can produce aspectspecific sentence representations We conduct experiments on two datasets of the aspectcategory based task and two 
datasets of the aspectterm based task For these four datasets we name the full dataset as DS In each DS there are some sentences like the example in Table TABREF2 containing different sentiment labels each of which associates with an aspect term For instance Table TABREF2 shows the customers different attitude towards two aspects food The appetizers and service In order to measure whether a model can detect different sentiment polarities in one sentence towards different aspects we extract a hard dataset from each DS named HDS in which each sentence only has different sentiment labels associated with different aspects When processing the original sentence s that has multiple aspects a1a2an and corresponding sentiment labels l1l2ln n is the number of aspects or terms in a sentence the sentence will be expanded into s a1 l1 s a2 l2 s an ln in each dataset BIBREF21 BIBREF22 BIBREF1 ie there will be n duplicated sentences associated with different aspects and labels For comparison we follow BIBREF1 and use the restaurant reviews dataset of SemEval 2014 restaurant14 Task 4 BIBREF0 to evaluate our AGDT model The dataset contains five predefined aspects and four sentiment labels A large dataset restaurantlarge involves restaurant reviews of three years ie 2014 sim 2016 BIBREF0 There are eight predefined aspects and three labels in that dataset When creating the restaurantlarge dataset we follow the same procedure as in BIBREF1 Statistics of datasets are shown in Table TABREF19 We use the restaurant and laptop review datasets of SemEval 2014 Task 4 BIBREF0 to evaluate our model Both datasets contain four sentiment labels Meanwhile we also conduct a threeclass experiment in order to compare with some work BIBREF13 BIBREF3 BIBREF7 which removed conflict labels Statistics of both datasets are shown in Table TABREF20 The evaluation metrics are accuracy All instances are shown in Table TABREF19 and Table TABREF20 Each experiment is repeated five times The mean and the standard deviation are reported We use the pretrained 300d Glove embeddings BIBREF23 to initialize word embeddings which is fixed in all models For outofvocabulary words we randomly sample their embeddings by the uniform distribution U025 025 Following BIBREF8 BIBREF24 BIBREF25 we take the averaged word embedding as the aspect representation for multiword aspect terms The transition depth of deep transition model is 4 see Section SECREF30 The hidden size is set to 300 We set the dropout rate BIBREF26 to 05 for input token embeddings and 03 for hidden states All models are optimized using Adam optimizer BIBREF27 with gradient clipping equals to 5 BIBREF28 The initial learning rate is set to 001 and the batch size is set to 4096 at the token level The weight of the reconstruction loss lambda in Eq DISPLAYFORM17 is finetuned see Section SECREF30 and respectively set to 04 04 02 and 05 for four datasets To comprehensively evaluate our AGDT we compare the AGDT with several competitive models ATAELSTM It is an attentionbased LSTM model It appends the given aspect embedding with each word embedding and then the concatenated embedding is taken as the input of LSTM The output of LSTM is appended aspect embedding again Furthermore attention is applied to extract features for final predictions CNN This model focuses on extracting ngram features to generate sentence representation for the sentiment classification TDLSTM This model uses two LSTMs to capture the left and right context of the term to generate targetdependent representations for the 
sentiment prediction IAN This model employs two LSTMs and interactive attention mechanism to learn representations of the sentence and the aspect and concatenates them for the sentiment prediction RAM This model applies multiple attentions and memory networks to produce the sentence representation GCAE It uses CNNs to extract features and then employs two Gated TanhRelu units to selectively output the sentiment information flow towards the aspect for predicting sentiment labels We present the overall performance of our model and baseline models in Table TABREF27 Results show that our AGDT outperforms all baseline models on both restaurant14 and restaurantlarge datasets ATAELSTM employs an aspectweakly associative encoder to generate the aspectspecific sentence representation by simply concatenating the aspect which is insufficient to exploit the given aspect Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect the information flow is generated by an aspectindependent encoder Compared with GCAE our AGDT improves the performance by 24 and 16 in the DS part of the two dataset respectively These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspectguided sentence representation and thus conduct accurate sentiment prediction Our model benefits from the following aspects First our AGDT utilizes an aspectguided encoder which leverages the given aspect to guide the sentence encoding from scratch and generates the aspectguided representation Second the AGDT guarantees that the aspectspecific information has been fully embedded in the sentence representation via reconstructing the given aspect Third the given aspect embedding is concatenated on the aspectguided sentence representation for final predictions The HDS which is designed to measure whether a model can detect different sentiment polarities in a sentence consists of replicated sentences with different sentiments towards multiple aspects Our AGDT surpasses GCAE by a very large margin 114 and 49 respectively on both datasets This indicates that the given aspect information is very pivotal to the accurate sentiment prediction especially when the sentence has different sentiment labels which is consistent with existing work BIBREF2 BIBREF3 BIBREF4 Those results demonstrate the effectiveness of our model and suggest that our AGDT has better ability to distinguish the different sentiments of multiple aspects compared to GCAE As shown in Table TABREF28 our AGDT consistently outperforms all compared methods on both domains In this task TDLSTM and ATAELSTM use a aspectweakly associative encoder IAN RAM and GCAE employ an aspectindependent encoder In the DS part our AGDT model surpasses all baseline models which shows that the inclusion of AGRU aspectguided encoder aspectreconstruction and aspect concatenated embedding has an overall positive impact on the classification process In the HDS part the AGDT model obtains 36 higher accuracy than GCAE on the restaurant domain and 42 higher accuracy on the laptop domain which shows that our AGDT has stronger ability for the multisentiment problem against GCAE These results further demonstrate that our model works well across tasks and datasets We conduct ablation experiments to investigate the impacts of each part in AGDT where the GRU is stacked with 4 layers Here AC represents aspect concatenated embedding AG stands for AGRU Eq DISPLAYFORM8 sim and AR denotes the aspectreconstruction Eq 
DISPLAYFORM14 sim DISPLAYFORM17 From Table TABREF31 and Table TABREF32 we can conclude Deep Transition DT achieves superior performances than GRU which is consistent with previous work BIBREF18 BIBREF19 2 vs 1 Utilizing AG to guide encoding aspectrelated features from scratch has a significant impact for highly competitive results and particularly in the HDS part which demonstrates that it has the stronger ability to identify different sentiment polarities towards different aspects 3 vs 2 Aspect concatenated embedding can promote the accuracy to a degree 4 vs 3 The aspectreconstruction approach AR substantially improves the performance especially in the HDS part 5 vs 4 the results in 6 show that all modules have an overall positive impact on the sentiment classification We have demonstrated the effectiveness of the AGDT Here we investigate the impact of model depth of AGDT varying the depth from 1 to 6 Table TABREF39 shows the change of accuracy on the test sets as depth increases We find that the best results can be obtained when the depth is equal to 4 at most case and further depth do not provide considerable performance improvement Here we investigate how well the AGDT can reconstruct the aspect information For the aspectterm reconstruction we count the construction is correct when all words of the term are reconstructed Table TABREF40 shows all results on four test datasets which shows the effectiveness of aspectreconstruction approach again We randomly sample a temporary development set from the HDS part of the training set to choose the lambda for each dataset And we investigate the impact of lambda for aspectoriented objectives Specifically lambda is increased from 01 to 10 Figure FIGREF33 illustrates all results on four HDS datasets which show that reconstructing the given aspect can enhance aspectspecific sentiment features and thus obtain better performances We also conduct a threeclass experiment to compare our AGDT with previous models ie IARM TNet VAE PBAN AOA and MGAN in Table TABREF41 These previous models are based on an aspectindependent weakly associative encoder to generate sentence representations Results on all domains suggest that our AGDT substantially outperforms most competitive models except for the TNet on the laptop dataset The reason may be TNet incorporates additional features eg position features local ngrams and wordlevel features compared to ours only wordlevel features To give an intuitive understanding of how the proposed AGRU works from scratch with different aspects we take a review sentence as an example As the example the appetizers are ok but the service is slow shown in Table TABREF2 it has different sentiment labels towards different aspects The color depth denotes the semantic relatedness level between the given aspect and each word More depth means stronger relation to the given aspect Figure FIGREF43 shows that the AGRU can effectively guide encoding the aspectrelated features with the given aspect and identify corresponding sentiment In another case overpriced Japanese food with mediocre service there are two extremely strong sentiment words As the above of Figure FIGREF44 shows our AGRU generates almost the same weight to the word overpriced and mediocre The bottom of Figure FIGREF44 shows that reconstructing the given aspect can effectively enhance aspectspecific sentiment features and produce correct sentiment predictions We further investigate the errors from AGDT which can be roughly divided into 3 types 1 The decision boundary among the 
We further investigate the errors made by the AGDT, which can be roughly divided into three types. (1) The decision boundary between sentiment polarities is unclear; even the annotators cannot be sure of the sentiment orientation towards the given aspect in the sentence. (2) Conflict/neutral instances are very easily misclassified as positive or negative, due to the imbalanced label distribution in the training corpus. (3) The polarity of complex instances is hard to predict, for example sentences that express subtle emotions, which are hard to capture effectively, or sentences containing negation words (e.g., "never", "less" and "not"), which easily flip the sentiment polarity.

There are several kinds of sentiment analysis tasks, such as document-level BIBREF34, sentence-level BIBREF35, BIBREF36, aspect-level BIBREF0, BIBREF37 and multimodal BIBREF38, BIBREF39 sentiment analysis. For aspect-level sentiment analysis, previous work typically applies attention mechanisms BIBREF11, combined with memory networks BIBREF40 or gating units, to solve this task BIBREF8, BIBREF41, BIBREF42, BIBREF1, BIBREF43, BIBREF44, BIBREF45, BIBREF46, where an aspect-independent encoder is used to generate the sentence representation. In addition, some work leverages an aspect-weakly-associative encoder to generate aspect-specific sentence representations BIBREF12, BIBREF13, BIBREF14. All of these methods make insufficient use of the given aspect information. There is also work that jointly extracts the aspect term and the opinion term and predicts the sentiment polarity BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, BIBREF54, BIBREF55. In this paper we focus on the sentiment classification problem and leave aspect extraction BIBREF56 to future work. Some work BIBREF57, BIBREF58, BIBREF59, BIBREF30, BIBREF60, BIBREF51 employs the well-known BERT BIBREF20 or document-level corpora to enhance ABSA tasks; this will be considered in our future work to further improve performance.

Deep transition has demonstrated its superiority in language modeling BIBREF17 and machine translation BIBREF18, BIBREF19. We follow the deep transition architecture in BIBREF19 and extend it by incorporating a novel A-GRU for ABSA tasks.

In this paper, we propose a novel aspect-guided encoder (AGDT) for ABSA tasks, based on a deep transition architecture. Our AGDT guides the sentence encoding from scratch for aspect-specific feature selection and extraction. Furthermore, we design an aspect-reconstruction approach to enforce the AGDT to reconstruct the given aspect from the generated sentence representation. Empirical studies on four datasets suggest that the AGDT substantially outperforms existing state-of-the-art models on both the aspect-category sentiment analysis task and the aspect-term sentiment analysis task of ABSA, without additional features.

We sincerely thank the anonymous reviewers for their thorough reviewing and insightful suggestions. Liang, Xu and Chen are supported by the National Natural Science Foundation of China (Contracts 61370130, 61976015, 61473294 and 61876198), the Beijing Municipal Natural Science Foundation (Contract 4172047) and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010).
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization

Automatic document summarization is the task of rewriting a document into a shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see NenkovaMcKeown2011 for an overview). The two most popular among them are extractive approaches and abstractive approaches. As the names imply, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases that are not in the original document.

Extractive summarization is usually modeled as a sentence ranking problem with length constraints (e.g., a maximum number of words or sentences); top-ranked sentences under the constraints are selected as the summary. Early attempts mostly leverage manually engineered features BIBREF1, and based on these sparse features, sentences are selected using a classifier or a regression model. Later, the feature engineering part of this paradigm was replaced with neural networks. cheng2016acl propose a hierarchical long short-term memory network (LSTM) BIBREF2 to encode a document and then use another LSTM to predict binary labels for each sentence in the document. This architecture has been widely adopted recently BIBREF3, BIBREF4, BIBREF5. Our model also employs a hierarchical document encoder, but we adopt a hierarchical Transformer BIBREF6 rather than a hierarchical LSTM, because recent studies BIBREF6, BIBREF0 show that the Transformer model performs better than the LSTM on many tasks.

Abstractive models did not attract much attention until recently. They are mostly based on sequence-to-sequence (seq2seq) models BIBREF7, where a document is viewed as a sequence and its summary is viewed as another sequence. Although seq2seq-based summarizers can be equipped with a copy mechanism BIBREF8, BIBREF9, a coverage model BIBREF9 and reinforcement learning BIBREF10, there is still no guarantee that the generated summaries are grammatical and convey the same meaning as the original document. It seems that extractive models are more reliable than their abstractive counterparts.

However, extractive models require sentence-level labels, which are usually not included in most summarization datasets (most datasets only contain document–summary pairs). Sentence labels are usually obtained by rule-based methods (e.g., maximizing the ROUGE score between a set of sentences and the reference summaries; a hedged sketch of one such heuristic is given below) and may therefore not be accurate. Extractive models proposed recently BIBREF11, BIBREF3 employ hierarchical document encoders and even have neural decoders, which are complex. Training such complex neural models with inaccurate binary labels is challenging.
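The following is a minimal Python sketch of the kind of greedy, ROUGE-style labeling heuristic referred to above. The unigram-overlap F1 used in place of a full ROUGE implementation, the function names and the sentence budget are illustrative assumptions, not the procedure of any particular paper.

```python
from typing import List

def unigram_f1(candidate: List[str], reference: List[str]) -> float:
    """Crude ROUGE-1-like F1 between candidate tokens and reference tokens."""
    if not candidate or not reference:
        return 0.0
    cand, ref = set(candidate), set(reference)
    overlap = len(cand & ref)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

def greedy_sentence_labels(doc_sents: List[List[str]],
                           summary_tokens: List[str],
                           max_sents: int = 3) -> List[int]:
    """Greedily pick sentences that improve overlap with the reference summary
    and return 0/1 labels per sentence (an approximate extractive oracle)."""
    selected: List[int] = []
    best_score = 0.0
    while len(selected) < max_sents:
        best_idx, best_gain = None, 0.0
        for i in range(len(doc_sents)):
            if i in selected:
                continue
            candidate = [tok for j in selected + [i] for tok in doc_sents[j]]
            score = unigram_f1(candidate, summary_tokens)
            if score - best_score > best_gain:
                best_idx, best_gain = i, score - best_score
        if best_idx is None:  # no remaining sentence improves the score
            break
        selected.append(best_idx)
        best_score += best_gain
    return [1 if i in selected else 0 for i in range(len(doc_sents))]

# Toy usage
doc = [["the", "cat", "sat"], ["stocks", "fell", "today"], ["it", "rained"]]
summary = ["stocks", "fell", "sharply", "today"]
print(greedy_sentence_labels(doc, summary))  # e.g. [0, 1, 0]
```

Because the greedy search stops as soon as no sentence improves the score, the resulting binary labels are only an approximation of the best extractive summary, which is exactly why such labels can be noisy.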
We observed in our initial experiments on one of our datasets that our extractive model (see Section Extractive Summarization for details) overfits to the training set quickly after the second epoch, which indicates that the training set may not be fully utilized. Inspired by the recent pre-training work in natural language processing BIBREF12, BIBREF13, BIBREF0, our solution to this problem is to first pre-train the complex part (i.e., the hierarchical encoder) of the extractive model on unlabeled data, and then to learn to classify sentences with our model initialized from the pre-trained encoder. In this paper we propose HIBERT, which stands for HIerarchical Bidirectional Encoder Representations from Transformers. We design an unsupervised method to pre-train HIBERT for document modeling. We apply the pre-trained HIBERT to the task of document summarization and achieve state-of-the-art performance on both the CNN/Dailymail and New York Times datasets.

In this section, we introduce work on extractive summarization, abstractive summarization and pre-trained natural language processing models. For a more comprehensive review of summarization, we refer the interested readers to NenkovaMcKeown2011 and Mani01.

In this section, we present our model, HIBERT. We first introduce how documents are represented in HIBERT, then describe our method to pre-train HIBERT, and finally move on to the application of HIBERT to summarization.

Let $\mathcal{D} = (S_1, S_2, \dots, S_{|\mathcal{D}|})$ denote a document, where $S_i = (w_1^i, w_2^i, \dots, w_{|S_i|}^i)$ is a sentence in $\mathcal{D}$ and $w_j^i$ a word in $S_i$. Note that, following common practice in the natural language processing literature, $w_{|S_i|}^i$ is an artificial EOS (End Of Sentence) token. To obtain the representation of $\mathcal{D}$, we use two encoders: a sentence encoder to transform each sentence in $\mathcal{D}$ into a vector, and a document encoder to learn sentence representations given their surrounding sentences as context. Both the sentence encoder and the document encoder are based on the Transformer encoder described in vaswani2017nips. As shown in Figure 1, they are nested in a hierarchical fashion. A Transformer encoder usually has multiple layers, and each layer is composed of a multi-head self-attentive sub-layer followed by a feed-forward sub-layer, with residual connections BIBREF30 and layer normalizations BIBREF31. For more details of the Transformer encoder, we refer the interested readers to vaswani2017nips.

To learn the representation of $S_i$, $S_i = (w_1^i, w_2^i, \dots, w_{|S_i|}^i)$ is first mapped into continuous space:
\begin{split}
\mathbf{E}_i &= (\mathbf{e}_1^i, \mathbf{e}_2^i, \dots, \mathbf{e}_{|S_i|}^i) \\
&\qquad \text{where} \quad \mathbf{e}_j^i = e(w_j^i) + \mathbf{p}_j
\end{split}
\qquad \text{(Eq. 6)}
where $e(w_j^i)$ and $\mathbf{p}_j$ are the word and positional embeddings of $w_j^i$, respectively. The word embedding matrix is randomly initialized, and we adopt the sine–cosine positional embedding BIBREF6. Then the sentence encoder (a Transformer) transforms $\mathbf{E}_i$ into a list of hidden representations $(\mathbf{h}_1^i, \mathbf{h}_2^i, \dots, \mathbf{h}_{|S_i|}^i)$. We take the last hidden representation $\mathbf{h}_{|S_i|}^i$ (i.e., the representation at the EOS token) as the representation of sentence $S_i$. Similar to the representation of each word in $S_i$, we also take the sentence position into account. The final representation of $S_i$ is $\hat{\mathbf{h}}_i = \mathbf{h}_{|S_i|}^i + \mathbf{p}_i$ (Eq. 8). Note that words and sentences share the same positional embedding matrix.

In analogy to the sentence encoder, as shown in Figure 1, the document encoder is yet another Transformer, but it applies on the sentence level. After running the Transformer on the sequence of sentence representations $(\hat{\mathbf{h}}_1, \hat{\mathbf{h}}_2, \dots, \hat{\mathbf{h}}_{|\mathcal{D}|})$, we obtain the context-sensitive sentence representations $(\mathbf{d}_1, \mathbf{d}_2, \dots, \mathbf{d}_{|\mathcal{D}|})$. Now we have finished the encoding of a document with a hierarchical bidirectional Transformer encoder, HIBERT. Note that in previous work, document representations are also learned with hierarchical models, but each hierarchy is a Recurrent Neural Network BIBREF3, BIBREF21 or a Convolutional Neural Network BIBREF11. We choose the Transformer because it outperforms CNNs and RNNs in machine translation BIBREF6, semantic role labeling BIBREF32 and other NLP tasks BIBREF0. In the next section, we will introduce how we train HIBERT with an unsupervised training objective.
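To summarize the encoding just described, the following is a minimal PyTorch sketch of a HIBERT-style hierarchical encoder: a sentence-level Transformer produces one vector per sentence (the hidden state at the EOS position plus a sentence positional embedding), and a document-level Transformer then contextualizes those vectors. The layer sizes, the learned (rather than sine–cosine) positional embeddings and the absence of padding/masking are simplifying assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HierarchicalEncoderSketch(nn.Module):
    """Minimal sketch of a HIBERT-style hierarchical encoder (assumptions:
    small dimensions, learned positional embeddings, no padding masks)."""
    def __init__(self, vocab_size: int, d_model: int = 128,
                 nhead: int = 4, num_layers: int = 2, max_pos: int = 512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_pos, d_model)  # shared by words and sentences
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_layers)
        doc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers)

    def forward(self, doc: torch.Tensor) -> torch.Tensor:
        # doc: (num_sents, sent_len) word ids; the last token of each sentence is EOS.
        num_sents, sent_len = doc.shape
        word_pos = torch.arange(sent_len, device=doc.device)
        e = self.word_emb(doc) + self.pos_emb(word_pos)   # cf. Eq. (6)
        h = self.sent_encoder(e)                          # (num_sents, sent_len, d)
        sent_vecs = h[:, -1, :]                           # EOS representation per sentence
        sent_pos = torch.arange(num_sents, device=doc.device)
        sent_vecs = sent_vecs + self.pos_emb(sent_pos)    # cf. Eq. (8)
        d = self.doc_encoder(sent_vecs.unsqueeze(0))      # context-sensitive sentence vectors
        return d.squeeze(0)                               # (num_sents, d_model)

# Toy usage: a 3-sentence "document" with 5 word ids per sentence.
enc = HierarchicalEncoderSketch(vocab_size=100)
doc = torch.randint(0, 100, (3, 5))
print(enc(doc).shape)  # torch.Size([3, 128])
```

The returned vectors play the role of $(\mathbf{d}_1, \dots, \mathbf{d}_{|\mathcal{D}|})$: one context-sensitive representation per sentence, which downstream components (the pre-training decoder or the extractive classifier) can consume.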
Most recent encoding neural models used in NLP (e.g., RNNs, CNNs or Transformers) can be pre-trained by predicting a word in a sentence (or a text span) using other words within the same sentence or span. For example, ELMo BIBREF12 and OpenAI-GPT BIBREF13 predict a word using all words on its left (or right), while word2vec BIBREF33 predicts one word from its surrounding words in a fixed window, and BERT BIBREF0 predicts masked (missing) words in a sentence given all the other words. All the models above learn the representation of a sentence, whose basic units are words. HIBERT aims to learn the representation of a document, whose basic units are sentences. Therefore, a natural way of pre-training a document-level model (e.g., HIBERT) is to predict a sentence (or sentences) instead of a word (or words). We could predict a sentence in a document using all the sentences on its left (or right), as in a document-level language model. However, in summarization, context in both directions is available. We therefore opt to predict a sentence using all sentences on both its left and its right.

Specifically, suppose $\mathcal{D} = (S_1, S_2, \dots, S_{|\mathcal{D}|})$ is a document, where $S_i = (w_1^i, w_2^i, \dots, w_{|S_i|}^i)$ is a sentence in it. We randomly select 15% of the sentences in $\mathcal{D}$ and mask them; we then predict these masked sentences. The prediction task here is similar to the Cloze task BIBREF34, BIBREF0, but the missing part is a sentence. However, during test time the input document is not masked; to make our model adapt to documents without masks, we do not always mask the selected sentences. Once a sentence is selected as one of the 15% selected (masked) sentences, we transform it with one of the three methods below (a minimal sketch of this transformation is given after the example). We will use an example to demonstrate the transformation. For instance, we have the following document, and the second sentence is selected: "William Shakespeare is a poet. He died in 1616. He is regarded as the greatest writer."

In 80% of the cases, we mask the selected sentence, i.e., we replace each word in the sentence with a mask token MASK. The document above becomes "William Shakespeare is a poet. MASK MASK MASK MASK MASK He is regarded as the greatest writer.", where "He died in 1616." is masked.

In 10% of the cases, we keep the selected sentence as it is. This strategy is to simulate the input document during test time (with no masked sentences).

In the remaining 10% of the cases, we replace the selected sentence with a random sentence. In this case, the document after transformation is "William Shakespeare is a poet. Birds can fly. He is regarded as the greatest writer."; the second sentence is replaced with "Birds can fly." This strategy intends to add some noise during training and make the model more robust.
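As referenced above, here is a minimal Python sketch of the sentence selection and 80/10/10 transformation. The tokenization, the MASK string and the source of random replacement sentences are illustrative assumptions.

```python
import random
from typing import List, Tuple

MASK = "MASK"

def mask_document(doc: List[List[str]],
                  random_pool: List[List[str]],
                  select_prob: float = 0.15,
                  seed: int = 0) -> Tuple[List[List[str]], List[int]]:
    """Return (masked_doc, selected_indices) following an 80/10/10 scheme:
    mask every word / keep the sentence / swap in a random sentence."""
    rng = random.Random(seed)
    masked_doc, selected = [], []
    for i, sent in enumerate(doc):
        if rng.random() >= select_prob:
            masked_doc.append(sent)       # not selected: keep as-is
            continue
        selected.append(i)                # this sentence becomes a prediction target
        r = rng.random()
        if r < 0.8:                       # 80%: replace each word with MASK
            masked_doc.append([MASK] * len(sent))
        elif r < 0.9:                     # 10%: keep the sentence unchanged
            masked_doc.append(sent)
        else:                             # 10%: substitute a random sentence
            masked_doc.append(rng.choice(random_pool))
    return masked_doc, selected

# Toy usage with the Shakespeare example.
doc = [["William", "Shakespeare", "is", "a", "poet", "."],
       ["He", "died", "in", "1616", "."],
       ["He", "is", "regarded", "as", "the", "greatest", "writer", "."]]
pool = [["Birds", "can", "fly", "."]]
print(mask_document(doc, pool, seed=3))
```

The returned `selected` indices correspond to the set $\mathcal{K}$ introduced next: the model is asked to reconstruct exactly those sentences, whatever transformation they received.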
After the application of the above procedures to a document $\mathcal{D} = (S_1, S_2, \dots, S_{|\mathcal{D}|})$, we obtain the masked document $\widetilde{\mathcal{D}} = (\tilde{S}_1, \tilde{S}_2, \dots, \tilde{S}_{|\mathcal{D}|})$. Let $\mathcal{K}$ denote the set of indices of the selected sentences in $\mathcal{D}$. Now we are ready to predict the masked sentences $\mathcal{M} = \lbrace S_k \mid k \in \mathcal{K} \rbrace$ using $\widetilde{\mathcal{D}}$. We first apply the hierarchical encoder HIBERT described above to $\widetilde{\mathcal{D}}$ and obtain its context-sensitive sentence representations $(\tilde{\mathbf{d}}_1, \tilde{\mathbf{d}}_2, \dots, \tilde{\mathbf{d}}_{|\mathcal{D}|})$. We now demonstrate how we predict a masked sentence $S_k = (w_0^k, w_1^k, w_2^k, \dots, w_{|S_k|}^k)$ one word per step ($w_0^k$ is an artificially added BOS token). At the $j$-th step, we predict $w_j^k$ given $(w_0^k, \dots, w_{j-1}^k)$ and $\widetilde{\mathcal{D}}$; $\tilde{\mathbf{d}}_k$ already encodes the information of $\widetilde{\mathcal{D}}$ with a focus around its $k$-th sentence $\tilde{S}_k$. As shown in Figure 1, we employ a Transformer decoder BIBREF6 to predict $w_j^k$, with $\tilde{\mathbf{d}}_k$ as its additional input. The Transformer decoder we use here is slightly different from the original one: the original decoder employs two multi-head attention layers to include both the encoder context and the decoder context, while we only need one, to learn the decoder context, since the context from the encoder is a single vector (i.e., $\tilde{\mathbf{d}}_k$). Specifically, after applying the word and positional embeddings to $(w_0^k, w_1^k, \dots, w_{j-1}^k)$, we obtain $\widetilde{\mathbf{E}}^k_{1:j-1}$ (also see Eq. 6). Then we apply the multi-head attention sub-layer to $\widetilde{\mathbf{E}}^k_{1:j-1}$:

$\tilde{\mathbf{h}}_{j-1} = \mathrm{MultiHead}(\mathbf{q}_{j-1}, \mathbf{K}_{j-1}, \mathbf{V}_{j-1})$
$\mathbf{q}_{j-1} = \mathbf{W}^{Q}\, \tilde{\mathbf{e}}_{j-1}^{k}$
$\mathbf{K}_{j-1} = \mathbf{W}^{K}\, \widetilde{\mathbf{E}}^{k}_{1:j-1}$
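For concreteness, here is a small PyTorch sketch of this attention step, in which the query is derived from the embedding at the current position and the keys (and, by analogy, the values) are derived from the embeddings of the whole prefix. The single-head formulation, the dimensions and the value projection $\mathbf{W}^{V}$ are assumptions; the value projection does not appear in the equations above.

```python
import math
import torch

def prefix_attention_step(e_prefix: torch.Tensor,
                          Wq: torch.Tensor, Wk: torch.Tensor,
                          Wv: torch.Tensor) -> torch.Tensor:
    """Single-head sketch: attend from the latest prefix embedding (the query)
    over all prefix embeddings (keys/values).
    e_prefix: (j, d) embeddings of w_0 .. w_{j-1}; returns h_{j-1} of shape (d,)."""
    q = Wq @ e_prefix[-1]          # query from the current position, shape (d,)
    K = e_prefix @ Wk.T            # keys for every prefix position, shape (j, d)
    V = e_prefix @ Wv.T            # values (assumed by analogy), shape (j, d)
    scores = K @ q / math.sqrt(q.shape[0])
    weights = torch.softmax(scores, dim=0)
    return weights @ V             # weighted sum over the prefix

# Toy usage with d = 8 and a prefix of length j = 4.
d, j = 8, 4
e = torch.randn(j, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
print(prefix_attention_step(e, Wq, Wk, Wv).shape)  # torch.Size([8])
```

Because only the prefix is attended over, this step plays the role of the single (decoder-side) multi-head attention layer described above; the encoder context $\tilde{\mathbf{d}}_k$ would enter at a later point in the decoder, which is not covered by this sketch.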